Dataset fields:
  entry_id          string (33-33 chars)
  published         string (14-14 chars)
  title             string (15-199 chars)
  authors           sequence
  primary_category  string (5-18 chars)
  categories        sequence
  text              string (1-461k chars)
http://arxiv.org/abs/2307.02259v1
20230703172948
Did we hear the sound of the Universe boiling? Analysis using the full fluid velocity profiles and NANOGrav 15-year data
[ "Tathagata Ghosh", "Anish Ghoshal", "Huai-Ke Guo", "Fazlollah Hajkarim", "Stephen F King", "Kuver Sinha", "Xin Wang", "Graham White" ]
astro-ph.HE
[ "astro-ph.HE", "hep-ph" ]
[[email protected]]... Harish-Chandra Research Institute, A CI of Homi Bhabha National Institute, Chhatnag Road, Jhusi, Prayagraj 211019, India
[[email protected]]... Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, ul. Pasteura 5, 02-093 Warsaw, Poland
[[email protected]]... International Centre for Theoretical Physics Asia-Pacific, University of Chinese Academy of Sciences, 100190 Beijing, China
[[email protected]]... Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019, USA
[[email protected]]... School of Physics and Astronomy, University of Southampton, Southampton SO17 1BJ, United Kingdom
[[email protected]]... Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019, USA
[[email protected]]... School of Physics and Astronomy, University of Southampton, Southampton SO17 1BJ, United Kingdom
[[email protected]]... School of Physics and Astronomy, University of Southampton, Southampton SO17 1BJ, United Kingdom

In this paper, we analyse sound waves arising from a cosmic phase transition, taking the full velocity profile into account, as an explanation for the gravitational wave spectrum observed by multiple pulsar timing array groups. Unlike the broken power law used in the literature, in this scenario the power law after the peak depends on the macroscopic properties of the phase transition, allowing for a better fit with pulsar timing array (PTA) data. We compare the best fit with that obtained using the usual broken power law and, unsurprisingly, find a better fit with the gravitational wave (GW) spectrum that utilizes the full velocity profile. We then discuss models that can produce the best-fit point, as well as complementary probes using CMB experiments and searches for light particles at DUNE, IceCube-Gen2, neutrinoless double β-decay experiments, and forward physics facilities (FPF) at the LHC such as FASERν.

Did we hear the sound of the Universe boiling? Analysis using the full fluid velocity profiles and NANOGrav 15-year data
Graham White
August 1, 2023
==========================================================================================================================

§ INTRODUCTION

It has been known for some time that Pulsar Timing Array (PTA) experiments can be used to detect gravitational waves (GWs) <cit.>. This is possible by studying the timing distortions of successive light pulses emitted by millisecond pulsars, which are extremely stable clocks. The PTAs search for spatially correlated fluctuations in the pulse arrival time measurements of such pulsars, due to GWs perturbing the space-time metric along the line of sight to each pulsar. For GWs, the timing distortions should exhibit the angular dependence expected for an isotropic background of spin-2 GWs, which enables them to be distinguished from spin-zero or spin-one waves and from other effects, following the work of Hellings and Downs <cit.>. Recently, several PTA projects have reported the discovery of a stochastic gravitational wave background (SGWB). In particular, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) <cit.>, the European PTA <cit.>, the Parkes PTA <cit.> and the Chinese CPTA <cit.> have all released results consistent with a Hellings-Downs pattern in the angular correlations, which is characteristic of an SGWB. The strongest statistical evidence for the SGWB is seen in the NANOGrav 15-year data (NANOGrav15) <cit.>.
This is the first discovery of GWs at frequencies around 10^-8 Hz, corresponding to wavelengths of around 10 light years. The most obvious origin of such an SGWB is merging supermassive black hole binaries (SMBHBs) resulting from the collision of two galaxies, each hosting an SMBH with a mass in the range 10^8-10^9 solar masses at its centre <cit.>. The expected amplitude has an order of magnitude uncertainty depending on the density, redshift, and other properties of SMBH sources. Indeed, there may be millions of such sources contributing to the SGWB. However, the current data does not allow individual SMBH binary sources to be identified, so it is unclear whether the observed SGWB has an astrophysical or cosmological origin <cit.>. For example, a cosmological SGWB could be due to first-order phase transitions <cit.>, cosmic strings <cit.>, domain walls <cit.>, or scalar-induced gravitational waves (SIGWs) generated from primordial fluctuations <cit.>. Such possibilities represent new physics beyond the standard model (BSM), and it would be interesting to know how such alternative scenarios could be distinguished. One characteristic feature is the shape of the spectrum in the recent data, which, unlike the previous results, seems to be blue-tilted <cit.>. The analysis of the NANOGrav 12.5-year data release suggested a nearly flat GW spectrum as a function of frequency (f), Ω_GW ∝ f^(-1.5, 0.5) at one sigma, in a narrow range of frequencies around 5.5 nHz <cit.>. By contrast, the recent 15-year data release finds a steeper slope, Ω_GW ∝ f^(1.3, 2.4) at one sigma <cit.>. The naive scaling predicted for GWs from SMBH binaries is disfavoured by the latest NANOGrav data, although environmental and statistical effects can lead to different predictions <cit.>.

Motivated by the above considerations, new analyses are necessary to explore which SGWB formation mechanisms can lead to the generation of a signal consistent with these updated observations. Indeed, following the recent announcements, several papers have appeared which address some of these issues <cit.>. In this paper, we consider the sound waves arising from a cosmic phase transition where the full velocity profile is taken into account. We compare the best fit with that obtained using the usual broken power law and find a better fit to NANOGrav data using the full velocity profile. We first explain how to obtain this result before discussing some models that can produce such thermal parameters. Finally, we discuss complementary probes of hidden sectors.

§ PTA DATA AND THE SOUND SHELL MODEL

Multiple PTA collaborations have observed strong evidence for a gravitational wave spectrum, with NANOGrav and EPTA giving the best fit for a power-law spectrum parametrized as

Ω_GW(f) = 8 π^4 f^5 Φ(f) / (H_0^2 Δf),  with  Φ(f) = [A^2 / (12 π^2 T_obs)] (f / yr^-1)^(-γ) yr^3,

where Δf = 1/T_obs and H_0 = h × 100 km s^-1 Mpc^-1 is the current value of the Hubble rate. The best-fit values of the parameters A and γ in Eq. <ref> are

γ = 3.2 ± 0.6 (NANOGrav),  3.1^+0.77_-0.68 (EPTA),
A = 6.4^+4.2_-2.7 × 10^-15 (NANOGrav),  10^(-14.13 ± 0.12) (EPTA).

While inspiralling SMBHBs provide the standard astrophysical explanation for the signal, a first-order phase transition (FOPT) at the 𝒪(MeV) scale is an intriguing alternative. In this Section, we model the FOPT with the sound shell model <cit.>, obtain the corresponding GW spectrum, and compare our results with the fit performed by the NANOGrav Collaboration.
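As a point of reference, the power-law fit above can be evaluated numerically as follows; this is a minimal sketch, with h = 0.68 and the frequency grid chosen purely for illustration (neither is part of the fit itself):

```python
import numpy as np

YR_S = 365.25 * 24 * 3600.0          # one year in seconds
F_YR = 1.0 / YR_S                    # reference frequency yr^-1 in Hz
H0 = 0.68 * 100 * 1e3 / 3.0857e22    # Hubble rate in s^-1, assuming h = 0.68

def omega_gw_power_law(f, A, gamma):
    """Omega_GW(f) = 8 pi^4 f^5 Phi(f) / (H0^2 Delta f) with
    Phi(f) = A^2 / (12 pi^2 T_obs) (f/yr^-1)^(-gamma) yr^3, so T_obs drops out."""
    return (2.0 * np.pi**2 / 3.0) * A**2 * f**5 * (f / F_YR)**(-gamma) * YR_S**3 / H0**2

f = np.logspace(-9, -7, 100)                                 # PTA band in Hz
omega_ng = omega_gw_power_law(f, A=6.4e-15, gamma=3.2)       # NANOGrav best fit
omega_epta = omega_gw_power_law(f, A=10**-14.13, gamma=3.1)  # EPTA best fit
```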
The GW spectrum from a FOPT is characterized by the following parameters: the nucleation temperature T_n, the strength of the FOPT α_n, the average separation of bubbles R_n, which can be related to the bubble nucleation rate β, and the bubble wall velocity v_w. The fit frequently appearing in the literature, and in particular in the recent analysis of the NANOGrav paper, describes a single broken power law of the form <cit.>

h^2 Ω_GW = 8.5 × 10^-6 (100 / g_*(T_e))^(1/3) Γ^2 U̅_f^4 [H_s / β(v_w)] v_w × Υ × S(f),

where U̅_f is the root mean square fluid velocity, Γ ∼ 4/3 is the adiabatic index and Υ is the suppression factor arising from the finite lifetime <cit.> (τ_sh) of the sound waves <cit.>,

Υ = 1 - 1/√(1 + 2 τ_sh H_s).

Finally, the spectral form has the shape

S(f) = (f/f_p)^3 [7 / (4 + 3 (f/f_p)^2)]^(7/2),

where f_p is the peak frequency, given by

f_p = 8.9 × 10^-6 (1/v_w) (β/H_e) (z_p/10) (T_e/100 GeV) (g_*(T_e)/100)^(1/6) Hz.

However, a full calculation of the sound shell model shows qualitative deviations from this curve <cit.>, with a double broken power law providing a better fit. Most important for our interests is the fact that the power law after the peak depends on the strength of the phase transition and the bubble wall velocity <cit.>. A more optimistic scenario was studied in <cit.>, where the power law on either side of the peak was treated as a free parameter. In this work, we will perform a full calculation of the sound shell model to take advantage of this flexibility in the peak of the spectrum. Note that in the sound shell model, keeping the force term between the bubble wall and the plasma active for longer can modify the shape in the infrared <cit.>.

We will perform a scan over the space of thermal parameters, (α_n, T_n, v_w, β/H_n), to find the best fit to the NANOGrav data (which has been released in full, including uncertainties). The scans are performed over the following ranges: nucleation temperature 3 MeV < T_n < 150 MeV, bubble wall velocity 0 < v_w < 1, phase transition strength 0 < α_n < 1, and the rate of bubble formation relative to the expansion rate 0 < β/H < 100. Since the relevant ranges of temperature and frequency are around the quark-gluon confinement regime near 150 MeV, for g_*(T_n) we account for the evolution of the degrees of freedom of the energy density of the thermal bath of SM particles at the nucleation temperature <cit.>. To find the best-fit point we use the following figure of merit,

χ_fit^2 = ∑_i=1^N (log_10 Ω_th h^2 - log_10 Ω_exp h^2)^2 / (2 σ̅_i^2),

where Ω_th h^2 and Ω_exp h^2 represent the GW relic abundance from the theoretical prediction of the FOPT and the experimental value from the PTA, respectively. Note that we ignore the width in the uncertainty regions, taking the midpoint and fitting to the vertical width. That is, σ̅_i in the above equation is the distance from the midpoint value of log_10 h^2 Ω_GW for each uncertainty region to its top or bottom. In Fig. <ref> we display the results of our scan for data corresponding to NANOGrav. Using the data of NANOGrav we obtain the following values for the best-fit point: v_w ≃ 0.09, α_n ≃ 0.85, T_n ≃ 132.95 MeV and β/H ≃ 42.02.

§ BSM SCENARIOS AND COMPLEMENTARY LABORATORY PROBES

We are somewhat spoilt for choice in models that can produce a strong first-order phase transition at roughly the QCD scale. The very large strength of the transition lends credence to solitosynthesis as a possible explanation <cit.>, as this mechanism typically leads to a stronger transition than conventional nucleation.
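Before turning to model building, the following minimal sketch pulls together the single broken power law and the χ²_fit figure of merit quoted in the previous section (the full velocity-profile calculation behind our headline result is not reproduced here); the mean-square fluid velocity U̅_f² and the lifetime combination τ_sh H_s are treated as inputs, and z_p = 10 is assumed:

```python
import numpy as np

def spectral_shape(f, f_p):
    """S(f) = (f/f_p)^3 [7 / (4 + 3 (f/f_p)^2)]^(7/2)."""
    x = f / f_p
    return x**3 * (7.0 / (4.0 + 3.0 * x**2))**3.5

def peak_frequency(v_w, beta_over_H, T_GeV, g_star, z_p=10.0):
    """Peak frequency in Hz, following the fit quoted above."""
    return (8.9e-6 / v_w) * beta_over_H * (z_p / 10.0) * (T_GeV / 100.0) \
        * (g_star / 100.0)**(1.0 / 6.0)

def h2_omega_sw(f, v_w, beta_over_H, T_GeV, g_star, Uf2, tau_sh_H, Gamma=4.0 / 3.0):
    """Single-broken-power-law sound-wave spectrum; Uf2 is the mean-square fluid velocity."""
    upsilon = 1.0 - 1.0 / np.sqrt(1.0 + 2.0 * tau_sh_H)        # finite-lifetime suppression
    amp = 8.5e-6 * (100.0 / g_star)**(1.0 / 3.0) * Gamma**2 * Uf2**2 \
        * (1.0 / beta_over_H) * v_w
    f_p = peak_frequency(v_w, beta_over_H, T_GeV, g_star)
    return amp * upsilon * spectral_shape(f, f_p)

def chi2_fit(log10_omega_th, log10_omega_exp, sigma):
    """Figure of merit: sum over bins of the squared log10 residual over 2 sigma_i^2."""
    return np.sum((log10_omega_th - log10_omega_exp)**2 / (2.0 * sigma**2))
```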
The low wall velocity, however, favours a model that predicts a large amount of friction, such as perhaps a SIMP model, which can contain particles with large multiplicities <cit.>. Quite a few other dark sector phase transitions have been considered in this temperature range, see for instance <cit.>. Of course, while the QCD phase transition is a crossover in the standard model at low density, a high lepton asymmetry or a different number of light quarks can change this picture <cit.>. We focus here on dark sectors that offer the prospect of complementary probes in searches for long-lived particles; a full model survey is left for future work.

Let us now briefly discuss model-independent constraints on a MeV-scale FOPT in the dark or hidden sector. During a FOPT, the vacuum energy contained in the false vacuum gets released, and a part of it goes into reheating the photons or neutrinos in the plasma. The released energy may also end up heating relativistic particles in the dark sector. If the reheating of the SM particles happens at around or after the thermal decoupling of neutrinos and photons, either or both of their temperatures will differ from the predictions of standard cosmology. This will change the relativistic degrees of freedom, N_eff, which is strongly constrained by Big Bang Nucleosynthesis (BBN) and the Cosmic Microwave Background (CMB). The abundances of light elements will also be modified and offer further bounds. N_eff measurements severely constrain the dark sector reheating scenario as well. While our best-fit point has a percolation temperature well above the scale at which we need to be concerned with BBN constraints, there are some points that agree well with the NANOGrav data and have a much lower percolation temperature.

We first consider the case where the dark radiation is reheated. In the approximation T_n^D ≪ T_n^γ, one can show that α_n < 0.08 for T_n^γ ∼ 𝒪(MeV) <cit.>. Ref. <cit.> also derived model-independent constraints on the phase transition temperature (T_n) and strength parameter (α_n) from N_eff and the helium and primordial deuterium abundance ratios (Y_P and D/H|_P, respectively) measured by CMB and BBN experiments when the FOPT heats the SM particles. For illustration, we discuss here the neutrino reheating scenario, since the portal operator that can induce it (and the associated phenomenology) is relatively well studied <cit.>. Using the BBN data from the PDG <cit.> and the CMB data from the latest Planck results <cit.>, Ref. <cit.> shows that the neutrino reheating temperature, T^ν_rh, has to be greater than ∼ 3 MeV for α_n > 0.1. A future CMB experiment like CMB-S4 <cit.> will improve the bounds to T^ν_rh ≳ 4 MeV. One can translate the above bounds on T^ν_rh to the phase transition temperature T_n by using the formula

T^ν_rh = [1 + α_n + α_n (g_*^γ(t_rh)/g_*^ν(t_rh)) (T_n^γ/T_n^ν)^4]^(1/4) T_n^ν,

where for reheating temperatures above an MeV, g_*^γ(t_rh) ≈ 11/2 and g_*^ν(t_rh) ≈ 21/4. Thus, one can conclude that the existing CMB and BBN data place an almost flat constraint of T_n^ν ≳ 2 MeV for α_n > 0.1, as shown by the blue line in Fig. <ref>. The bound on T_n from the photon reheating case is almost the same but extends to slightly smaller values of α_n <cit.>. Finally, we provide a brief discussion of the interactions between the SM neutrinos and the dark sector scalar that is responsible for the FOPT under discussion. We should clarify that the reheating has to be instantaneous for the above constraints to be applicable.
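As a quick numerical aid for the translation above (the delayed-reheating caveat discussed next still applies), a sketch of the reheating-temperature formula; the default g_* values hold for reheating temperatures above an MeV:

```python
def T_nu_reheat(T_nu_n, alpha_n, T_gamma_over_T_nu=1.0,
                g_star_gamma=11.0 / 2.0, g_star_nu=21.0 / 4.0):
    """Neutrino reheating temperature T^nu_rh from the phase transition temperature
    T^nu_n (same units in, same units out) and the transition strength alpha_n."""
    boost = 1.0 + alpha_n + alpha_n * (g_star_gamma / g_star_nu) * T_gamma_over_T_nu**4
    return boost**0.25 * T_nu_n

# e.g. an alpha_n = 0.85 transition with T^nu_n = 2 MeV reheats the neutrinos to ~2.8 MeV
print(T_nu_reheat(2.0, 0.85))
```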
For delayed reheating, the constraints on the FOPT are expected to be much stronger <cit.>. The neutrino reheating can happen from the decay of a dark scalar ϕ to a pair of neutrinos via the dimension-6 effective operator 𝒪 = λ_αβ (L^T_α i σ_2 H) (H^T i σ_2 L_β) ϕ/Λ^2, where L and H are the SM lepton and Higgs doublets, respectively. After electroweak symmetry breaking, the above operator generates an interaction term g_αβ ν ν ϕ, where α, β = e, μ, τ and g_αβ = λ_αβ v^2/Λ^2. Significant bounds already exist on the g_αβ-m_ϕ plane from laboratory measurements such as meson decay spectra <cit.>, neutrinoless double β-decay <cit.>, and invisible Z, SM Higgs, or τ decays <cit.>. These couplings are also of significant interest for upcoming experiments such as DUNE <cit.>, IceCube-Gen2 <cit.>, and forward physics facilities (FPF) <cit.> at the LHC. We show a subset of these existing and projected bounds on the g_αβ-m_ϕ plane in Fig. <ref>. For the detailed phenomenology of these couplings at various terrestrial and celestial experiments we refer the interested reader to a recent review on this topic <cit.>. As far as the UV completion of the effective operator of Eq. <ref> is concerned, the most canonical models that can provide it are the massive Majoron models <cit.>. In addition, the generation of this effective operator from an inverse seesaw model <cit.> and a U(1)_B-L model <cit.> has been considered in the literature.

§ DISCUSSIONS AND CONCLUSIONS

In this paper we have taken an in-depth look at sound-wave-induced gravitational waves from a strong first-order cosmic phase transition as a possible explanation for the recent signal at multiple pulsar timing arrays. In particular, we have examined how much including the full velocity profile, rather than using a broken power-law fit, improves the agreement with the data. The best-fit parameters also look somewhat more realistic than what can be achieved via the broken power law, with the time scale of the phase transition being a smaller fraction of the Hubble time. We of course emphasize the caveat that understanding the spectrum from sound shell models is still in a state of flux. Reheating can suppress the nucleation rate, enhancing the spectrum <cit.>. On the other hand, energy lost to vorticity can suppress the spectrum <cit.>. We leave a detailed analysis of this to future work.

We then took a brief look at dark sector models that could be responsible for such a phase transition. We showed that, for phase transitions occurring at low temperatures, the cosmological constraints from BBN and Planck data, together with the future sensitivities of CMB experiments such as CMB-S4, CMB-HD, CMB-Bharat, and LiteBIRD, will be complementary to gravitational wave detectors in probing the phase transition parameter space. This complementary approach of probing phase transitions with GW detectors as well as CMB experiments paves the way to distinguishing between the SMBHB and phase transition explanations of the observed gravitational waves. Furthermore, we showed that once we fix an operator that determines the interactions between the SM sector and the invisible sector (Eq. 11), one is able to search for the mediator responsible for those interactions. We also discussed possible UV-complete neutrino mass models that can give rise to such low-scale phase transitions and to the GWs from sound waves measured in PTA data; however, a detailed analysis involving a complete UV model is beyond the scope of the current paper and will be taken up in a future publication.
We envisage that the precision measurements that GW cosmology and GW astronomy offer us, from current data and from the planned worldwide network of GW detectors, will make the dream of testing particle physics and fundamental BSM scenarios a reality in the very near future.

Acknowledgments

The work of T.G. is supported by the funding available from the Department of Atomic Energy (DAE), Government of India for Harish-Chandra Research Institute. A.G. thanks the University of Pisa for its hospitality during the ongoing work. SFK acknowledges the STFC Consolidated Grant ST/L000296/1 and the European Union's Horizon 2020 Research and Innovation programme under Marie Sklodowska-Curie grant agreement HIDDeN European ITN project (H2020-MSCA-ITN-2019//860881-HIDDeN). KS is supported by the U. S. Department of Energy grant DE-SC0009956. XW acknowledges the Royal Society as the funding source of the Newton International Fellowship.
http://arxiv.org/abs/2307.01753v1
20230704144923
Local primordial non-Gaussianity from the large-scale clustering of photometric DESI luminous red galaxies
[ "Mehdi Rezaie", "Ashley J. Ross", "Hee-Jong Seo", "Hui Kong", "Anna Porredon", "Lado Samushia", "Edmond Chaussidon", "Alex Krolewski", "Arnaud de Mattia", "Florian Beutler", "Jessica Nicole Aguilar", "Steven Ahlen", "Shadab Alam", "Santiago Avila", "Benedict Bahr-Kalus", "Jose Bermejo-Climent", "David Brooks", "Todd Claybaugh", "Shaun Cole", "Kyle Dawson", "Axel de la Macorra", "Peter Doel", "Andreu Font-Ribera", "Jaime E. Forero-Romero", "Satya Gontcho A Gontcho", "Julien Guy", "Klaus Honscheid", "Theodore Kisner", "Martin Landriau", "Michael Levi", "Marc Manera", "Aaron Meisner", "Ramon Miquel", "Eva-Maria Mueller", "Adam Myers", "Jeffrey A. Newman", "Jundan Nie", "Nathalie Palanque-Delabrouille", "Will Percival", "Claire Poppett", "Graziano Rossi", "Eusebio Sanchez", "Michael Schubnell", "Gregory Tarlé", "Benjamin Alan Weaver", "Christophe Yèche", "Zhimin Zhou", "Hu Zou" ]
astro-ph.CO
[ "astro-ph.CO", "cs.LG", "physics.comp-ph", "physics.data-an" ]
We use the angular clustering of luminous red galaxies from the Dark Energy Spectroscopic Instrument (DESI) imaging surveys to constrain the local primordial non-Gaussianity parameter f_NL. Our sample comprises over 12 million targets, covering 14,000 square degrees of the sky, with redshifts in the range 0.2 < z < 1.35. We identify Galactic extinction, survey depth, and astronomical seeing as the primary sources of systematic error, and employ linear regression and artificial neural networks to alleviate non-cosmological excess clustering on large scales. Our methods are tested against log-normal simulations with and without f_NL and systematics, showing superior performance of the neural network treatment in reducing remaining systematics. Assuming the universality relation, we find f_NL = 47^+14(+29)_-11(-22) at 68%(95%) confidence. With a more aggressive treatment, including regression against the full set of imaging maps, our maximum likelihood value shifts slightly to f_NL ∼ 50 and the uncertainty on f_NL increases due to the removal of large-scale clustering information. We apply a series of robustness tests (e.g., cuts on imaging, declination, or scales used) that show consistency in the obtained constraints. Despite extensive efforts to mitigate systematics, our measurements indicate f_NL > 0 with a 99.9 percent confidence level. This outcome raises concerns, as it could be attributed to unforeseen systematics, including calibration errors or uncertainties associated with low-ℓ systematics in the extinction template. Alternatively, it could suggest a scale-dependent f_NL model, causing significant non-Gaussianity around large-scale structure while leaving cosmic microwave background scales unaffected. Our results encourage further studies of f_NL with DESI spectroscopic samples, where the inclusion of 3D clustering modes should help separate imaging systematics.

cosmology: inflation - large-scale structure of the Universe

§ INTRODUCTION

Inflation is a widely accepted paradigm in modern cosmology that explains many important characteristics of our Universe. It predicts that the early Universe underwent a period of accelerated expansion, resulting in the observed homogeneity and isotropy of the Universe on large scales <cit.>. After the period of inflation, the Universe entered a phase of reheating in which primordial perturbations were generated, setting the initial seeds for structure formation <cit.>. Although inflation is widely accepted as a compelling explanation, the characteristics of the field or fields that drove the inflationary expansion remain largely unknown in cosmology. While early studies of the cosmic microwave background (CMB) and large-scale structure (LSS) suggested that primordial fluctuations are both Gaussian and scale-invariant <cit.>, some alternative classes of inflationary models predict different levels of non-Gaussianities in the primordial gravitational field. Non-Gaussianities are a measure of the degree to which the distribution of matter in the Universe deviates from a Gaussian distribution, which would have important implications for the growth of structure and galaxies in the Universe <cit.>.
In its simplest form, local primordial non-Gaussianity (PNG) is parameterized by the non-linear coupling constant f_NL <cit.>: Φ = ϕ + f_NL [ϕ^2 - ⟨ϕ^2⟩], where Φ is the primordial curvature perturbation and ϕ is assumed to be a Gaussian random field. Local-type PNG generates a primordial bispectrum, which peaks in the squeezed triangle configuration where one of the three wave vectors is much smaller than the other two. This means that one of the modes is on a much larger scale than the other two, and this mode couples with the other two modes to generate a non-Gaussian signal, which then affects the local number density of galaxies. The coupling between the short and long wavelengths induces a distinct bias in the galaxy distribution, which leads to a k^-2-dependent feature in the two-point clustering of galaxies and quasars <cit.>.

Obtaining reliable, accurate, and robust constraints on f_NL is crucial in advancing our understanding of the dynamics of the early Universe. For instance, the standard single-field slow-roll inflationary model predicts a small value of f_NL ∼ 0.01 <cit.>. On the other hand, some alternative inflationary scenarios involve multiple scalar fields that can interact with each other during inflation, leading to the generation of larger levels of non-Gaussianities. These models predict considerably larger values of f_NL that can reach up to 100 or higher <cit.>. With σ(f_NL) ∼ 1, we can rule out or confirm specific models of inflation and gain insight into the physics that drove the inflationary expansion <cit.>. The current tightest bound on f_NL comes from Planck's bispectrum measurement of CMB anisotropies, f_NL = 0.9 ± 5.1 <cit.>. Limited by cosmic variance, CMB data cannot enhance the statistical precision of f_NL measurements enough to break the degeneracy amongst various inflationary paradigms <cit.>. On the other hand, LSS surveys probe a 3D map of the Universe, and thus provide more modes to limit f_NL. However, nonlinearities arising from structure formation pose a serious challenge for measuring f_NL with the three-point clustering of galaxies, and these nonlinear effects are non-trivial to model and disentangle from the primordial signal <cit.>. Currently, the most precise constraints on f_NL from LSS reach a level of σ(f_NL) ∼ 20-30, with the majority of the constraining power coming from the two-point clustering statistics that utilize the scale-dependent bias effect <cit.>. Surveying large areas of the sky can unlock more modes and help improve these constraints.

The Dark Energy Spectroscopic Instrument (DESI) is ideally suited to enable excellent constraints on primordial non-Gaussianity from the galaxy distribution. DESI uses 5000 robotically-driven fibers to simultaneously collect spectra of extra-galactic objects <cit.>. DESI is designed to deliver an unparalleled volume of spectroscopic data covering ∼ 14,000 square degrees that promises to deepen our understanding of the energy contents of the Universe, neutrino masses, and the nature of gravity <cit.>. Moreover, DESI alone is expected to improve our constraints on local PNG down to σ(f_NL) = 5, assuming systematic uncertainties are under control <cit.>. With multi-tracer techniques <cit.>, cosmic variance can be further reduced to allow surpassing CMB-like constraints <cit.>. For instance, the distortion of CMB photons around foreground masses, which is referred to as CMB lensing, provides an additional probe of LSS, but from a different vantage point.
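As a toy illustration of the local-PNG map defined at the start of this section (applied here to an uncorrelated Gaussian field purely for demonstration; in practice the transformation acts on the correlated primordial potential):

```python
import numpy as np

def local_png(phi, fnl):
    """Phi = phi + f_NL (phi^2 - <phi^2>) applied to a Gaussian random field phi."""
    return phi + fnl * (phi**2 - np.mean(phi**2))

rng = np.random.default_rng(0)
phi = 1e-5 * rng.standard_normal(10**6)   # potential fluctuations of order 1e-5
Phi = local_png(phi, fnl=50.0)            # non-Gaussian field for f_NL = 50
```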
We can significantly reduce statistical uncertainties below σ()∼ 1 by cross-correlating LSS data with CMB-lensing, or other tracers of matter, such as 21 cm intensity mapping <cit.>. However, further work is needed to fully harness the potential of the scale-dependent bias effect in constraining with LSS. The amplitude of the signal in the galaxy distribution is proportional to the bias parameter b_ϕ, such that Δ b ∝ b_ϕ k^-2. Assuming the universality relation, b_ϕ∼ (b - p), where b is the linear halo bias and p=1 is a parameter that describes the response of galaxy formation to primordial potential perturbations in the presence of local PNG <cit.>. The value of p is not very well constrained for other tracers of matter <cit.>, and <cit.> showed that marginalizing over p even with wide priors leads to biased constraints because of parameter space projection effects. More simulation-based studies are necessary to investigate the halo-assembly bias and the relationship between b_ϕ and b for various galaxy samples. For instance, <cit.> used N-body simulations to investigate secondary halo properties, such as concentration, spin and sphericity of haloes, and found that halo spin and sphericity preserve the universality of the halo occupation function while halo concentration significantly alters the halo function. Without better-informed priors on p, it is argued that the scale-dependent bias effect can only be used to constrain the b_ϕ term <cit.>. Nevertheless, the detection significance of local PNG remains unaffected by various assumptions regarding p. This means that a nonzero detection of b_ϕ at a certain confidence level will still indicate a nonzero detection of at that same confidence level. In this work, we assume the relation that links b_ϕ to b-p and, further, fix the value of p. In addition to the theoretical uncertainties, measuring through the scale-dependent bias effect is a difficult task due to various imaging systematic effects that can modulate the galaxy power spectrum on large scales. The imaging systematic effects often induce wide-angle variations in the density field, and in general, any large-scale variations can translate into an excess signal in the power spectrum <cit.>, that can be misinterpreted as the signature of non-zero local PNG <cit.>. Such spurious variations can be caused by Galactic foregrounds, such as dust extinction and stellar density, or varying imaging conditions, such as astrophysical seeing and survey depth <cit.>. The imaging systematic issues have made it challenging to accurately measure , as demonstrated in previous efforts to constrain it using the large-scale clustering of galaxies and quasars <cit.>, and it is anticipated that they will be particularly problematic for wide-area galaxy surveys that observe regions of the night sky closer to the Galactic plane and that seek to incorporate more lenient selection criteria to accommodate fainter galaxies <cit.>. The primary objective of this paper is to utilize the scale-dependent bias signature in the angular power spectrum of galaxies selected from DESI imaging data to constrain the value of . With an emphasis on a careful treatment of imaging systematic effects, we aim to lay the groundwork for subsequent studies of local PNG with DESI spectroscopy. To prepare our sample for measuring such a subtle signal, we employ linear multivariate regression and artificial neural networks to mitigate spurious density fluctuations and ameliorate the excess clustering power caused by imaging systematics. 
We thoroughly investigate potential sources of systematic error, including survey depth, astronomical seeing, photometric calibration, Galactic extinction, and local stellar density. Our methods and results are validated against simulations, with and without imaging systematics. This paper is structured as follows. Section <ref> describes the galaxy sample from DESI imaging and lognormal simulations with, or without, PNG and synthetic systematic effects. Section <ref> outlines the theoretical framework for modelling the angular power spectrum, strategies for handling various observational and theoretical systematic effects, and statistical techniques for measuring the significance of remaining systematics in our sample after mitigation. Our results are presented in Section <ref>, and Section <ref> summarizes our conclusions and directions for future work. § DATA Luminous red galaxies (LRGs) are massive galaxies that populate massive haloes, lack active star formation, and are highly biased tracers of the dark matter gravitational field <cit.>. A distinct break around 4000 Å in the LRG spectrum is often utilized to determine their redshifts accurately. LRGs are widely targeted in previous galaxy redshift surveys <cit.>, and their clustering and redshift properties are well studied <cit.>. DESI is designed to collect spectra of millions of LRGs covering the redshift range 0.2<z<1.35. DESI selects its targets for spectroscopy from the DESI Legacy Imaging Surveys, which consist of three ground-based surveys that provide photometry of the sky in the optical g, r, and z bands. These surveys include the Mayall z-band Legacy Survey using the Mayall telescope at Kitt Peak <cit.>, the Beijing–Arizona Sky Survey using the Bok telescope at Kitt Peak <cit.>, and the Dark Energy Camera Legacy Survey on the Blanco 4m telescope <cit.>. As shown in Figure <ref>, the BASS and MzLS programmes observed the same footprint in the North Galactic Cap (NGC) while the DECaLS programme observed both caps around the galactic plane; the BASS+MzLS footprint is separated from the DECaLS NGC at DEC > 32.375 degrees, although there is an overlap between the two regions for calibration purposes <cit.>. Additionally, the DECaLS programme integrates observations executed from the Blanco instrument under the Dark Energy Survey <cit.>, which cover about 1130 ^2 of the South Galactic Cap (SGC) footprint. The DESI imaging catalogues also integrate the 3.4 (W1) and 4.6 μ m (W2) infrared photometry from the Wide-Field Infrared Explorer <cit.>. §.§ DESI imaging LRGs Our sample of LRGs is drawn from the DESI Legacy Imaging Surveys Data Release 9 <cit.> using the color-magnitude selection criteria designed for the DESI 1% survey <cit.>, described as the Survey Validation 3 (SV3) selection in more detail in <cit.>. The color-magnitude selection cuts are defined in the g, r, z bands in the optical and W1 band in the infrared, as summarized in Table <ref>. The selection cuts vary for each imaging survey, but they are designed to achieve a nearly consistent density of approximately 800 galaxies per square degree across a total area of roughly 14,000 square degrees. Table <ref> summarizes the mean galaxy density and area for each region. This is accomplished despite variations in survey efficiency and photometric calibration between DECaLS and BASS+MzLS. The implementation of these selection cuts in the DESI data processing pipeline is explained in <cit.>. 
The redshift distribution of our galaxy sample are inferred respectively from DESI spectroscopy during the Survey Validation phase <cit.>, and is shown via the solid curve in Figure <ref>. <cit.> analyzed the DESI LRG targets and found that the redshift evolution of the linear bias for these targets is consistent with a constant clustering amplitude and varies via 1/D(z), where D(z) is the growth factor (as illustrated by the dashed red line in Figure <ref>). The LRG sample is masked rigorously for foreground bright stars, bright galaxies, and clusters of galaxies[See <https://www.legacysurvey.org/dr9/bitmasks/> for maskbit definitions.] to further reduce stellar contamination <cit.>. Then, the sample is binned into HEALPix <cit.> pixels at nside=256, corresponding to pixels of about 0.25 degrees on a side, to construct the 2D density map (as shown in the top panel of Figure <ref>). The LRG density is corrected for the pixel incompleteness and lost areas using a catalogue of random points, hereafter referred to as randoms, uniformly scattered over the footprint with the same cuts and masks applied. Moreover, the density of galaxies is matched to the randoms separately for each of the three data sections (BASS+MzLS, DECaLS North / South) so the mean density differences are mitigated (see Table <ref>). The DESI LRG targets are selected brighter than the imaging survey depth limits, e.g., g=24.4, r=23.8,  and z=22.9 for the median 5σ detection in AB mag in the DECaLS North region (Table <ref>); and thus the LRG density map does not exhibit severe spurious fluctuations. §.§.§ Imaging systematic maps The effects of observational systematics in the DESI targets have been studied in great detail <cit.>. <cit.> has previously identified nine astrophysical properties as potential sources of imaging systematic errors in the DESI LRG targets. These imaging properties are mapped into HEALPix of nside=256. As illustrated by the 3x3 grid in the bottom panel of Figure <ref>, the maps include local stellar density constructed from point-like sources with a G-band magnitude in the range 12 ≤ G < 17 from the Gaia DR2 <cit.>; Galactic extinction E[B-V] from <cit.>; survey depth (galaxy depth in g, r, and z and PSF depth in W1) and astronomical seeing (i.e., point spread function, or psfsize) in g, r, and z. The depth maps have been corrected for extinction using the coefficients adapted from <cit.>. Table <ref> summarizes the median values for the imaging properties in each region. In addition to these nine maps, we consider two external maps for the neutral hydrogen column density (HI) from <cit.> and photometric calibration in the z-band (CALIBZ) from <cit.> to further test the robustness of our analysis against unknown systematics. The fluctuations in each imaging map are unique and tend to be correlated with the LRG density map. For instance, large-scale LRG density fluctuations could be caused by stellar density, extinction, or survey depth; while small scale-fluctuations could be caused by psfsize variations. Some regions of the DR9 footprint are removed from our analysis to avoid potential photometric calibration issues. These regions are either disconnected from the main footprint (e.g., the islands in the NGC with DEC <-10) or calibrated using different catalogues of standard stars (e.g., DEC <-30 in the SGC). The potential impact of not imposing these declination cuts on the LRG sample and our constraints is explored in Section <ref>. 
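A short sketch of the map-making step described above, assuming galaxy and random catalogues with RA/Dec columns that have already been cut and masked; nside = 256 as in the text, and the completeness normalisation is simplified relative to the full pipeline:

```python
import numpy as np
import healpy as hp

NSIDE = 256

def density_map(ra_gal, dec_gal, ra_ran, dec_ran):
    """HEALPix map of LRG counts, corrected for pixel incompleteness with randoms."""
    npix = hp.nside2npix(NSIDE)
    gal = np.bincount(hp.ang2pix(NSIDE, ra_gal, dec_gal, lonlat=True), minlength=npix)
    ran = np.bincount(hp.ang2pix(NSIDE, ra_ran, dec_ran, lonlat=True), minlength=npix)
    dens = np.full(npix, hp.UNSEEN)
    good = ran > 0
    frac = ran[good] / ran[good].mean()    # fractional completeness per observed pixel
    dens[good] = gal[good] / frac          # completeness-corrected galaxy counts
    return dens
```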
We employ the Pearson correlation coefficient to characterize the correlation between the galaxy and imaging properties, which for two random variables x and y is given by, Pearson (x, y) = ∑ (x_i-x̅)(y_i-y̅)/√(∑ (x_i-x̅)^2∑ (y_i-y̅)^2), where x̅ and y̅ represent the mean estimates of the random variables. Figure <ref> shows the Pearson correlation coefficient between the DESI LRG target density map and the imaging systematics maps for the three imaging regions (DECaLS North, DECaLS South, and BASS+MzLS) in the top panel. The horizontal curves represent the 95% confidence regions for no correlation and are constructed by cross-correlating 100 synthetic lognormal density fields and the imaging systematic maps. Consistent among the different regions, there are statistically significant correlations between the LRG density and depth, extinction, and stellar density. There are less significant correlations between the LRG density and the W1-band depth and psfsize. The signs of the correlations imply that there are more targets where extinction is high, and less targets where depth is high. Another interpretation might be that more contaminants are targeted where depth is shallow. Figure <ref> (bottom panel) shows the correlation matrix among the imaging systematics maps for the entire DESI footprint. Significant inner correlations exist among the imaging systematic maps themselves, especially between local stellar density and Galactic extinction; also, the r-band and g-band survey properties are more correlated with each other than with the z-band counterpart. Additionally, we compute the Spearman correlation coefficients between the LRG density and imaging systematic maps to assess whether or not the correlations are impacted by outliers in the imaging data, but find no substantial differences from Pearson. §.§.§ Treatment of imaging systematics There are several approaches for handling imaging systematic errors, broadly classified into data-driven and simulation-based modeling approaches <cit.>. The general idea behind these approaches is to use the available data or simulations to learn or forward model the relationship between the observed target density and the imaging systematic maps, and to use this relationship, which is often described by a set of imaging weights, to mitigate spurious fluctuations in the observed target density. Another techniques for reducing the effect of imaging systematics rely on cross-correlating different tracers of dark matter to ameliorate excess clustering signals, as each tracer might respond differently to a source of systematic error <cit.>. These methods have their limitations and strengths <cit.>. In this paper, data-driven approaches, including linear multivariate regression and artificial neural networks, are applied to the data to correct for imaging systematic effects. Linear multivariate model: The linear multivariate model only uses the imaging systematic maps up to the linear power to predict the number counts of the DESI LRG targets in pixel i, N_i = log ( 1 + exp[a·x_i+a_0]), where a_0 is a global offset, and a·x_i represents the inner product between the parameters, a, and the values for imaging systematics in pixel i, x_i. The Softplus functional form for N_i is adapted to force the predicted galaxy counts to be positive <cit.>. 
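A compact sketch of this linear model and its Poisson objective; x is the (npix, nmaps) matrix of imaging properties per pixel and n_obs the observed LRG counts (in the analysis the posterior of a and a_0 is then explored with emcee, as described next):

```python
import numpy as np

def softplus_counts(x, a, a0):
    """Predicted counts per pixel: N_i = log(1 + exp(a . x_i + a0))."""
    return np.logaddexp(0.0, x @ a + a0)   # numerically stable log(1 + e^z)

def neg_poisson_loglike(params, x, n_obs):
    """Negative Poisson log-likelihood (dropping the constant log n_obs! term)."""
    a, a0 = params[:-1], params[-1]
    lam = softplus_counts(x, a, a0)
    return np.sum(lam - n_obs * np.log(lam))
```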
Then, Markov Chain Monte Carlo (MCMC) search is performed using the emcee package <cit.> to explore the parameter space by minimizing the negative Poisson log-likelihood between the actual and predicted number counts of galaxies. Spatial coordinates are not included in x_i to help avoid over-correction. As a result, the predicted number counts solely reflect the spurious density fluctuations that arise from varying imaging conditions. The number of pixels is substantially larger than the number of parameters for the linear model, and thus no training-validation-testing split is applied to the data for training the linear model. This aligns with the methodology used for training linear models in previous analyses <cit.>. The predicted galaxy counts are evaluated for each region using the marginalized mean estimates of the parameters, combined with those from other regions to cover the DESI footprint. The linear-based imaging weights are then defined as the inverse of the predicted target density, normalized to a median of unity. Neural network model: Our neural network-based mitigation approach uses the implementation of fully connected feedforward neural networks from <cit.>. With the neural network approach, a·x_i in Equation <ref> is replaced with NN(x_i|a), where NN represents the fully connected neural network and a denotes its parameters. The implementation, training, validation, and application of neural networks on galaxy survey data are presented in <cit.>. We briefly summarize the methodology here. A fully connected feedforward neural network (also called a multi-layer perceptron) is a type of artificial neural network where the neurons are arranged in layers, and each neuron in one layer is connected to every neuron in the next layer. The imaging systematic information flows only in one direction, from input to output. Each neuron applies a non-linear activation function (i.e., transformation) to the weighted sum of its inputs, which are the outputs of the neurons in the previous layer. The output of the last layer is the model prediction for the number counts of galaxies. Our architecture consists of three hidden layers with 20 rectifier activation functions on each layer, and a single neuron in the output layer. The rectifier is defined as max(0, x) to introduce nonlinearities in the neural network <cit.>. This simple form of nonlinearity is very effective in enabling deep neural networks to learn more complex, non-linear relationships between the input imaging maps and output galaxy counts. Compared with linear regression, neural networks potentially are more prone to overfitting, i.e., excellent performance on training data and poor performance on validation (or test) data. Therefore, our analysis uses a training-validation-testing split to avoid over-fitting and ensure that the neural network is well-optimized. Specifically, 60% of the LRG data is used for training, 20% is used for validation, and 20% is used for testing. The split is performed randomly aside from the locations of the pixels. We also test a geometrical split in which neighboring pixels belong to the same set of training, testing, or validation, but no significant performance difference is observed. The neural networks are trained for up to 70 training epochs with the gradient descent Adam optimizer <cit.>, which iteratively updates the neural network parameters following the gradient of the negative Poisson log-likelihood. 
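A stripped-down PyTorch sketch of this architecture and a single training step; the actual analysis uses the implementation referenced above, with the cross-validation, ensembling, and learning-rate schedule described in the text:

```python
import torch
import torch.nn as nn

def make_mlp(n_maps):
    """Three hidden layers of 20 rectified-linear units, one output neuron."""
    return nn.Sequential(
        nn.Linear(n_maps, 20), nn.ReLU(),
        nn.Linear(20, 20), nn.ReLU(),
        nn.Linear(20, 20), nn.ReLU(),
        nn.Linear(20, 1), nn.Softplus(),   # keep predicted counts positive (assumption)
    )

def train_step(model, optimizer, x, n_obs):
    """One gradient step on the negative Poisson log-likelihood."""
    optimizer.zero_grad()
    lam = model(x).squeeze(-1)
    loss = torch.mean(lam - n_obs * torch.log(lam + 1e-12))
    loss.backward()
    optimizer.step()
    return loss.item()

model = make_mlp(n_maps=3)                                 # e.g. EBV, depth-z, psfsize-r
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```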
The step size of the parameter updates is controlled via the learning rate hyper-parameter, which is initialized with a grid search and is designed to dynamically vary between two boundary values of 0.001 and 0.1 to avoid local minima <cit.>. At each training epoch, the neural network model is applied to the validation set, and ultimately the model with the best performance on validation is identified and applied to the test set. The neural network models are tested on the entirety of the LRG sample with the technique of permuting the choice of the training, validation, or testing sets <cit.>. With the cross-validation technique, the model predictions from the different test sets are aggregated together to form the predicted target density map into the DESI footprint. To reduce the error in the predicted number counts, we train an ensemble of 20 neural network models and average over the predictions. The imaging weights are then defined as the inverse of the predicted target density, normalized to a median of unity. §.§ Synthetic lognormal density fields Density fluctuations of galaxies on large scales can be approximated with lognormal distributions <cit.>. Unlike N-body simulations, simulating lognormal density fields is not computationally intensive, and allows quick and robust validation of data analysis pipelines. Lognormal simulations are therefore considered efficient for our study since the signature of local PNG appears on large-scales and small-scale clustering is not used in our analysis. The package FLASK <cit.> is employed to generate ensembles of synthetic lognormal density maps that mimic the bias, redshift, and angular distributions of the DESI LRG targets, as illustrated in Figure <ref> and <ref>. Two universes with =0 and 76.9 are considered. A set of 1000 realizations is produced for every . The mocks are designed to match the clustering signal of the DESI LRG targets on scales insensitive to . The analysis adapts the fiducial BOSS cosmology <cit.> which assumes a flat ΛCDM universe, including one massive neutrino with m_ν=0.06 eV, Hubble constant h = 0.68, matter density Ω_M=0.31, baryon density Ω_b=0.05, and spectral index n_s=0.967. The amplitude of the matter density fluctuations on a scale of 8 h^-1Mpc is set as σ_8=0.8225. The same fiducial cosmology is used throughout this paper unless specified otherwise. Our robustness tests show that the none of the cosmological parameters can produce a -like signatures, and therefore, our analysis is not sensitive to the choice of fiducial cosmology. §.§.§ Contaminated mocks We employ a linear multivariate model to introduce synthetic spurious fluctuations in the lognormal density fields, and validate our imaging systematic mitigation methods. The motivation for choosing a linear contamination model is to assess how much of the clustering signal can be removed by applying more flexible models, based on neural networks, for correcting less severe imaging systematic effects. The imaging systematic maps considered for the contamination model are extinction, depth in z, and psfsize in r. As shown in the Pearson correlation and will be discussed later in <ref>, the DESI LRG targets correlate strongly with these three maps. We fit for the parameters of the linear models with the MCMC process, executed separately on each imaging survey (BASS+MzLS, DECaLS North, and DECaLS South). 
Then, the imaging selection function for contaminating each simulation is uniquely determined by randomly drawing from the parameter space probed by MCMC, and then the results from each imaging survey are combined to form the DESI footprint. The clean density is then multiplied by the contamination model to induce systematics. The same contamination model is used for both the =0 and 76.9 simulations. Similar to the imaging systematic treatment analysis for the DESI LRG targets, the neural network methods with various combinations of the imaging systematic maps are applied to each simulation, with and without PNG, and with and without systematics, to derive the imaging weights. Section <ref> presents how the simulation results are incorporated to calibrate biases due to over-correction. We briefly summarize two statistical tests based on the mean galaxy density contrast and the cross power spectrum between the galaxy density and the imaging systematic maps to assess the quality of the data and the significance of the remaining systematic effects <cit.>. We calculate these statistics and compare the values to those measured from the clean mocks before looking at the auto power spectrum of the DESI LRG targets. § ANALYSIS TECHNIQUES We address imaging systematics in DESI data by performing a separate treatment for each imaging region (e.g., DECaLS North) within the DESI footprint to reduce the impact of systematic effects specific to that region. Once the imaging systematic weights are obtained for each imaging region separately, we combine the data from all regions to compute the power spectrum for the entire DESI footprint to increase the overall statistical power and enable more robust measurements of . We then conduct robustness tests on the combined data to assess the significance of any remaining systematic effects. §.§ Power spectrum estimator We first construct the density contrast field from the LRG density, ρ, δ_g = ρ- ρ/ρ, where the mean galaxy density ρ is estimated from the entire LRG sample. As a robustness test, we also analyze the power spectrum from each imaging region individually, in which ρ is calculated separately for each region. Then, we use the pseudo angular power spectrum estimator <cit.>, C̃_ℓ = 1/2ℓ +1∑_m=-ℓ^ℓ |a_ℓ m|^2, where the coefficients a_ℓ m are obtained by decomposing δ_g into spherical harmonics, Y_ℓ m, a_ℓ m = ∫ dΩ δ_g W Y^*_ℓ m, where W represents the survey window that is described by the number of randoms normalized to the expected value. We use the implementation of from the HEALPix package <cit.> to do fast harmonic transforms (Equation <ref>) and estimate the pseudo angular power spectrum of the LRG targets and the cross power spectrum between the LRG targets and the imaging systematic maps. §.§ Modelling The estimator in Equation <ref> yields a biased power spectrum when the survey sky coverage is incomplete. Specifically, the survey mask causes correlations between different harmonic modes <cit.>, and the measured clustering power is smoothed on scales near the survey size. An additional potential cause of systematic error arises from the fact that the mean galaxy density used to construct the density contrast field (Equation <ref>) is estimated from the available data, rather than being known a priori. This introduces what is known as an integral constraint effect, which can cause the power spectrum on modes near the size of the survey to be artificially suppressed, effectively pushing it towards zero <cit.>. 
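For concreteness, a healpy-based sketch of the pseudo-C_ℓ estimator defined in the power spectrum estimator subsection above; the window is taken as the random counts normalised over the footprint, and unobserved pixels are set to zero:

```python
import numpy as np
import healpy as hp

def pseudo_cl(gal_counts, ran_counts, lmax=300):
    """Pseudo angular power spectrum of delta_g weighted by the survey window W."""
    seen = ran_counts > 0
    window = np.zeros_like(ran_counts, dtype=float)
    window[seen] = ran_counts[seen] / ran_counts[seen].mean()
    dens = np.zeros_like(window)
    dens[seen] = gal_counts[seen] / window[seen]
    nbar = np.average(dens[seen], weights=window[seen])   # mean density over the footprint
    delta = np.zeros_like(window)
    delta[seen] = dens[seen] / nbar - 1.0
    return hp.anafast(delta * window, lmax=lmax)          # C~_l = sum_m |a_lm|^2 / (2l+1)
```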
Since is highly sensitive to the clustering power on these scales, it is crucial to account for these systematic effects in the model galaxy power spectrum to obtain unbiased constraints <cit.>, which we describe below. The other theoretical systematic issues are however subdominant in the angular power spectrum. For instance, relativistic effects generate PNG-like scale-dependent signatures on large scales, which interfere with measuring with the scale-dependent bias effect using higher order multipoles of the 3D power spectrum <cit.>. Similarly, matter density fluctuations with wavelengths larger than survey size, known as super-sample modes, modulate the galaxy 3D power spectrum <cit.>. In a similar way, the peculiar motion of the observer can mimic a PNG-like scale-dependent signature through aberration, magnification and the Kaiser-Rocket effect, i.e., a systematic dipolar apparent blue-shifting in the direction of the observer's peculiar motion <cit.>. §.§.§ Angular power spectrum The relationship between the linear matter power spectrum P(k) and the projected angular power spectrum of galaxies is expressed by the following equation: C_ℓ = 2/π∫_0^∞dk/kk^3P(k)|Δ_ℓ(k)|^2 + N_ shot, where N_ shot is a scale-independent shot noise term. The projection kernel Δ_ℓ(k) = Δ^ g_ℓ(k) + Δ^ RSD_ℓ(k) includes redshift space distortions and determines the contribution of each wavenumber k to the galaxy power spectrum on mode ℓ. For more details on this estimator, refer to <cit.>. The non-linearities in the matter power spectrum are negligible for the scales of interest <cit.>. For ℓ=40, Δ_ℓ(k) peaks at k∼ 0.02  hMpc^-1, which is above the non-linear regime. The FFTLog[https://github.com/xfangcosmo/FFTLog-and-beyondgithub.com/xfangcosmo/FFTLog-and-beyond] algorithm and its extension as implemented in <cit.> are employed to calculate the integrals for the projection kernel Δ_ℓ(k), which includes the l^ th order spherical Bessel functions, j_ℓ(kr), and its second derivatives, Δ^ g_ℓ(k) = ∫dr/r r (b+Δ b) D(r) dN/dr j_ℓ(kr), Δ^ RSD_ℓ(k) = - ∫dr/r r f(r) D(r) dN/dr j^''_ℓ(kr), where b is the linear bias (dashed curve in Figure <ref>), D represents the linear growth factor normalized as D(z=0)=1, f(r) is the growth rate, and dN/dr is the redshift distribution of galaxies normalized to unity and described in terms of comoving distance[dN/dr = (dN/dz)(dz/dr) ∝ (dN/dz)H(z)] (solid curve in Figure <ref>). The PNG-induced scale-dependent shift is given by <cit.> Δ b = b_ϕ(z) 3 Ω_m H^2_0/2 k^2T(k)D(z) c^2g(∞)/g(0), where Ω_m is the matter density, H_0 is the Hubble constant[H_0=100 ( km  s^-1)/(h^-1 Mpc) and k is in unit of h Mpc^-1], T(k) is the transfer function, and g(∞)/g(0) ∼ 1.3 with g(z)≡ (1+z) D(z) is the growth suppression due to non-zero Λ because of our normalization of D <cit.>. We assume the universality relation which directly relates b_ϕ to b via b_ϕ = 2 δ_c(b - p) with δ_c= 1.686 representing the critical density for spherical collapse <cit.>. We fix p=1 in our analysis and marginalize over b <cit.>. §.§.§ Survey geometry and integral constraint We employ a technique similar to the one proposed by <cit.> to account for the impact of the survey geometry on the theoretical power spectrum. The ensemble average for the partial sky power spectrum is related to that of the full sky power spectrum via a mode-mode coupling matrix, M_ℓℓ^', <C̃_ℓ> = ∑_ℓ^' M_ℓℓ^'<C_ℓ^'>. We convert this convolution in the spherical harmonic space into a multiplication in the correlation function space. 
Specifically, we first transform the theory power spectrum (Equation <ref>) to the correlation function, ω̂^model. Then, we estimate the survey mask correlation function, ω̂^window, and obtain the pseudo-power spectrum,

C̃^model_ℓ = 2π ∫ ω̂^model ω̂^window P_ℓ(cos θ) d cos θ.

The integral constraint is another systematic effect, induced because the mean galaxy density is estimated from the observed galaxy density and is therefore biased by the limited sky coverage <cit.>. To account for the integral constraint, the survey mask power spectrum is used to introduce a scale-dependent correction factor that is subtracted from the power spectrum as

C̃^model, IC_ℓ = C̃^model_ℓ - C̃^model_ℓ=0 (C̃^window_ℓ / C̃^window_ℓ=0),

where C̃^window is the survey mask power spectrum, i.e., the spherical harmonic transform of ω̂^window. The lognormal simulations are used to validate our survey window and integral constraint corrections. Figure <ref> shows the mean power spectrum of the f_NL = 0 simulations (dashed) and the best-fitting theory prediction before and after accounting for the survey mask and integral constraint. The simulations are neither contaminated nor mitigated. The light and dark shades represent the 68% estimated error on the mean and on a single realization, respectively. The DESI mask, which covers around 40% of the sky, is applied to the simulations. We find that the survey window effect modulates the clustering power on ℓ < 200 and the integral constraint alters the clustering power on ℓ < 6.

§.§ Parameter estimation

Our parameter inference uses standard MCMC sampling. A constant clustering amplitude is assumed to determine the redshift evolution of the linear bias of our DESI LRG targets, b(z) = b/D(z), which is supported by the HOD fits to the angular power spectrum <cit.>. In the MCMC, we allow f_NL, N_shot, and b to vary, while all other cosmological parameters are fixed at the fiducial values (see <ref>). The galaxy power spectrum is divided into a discrete set of bandpower bins with Δℓ=2 between ℓ=2 and 20 and Δℓ=10 from ℓ=20 to 300. Each clustering mode is weighted by 2ℓ+1 when averaging over the modes in each bin. The expected large-scale power is highly sensitive to the value of f_NL, such that the amplitude of the covariance for C_ℓ is influenced by the true value of f_NL; see also <cit.> for a discussion. As illustrated in the top row of Figure <ref>, we find that the distribution of the power spectrum at the lowest bin, 2 ≤ ℓ < 4, is highly asymmetric and its standard deviation varies significantly between the simulations with f_NL = 0 and 76.9. We can make the covariance matrix less sensitive to f_NL by taking the log transformation of the power spectrum, log C_ℓ. As shown in the bottom panels of Figure <ref>, the log transformation reduces the asymmetry and the difference in the standard deviations between the f_NL = 0 and 76.9 simulations. Therefore, we minimize the negative log likelihood defined as

-2 log ℒ = (log C̃(Θ) - log C̃)^† ℂ^-1 (log C̃(Θ) - log C̃),

where Θ represents a container for the parameters f_NL, b, and N_shot; C̃(Θ) is the (binned) expected pseudo-power spectrum; C̃ is the (binned) measured pseudo-power spectrum; and ℂ is the covariance on log C̃ constructed from the f_NL = 0 log-normal simulations. Log-normal simulations have been commonly used and validated to estimate the covariance matrices for galaxy density fields, and non-linear effects are subdominant on the scales of interest to f_NL <cit.>.
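A sketch of this log-transformed Gaussian likelihood; the covariance of log C̃ is estimated from the f_NL = 0 lognormal mocks, and the flat priors quoted in the next paragraph can be imposed on top of it:

```python
import numpy as np

def cov_from_mocks(log_cl_mocks):
    """Covariance of log C_l from an ensemble of mock spectra, shape (nmocks, nbins)."""
    return np.cov(log_cl_mocks, rowvar=False)

def neg2_loglike(log_cl_model, log_cl_data, cov_log_cl):
    """-2 ln L = (log C_model - log C_data)^T  C^-1  (log C_model - log C_data)."""
    diff = log_cl_model - log_cl_data
    return diff @ np.linalg.solve(cov_log_cl, diff)
```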
We also test for the robustness of our results against an alternative covariance constructed from the f_ NL=76.9 mocks. Flat priors are implemented for all parameters: f_ NL∈ [-1000, 1000], N_ shot∈ [-0.001, 0.001], and b ∈ [0, 5].
§.§ Characterization of remaining systematics
One potential problem that can arise in the data-driven mitigation approach is over-correction, which occurs when the corrections applied to the data remove part of the clustering signal and induce additional biases in the inferred parameter. The neural network approach is more prone to this issue than the linear approach because of its larger number of degrees of freedom. As illustrated in the bottom panel of Figure <ref>, the significant correlations among the imaging systematic maps may pose additional challenges for modeling the spurious fluctuations in the galaxy density field. Specifically, using highly correlated imaging systematic maps increases the statistical noise in the imaging weights, which raises the potential for over-subtracting the clustering power. These over-correction effects are estimated to have a negligible impact on baryon acoustic oscillations <cit.>; however, they can significantly modulate the galaxy power spectrum on large scales, and thus lead to biased f_ NL constraints <cit.>. Although not explored thoroughly, the over-correction issues could also limit the detectability of primordial features in the galaxy power spectrum and that of parity violations in higher-order clustering statistics <cit.>. Therefore, it is crucial to develop, implement, and apply techniques that minimize and control over-correction, to ensure that the constraints are as accurate and reliable as possible; one such approach is to reduce the dimensionality of the problem. Our goal is to reduce the correlations between the DESI LRG target density and the imaging systematic maps while controlling the over-correction effect. Below, we describe how we achieve this objective by employing a series of simulations, along with residual systematics tests that we construct from the cross power spectrum between the LRG density and the imaging maps and from the mean LRG density as a function of imaging properties. We test different sets of imaging systematic maps to identify the optimal set of feature maps:
* Two maps: Extinction, depth in z.
* Three maps: Extinction, depth in z, psfsize in r.
* Four maps: Extinction, depth in z, psfsize in r, stellar density.
* Five maps: Extinction, depth in z, psfsize in r, neutral hydrogen density, and photometric calibration in z.
* Eight maps: Extinction, depth in grzW1, psfsize in grz.
* Nine maps: Extinction, depth in grzW1, psfsize in grz, stellar density.
These sets are selected prior to examining the auto power spectrum of the LRG sample and unblinding the f_ NL constraints; the auto power spectrum and f_ NL measurements are unblinded only after our mitigation methods pass our tests for residual systematics.
§.§.§ Cross power spectrum
We characterize the cross correlations between the galaxy density and the imaging systematic maps by C̃_X, ℓ = [C̃_x_1, ℓ, C̃_x_2, ℓ, C̃_x_3, ℓ, ..., C̃_x_9, ℓ], where C̃_x_i, ℓ represents the square of the cross power spectrum between the galaxy density and the i^ th imaging map, x_i, divided by the auto power spectrum of x_i: C̃_x_i, ℓ = (C̃_gx_i, ℓ)^2/C̃_x_ix_i,ℓ. With this normalization, C̃_x_i, ℓ estimates the contribution of systematics to the galaxy power spectrum at every multipole, up to linear order.
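A minimal sketch of assembling this statistic is given below; it assumes the binned pseudo-spectra against each imaging map have already been measured, and the map names and containers are placeholders rather than our actual data structures.

    import numpy as np

    def normalized_cross_power(cl_gx, cl_xx):
        # C~_{x_i, l} = (C~_{g x_i, l})^2 / C~_{x_i x_i, l} for a single imaging map
        return np.asarray(cl_gx) ** 2 / np.asarray(cl_xx)

    # placeholder containers of binned pseudo-spectra per imaging map:
    # map_names = ["ebv", "depth_z", "psfsize_r"]
    # cl_X = np.concatenate([normalized_cross_power(cl_gx[m], cl_xx[m]) for m in map_names])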
Then, the χ^2 value for the cross power spectra is calculated via χ^2 = C̃^T_X, ℓℂ_X^-1C̃_X, ℓ, where the covariance matrix ℂ_X = < C̃_X, ℓC̃_X, ℓ' > is constructed from the lognormal mocks. These χ^2 values are measured for every clean mock realization with the leave-one-out technique and compared to the values observed in the LRG sample with various imaging systematic corrections. Specifically, we use 999 realizations to estimate a covariance matrix and then apply that covariance matrix to measure the χ^2 of the one remaining realization. This process is repeated for all 1000 realizations to construct a histogram of χ^2. We only include the bandpower bins from ℓ=2 to 20 with Δℓ=2, and test for robustness with higher ℓ modes in Appendix <ref>. We identify extinction and depth in the z band as the two primary potential contaminants, and run the linear model with these two maps to derive the systematic weights. Linear two maps is the most conservative systematic treatment method in terms of both model flexibility and the number of input maps. We clean the sample using the imaging weights obtained from linear two maps, and find that this approach mitigates most of the spurious density fluctuations and reduces the cross-correlations between the LRG density and the imaging systematic maps, except for the trends against psfsize in the r and z bands. Adding the r-band psfsize improves the linear model performance such that the cross correlations are similar to those obtained from linear eight maps, which indicates that no further information can be extracted from the eight maps. Therefore, we identify extinction, depth in z, and psfsize in r (three maps) as the primary sources of systematic effects in the DESI LRG targets. Then, we adopt the neural network three maps approach to model non-linear systematic effects. Compared with the linear three maps method, we find that the neural network-based weights significantly reduce the cross correlations between the DESI LRG sample and the imaging systematic maps, as well as the spurious density fluctuations. Additionally, we consider neural networks with four, five, and nine maps to further examine the robustness of our cleaning methods. Figure <ref> shows C̃_X from the DESI LRG targets before and after applying various corrections for imaging systematics. The dark and light shades show the 97.5^ th percentile from the f_ NL=0 and f_ NL=76.9 mocks, respectively. Without imaging weights, the LRG sample has the highest cross-correlations against extinction, stellar density, and depth in z (solid black curve). There are less significant correlations against depth in the g and r bands and psfsize in the z band, which could be driven by the intrinsic correlations between the imaging systematic maps. First, we consider cleaning the sample with the linear model using two maps (extinction and depth in z), as identified from the Pearson correlation. With linear two maps (red dashed curve), most of the cross power signals are reduced below the statistical uncertainties, especially against extinction, stellar density, and depth. However, the cross power spectra against psfsize in r and z increase slightly on 6<ℓ<20 and 6<ℓ<14, respectively. Regression against extinction and depth in the z-band helps mitigate large-scale cross correlations (ℓ < 6), but there are residual cross correlations on smaller scales (ℓ > 6) which cannot be mitigated with our set of two maps.
The linear three maps approach (blue dot-dashed curve) alleviates the cross power against psfsize in r. Additionally, nonlinear three maps (green dashed curve) reduces the cross correlations against both the r- and z-band psfsize maps, which shows the benefit of using a nonlinear approach. As a benchmark, we also show the normalized cross spectra after cleaning the LRG sample with linear eight maps (orange dotted curve) and nonlinear four maps (pink dot-dashed curve).
§.§.§ Mean density contrast
We calculate the histogram of the mean density contrast relative to the j^ th imaging property, x_j: δ_x_j = (ρ)^-1∑_iρ_i W_i/∑_i W_i, where ρ is the global mean galaxy density, W_i is the survey window in pixel i, and the summations over i run over the pixels in every bin of x_j. We compute the histograms against all nine imaging properties (see Figure <ref>). We use a set of eight equal-width bins for every imaging map, which results in a total of 72 bins. Then, we construct the total mean density contrast as δ_X = [δ_x_1, δ_x_2, δ_x_3, ..., δ_x_9], and the total residual error as χ^2 = δ_X^Tℂ_δ^-1δ_X, where the covariance matrix ℂ_δ = < δ_Xδ_X> is constructed from the lognormal mocks. Figure <ref> shows the mean density contrast against the imaging properties for the DESI LRG targets. The dark and light shades represent the 1σ fluctuations observed in 1000 lognormal density fields with f_ NL=0 and f_ NL=76.9, respectively. The DESI LRG targets before treatment (solid curve) exhibit a strong trend, at around 10%, against the z-band depth, which is consistent with the cross power spectrum. Additionally, there are significant spurious trends against extinction and stellar density at about 5-6%. The linear approach is able to mitigate most of the systematic fluctuations with only extinction and depth in the z-band as input; however, a new trend appears against the r-band psfsize map with the linear two maps approach (red dashed curve), which is indicative of psfsize-related systematics in our sample. This finding is in agreement with the cross power spectrum. We re-train the linear model with three maps, but still observe around 2% residual spurious fluctuations at the low end of depth_z and around 1% at the high end of psfsize_z, which implies the presence of nonlinear systematic effects. We find that the imaging weights from the nonlinear model trained with the three identified maps (or four maps including the stellar density) are capable of reducing the fluctuations to below 2%. Even with the nonlinear three maps, about 1% residual systematic fluctuations remain against the z-band psfsize. We use the χ^2 statistic to assess how significant these fluctuations are in comparison to the clean mocks. Figure <ref> presents χ^2 histograms for the normalized cross spectrum (top) and mean density contrast (bottom) statistics obtained from lognormal mocks with different f_ NL values. The χ^2 tests are insensitive to f_ NL, yielding consistent distributions regardless of the value used for mock generation, and thus these tests help separate systematics from cosmology. No mitigation is applied to the mocks, ensuring unbiased χ^2 values. The χ^2 values of the DESI LRG targets are indicated by the vertical lines and summarized in Table <ref>. After cleaning, the cross power spectrum χ^2 is reduced. However, the linear three maps approach fails to clean the data properly at 95% confidence (χ^2=195.9 and p-value < 0.04).
The alternative nonlinear three maps approach reduces the χ^2 significantly, with a p-value of 0.59, supporting the need for nonlinear cleaning. For this test, we only use multipoles up to ℓ=20, and we find no remaining systematic errors when investigating higher multipoles (see Appendix <ref>). We observe similar performance in the mean density test. The linear two maps weights reduce the χ^2 from 679.8 to 178.8, but significant systematics remain (p-value < 0.001). Applying the linear three maps approach lowers the error to χ^2=130, yet systematics remain significant (p-value < 0.001). Cleaning with linear eight maps results in χ^2=90 and p-value = 0.08, but using too many imaging maps increases the risk of removing the true clustering signal. Alternatively, using imaging weights from the nonlinear three maps approach yields χ^2=74.3 and p-value = 0.39. Adding the stellar density map (nonlinear four maps) slightly improves the results: χ^2=73.2 and p-value = 0.42. The small impact on χ^2 suggests that the stellar density trend can be explained by extinction, owing to the correlation between these properties: in regions with high stellar density there is likely to be a higher concentration of dust, which causes greater extinction of light. The tests conducted here demonstrate the effectiveness of the various cleaning approaches for the LRG sample without revealing the measured power spectrum or the f_ NL constraints. The results summarized in Table <ref> show that cleaning with nonlinear three maps consistently produces χ^2 values similar to the clean mocks, with reasonable p-values (p = 0.39 for the mean density and p = 0.59 for the cross power spectrum). On the other hand, the linear three maps approach fails to sufficiently mitigate the systematics, as evidenced by low p-values (p < 0.001 for the mean density and p = 0.04 for the cross power spectrum). Considering p=0.05 as a threshold for clean maps, the nonlinear three maps approach is optimal, as adding more imaging systematic maps may exacerbate over-correction issues.
§.§ Calibration of over-correction
The template-based mitigation of imaging systematics removes some of the true clustering signal, and the amount of removed signal increases as more maps are used in the regression. We calibrate the over-correction effect using the mocks presented in <ref>. We apply the neural network model to both the f_ NL=0 and f_ NL=76.9 simulations, with and without imaging systematics, using various sets of imaging systematic maps. Specifically, we consider nonlinear three maps, nonlinear four maps, and nonlinear nine maps. Then, we measure the power spectra from the mocks, and fit both the mean power spectrum and each individual power spectrum. Figure <ref> displays a comparison between the estimates of f_ NL before and after mitigation for the clean mocks. The best-fitting estimates are represented by the solid curves, and the results from the individual spectra are displayed as scatter points. The results from fitting the mean power spectrum of the contaminated mocks are also shown, via the dashed curves. We find nearly identical mitigation-induced biases whether or not the mocks are contaminated, as can be seen by comparing the solid and dashed curves in Figure <ref> (see also Figure <ref> for a comparison of the mean power spectra). For clarity, the best-fitting estimates for the individual contaminated mocks are not shown in the figure.
To calibrate our methods, we fit a linear curve to the f_ NL estimates from the mean power spectrum of the mocks, f_ NL, no mitigation, clean = m_1 f_ NL, mitigated + m_2. The m_1 and m_2 coefficients for nonlinear three, four, and nine maps are summarized in Table <ref>. These coefficients represent the impact of the cleaning methods on the likelihood. The uncertainty in f_ NL after mitigation increases by m_1-1. Figure <ref> also shows that the choice of cleaning method can have significant implications for the accuracy of the measured f_ NL, and careful consideration should be given to the selection of the primary imaging systematic maps and the calibration of the cleaning algorithms in order to minimize systematic uncertainties.
§ RESULTS
We now present our f_ NL constraints obtained from the power spectrum of the DESI LRG targets. The treatment of the imaging systematic effects is performed on each imaging region (BASS+MzLS, DECaLS North/South) separately. After cleaning, the regions are combined for the measurement of the power spectrum. We unblind the galaxy power spectrum and the f_ NL values only after our cleaning methods are validated and vetted by the cross power spectrum and mean galaxy density diagnostics. We also conduct additional tests to check the robustness of our f_ NL constraints against various assumptions, such as analyzing each region separately, applying cuts on imaging conditions, and changing the smallest mode used in fitting for f_ NL.
§.§ DESI imaging LRG sample
We find that the excess clustering signal in the power spectrum of the DESI LRG targets is mitigated after correcting for the imaging systematic effects. Figure <ref> shows the measured power spectrum of the DESI LRG targets before and after applying the imaging weights, together with the best-fitting theory curves. The solid line and the grey shade represent, respectively, the mean power spectrum and the 1σ error estimated from the f_ NL=0 lognormal simulations. The differences between the various cleaning methods are significant on large scales (ℓ < 20), but the small-scale clustering measurements are consistent. Comparing linear two maps to linear three maps, we find that the measured clustering power on modes with 6≤ℓ < 10 is noticeably different between the two methods. We attribute the difference to the additional map for psfsize in the r-band, which is included in linear three maps. On other scales, the differences between linear three maps and linear eight maps are negligible, supporting the idea that our feature selection procedure has been effective in identifying the primary maps that cause the large-scale excess clustering signal. Comparing non-linear three maps to linear three maps, we find that the measured spectra on 4 ≤ℓ < 6 are very different, probably indicating non-linear spurious fluctuations with large-scale characteristics due to extinction. Adding stellar density in the non-linear approach (non-linear four maps) further reduces the excess power relative to the mock power spectrum, in particular on modes with 2≤ℓ < 4. However, when calibrated on the lognormal simulations, we find that the over-subtraction due to stellar density is reversed after accounting for over-correction.
§.§.§ Calibrated constraints
All constraints presented here are calibrated for the effect of over-correction using the lognormal simulations.
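Concretely, such a calibration can be applied as in the following sketch; the coefficients and numbers in the example are placeholders rather than the values of Table <ref>, and the error scaling assumes simple linear propagation through the fitted relation.

    def calibrate_fnl(fnl_mitigated, sigma_mitigated, m1, m2):
        # map a post-mitigation estimate to the calibrated (no-mitigation, clean) scale
        # via f_NL,clean = m1 * f_NL,mitigated + m2; errors scaled by |m1| (linear propagation)
        return m1 * fnl_mitigated + m2, abs(m1) * sigma_mitigated

    # example with placeholder coefficients, not the values listed in Table <ref>:
    # fnl_cal, sigma_cal = calibrate_fnl(fnl_mitigated=29.0, sigma_mitigated=11.0, m1=1.5, m2=2.0)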
Table <ref> lists the best-fitting and marginalized mean estimates of f_ NL from fitting the power spectrum of the DESI LRG targets before and after cleaning with the non-linear approach, given various combinations of the imaging systematic maps. Figure <ref> shows the marginalized probability distribution of f_ NL in the top panel, and the 68% and 95% probability contours for the linear bias parameter b and f_ NL in the bottom panel, from our sample before and after applying the various corrections for imaging systematics. Overall, we find the maximum likelihood estimates to be consistent among the various cleaning methods. We obtain 36 (25) < f_ NL < 61 (76) at 68% (95%) confidence with χ^2=34.6 for non-linear three maps, with 34 degrees of freedom. Accounting for over-correction, we obtain 37 (25) < f_ NL < 63 (78) with χ^2=35.2 when the stellar density map is added in the non-linear four maps approach. With or without stellar density, the confidence intervals are consistent with each other and significantly offset from zero PNG; specifically, the probability that f_ NL is greater than zero is P(f_ NL>0)=99.9 per cent. We also apply a more aggressive systematics treatment that includes regression, using the nonlinear approach, against the full set of imaging maps we identified (nonlinear nine maps), and find that our maximum likelihood value changes only slightly, to f_ NL∼ 50, but the uncertainty on f_ NL increases because the aggressive treatment removes large-scale clustering information. For comparison, we obtain 98 (84) < f_ NL < 133 (152) at 68% (95%) confidence with χ^2=44.4 for the no-weight case.
§.§.§ Uncalibrated constraints: robustness tests
Figure <ref> shows the probability distributions of f_ NL for the various treatments before accounting for the over-correction effect. The method with the greatest flexibility and the largest number of imaging systematic maps is more likely to regress out the clustering signal aggressively and return biased constraints. As expected, non-linear nine maps yields the smallest maximum likelihood estimate, f_ NL=-6. Our non-linear three maps approach returns a best-fitting estimate of f_ NL=29 with a 68% (95%) confidence interval of 19 (9) < f_ NL < 41 (53) and χ^2=34.6. With the stellar density map included, non-linear four maps yields a smaller best-fitting estimate of f_ NL=17, with 7 (-2) < f_ NL < 27 (38). The non-linear nine maps approach gives an asymmetric posterior with a marginalized mean of f_ NL=-9 and a best-fitting estimate of f_ NL=-6, with -21 (-34) < f_ NL < 2 (12). We now proceed to perform robustness tests and assess how sensitive the f_ NL constraints are to the assumptions made in the analysis and the quality cuts applied to the data. For each case, we re-train the cleaning methods and derive new sets of imaging weights. Accordingly, for the cases where a new survey mask is applied to the data, we re-calculate the covariance matrices using the new survey mask, to account for the changes in the survey window and integral constraint effects. Calibrating the mitigation biases for all of these experiments is beyond the scope of this work and redundant, as we are only interested in the relative shift in the constraints after changing the assumptions. Therefore, the absolute scaling of the constraints presented here is biased by the over-correction effect. Table <ref> summarizes the uncalibrated f_ NL constraints from the DESI LRG targets.
Our tests are as follows:
* Linear methods: Even though the linear three maps approach shows significant remaining systematics (Figure <ref>), we obtain nearly identical constraints from linear eight maps and linear three maps: respectively, 26 (16) < f_ NL < 49 (62) and 26 (16) < f_ NL < 50 (63) at 68% (95%) confidence.
* Imaging regions: We compare our constraints from fitting the power spectrum of the whole DESI footprint to those from the power spectrum of each imaging region individually, namely BASS+MzLS, DECaLS North, and DECaLS South. Figure <ref> shows the 68% and 95% probability contours on f_ NL and b from each individual region, compared with those from DESI. The cleaning method here is non-linear three maps, and the covariance matrices are estimated from the f_ NL=0 mocks. The bias in DECaLS North is lower than the ones from DECaLS South and BASS+MzLS, which might indicate some remaining systematic effects that could not be mitigated with the available imaging systematic maps: given the negative correlation between b and f_ NL, a larger value of f_ NL due to excess clustering power needs to be compensated by a smaller value of b. Overall, we find that the constraints from analyzing each imaging survey separately are consistent with each other and with DESI within 68% confidence. Ignoring the over-correction effect, we find that the result from the DECaLS North region is the only one that finds PNG nonzero at greater than 95% confidence, which motivates follow-up studies with the spectroscopic sample of LRGs in DECaLS North.
* Stellar density template (nStar): When not accounting for over-correction, adding the stellar density map appears to result in significant changes in the f_ NL constraints; e.g., compare non-linear three maps with non-linear four maps in Table <ref>. But these changes disappear when we account for the mitigation bias, and we find that all methods recover the same maximum likelihood estimate of f_ NL∼ 50 within 68% confidence (see Table <ref>), which implies that these changes can be associated with the over-correction issue arising from chance correlations between the stellar density map and the large-scale structure.
* Pixel completeness (comp. cut): We discard pixels with fractional completeness less than half to assess the effect of partially complete pixels on f_ NL. This pixel completeness cut removes 0.6% of the survey area, and no changes in the constraints are observed.
* Imaging quality (imag. cut): Pixels with poor photometry are removed from our sample by applying the following cuts on imaging: E[B-V]<0.1, nStar < 3000, depth_g > 23.2, depth_r > 22.6, depth_z > 22.5, psfsize_g<2.5, psfsize_r<2.5, and psfsize_z<2. Although these cuts remove 8% of the survey mask, there is a negligible impact on the best-fitting f_ NL from fitting the DESI power spectrum. However, when each region is fit individually, the BASS+MzLS constraints shift toward higher values of f_ NL by approximately Δ f_ NL∼ 10, whereas the constraints from DECaLS North and DECaLS South do not change significantly.
* Covariance matrix (cov): We fit the power spectrum of our sample cleaned with the non-linear three maps correction, but use the covariance matrix constructed from the f_ NL=76.9 mocks. With the alternative covariance, a 12% increase in σ(f_ NL) is observed. We also find that the best-fitting and marginalized mean estimates of f_ NL increase by 10-11%. Overall, we find that these differences are not significant in comparison to the statistical precision.
* External maps (CALIBZ+HI): The neural network five maps correction includes the additional maps for HI and CALIBZ.
With this correction, the best-fitting f_ NL increases from 41.02 to 55.46 for DECaLS North and from 31.24 to 33.79 for DECaLS South, which might suggest that adding HI and CALIBZ increases the input noise and thus negatively impacts the performance of the neural network model. This test is not performed on BASS+MzLS due to the lack of coverage by the CALIBZ map.
* Declination mask (no DEC cut): The fiducial mask removes the disconnected islands in DECaLS North and the regions with DEC < -30 in DECaLS South, where there is a high likelihood of calibration issues because different standard stars are used for the photometric calibration. We analyze our sample without these cuts, and find that the best-fitting and marginalized mean estimates of f_ NL from DECaLS South shift significantly, to higher values by Δ f_ NL∼ 10, which supports the case that there are remaining photometric systematics in the DECaLS South region below DEC = -30. On the other hand, the constraints from DECaLS North do not change significantly, indicating that the islands do not induce significant contamination.
* Scale dependence (varying ℓ_ min): We raise the value of the lowest harmonic mode ℓ_ min used for the likelihood evaluation during the MCMC. This is equivalent to utilizing smaller spatial scales in the measurements of the power spectrum. By doing so, we anticipate a reduction in the impact of imaging systematics on the f_ NL inference, as lower ℓ modes are more likely to be contaminated. Figure <ref> illustrates the power spectra before and after the correction with non-linear three maps in the top panel. The bottom panel shows the marginalized mean and 68% error on f_ NL with non-linear three maps for the DESI, BASS+MzLS, DECaLS North, and DECaLS South regions. We observe a slight upward shift in the mean estimates of f_ NL for ℓ_ min values ranging from 12 to 18 in DECaLS North and BASS+MzLS. This outcome might imply that the imaging systematic maps do not contain enough information to help the cleaning method null out the contaminating signal in the NGC. We also find that the bump is resilient against an alternative correction, in which we apply the neural networks trained on DECaLS South to the DECaLS North region (see <ref>). Overall, this result is contrary to what one might predict if a significant systematic-induced spike existed at the very low ℓ, and it suggests that the underlying issue is more subtle than originally anticipated.
§ DISCUSSION AND CONCLUSION
We have measured the local PNG parameter f_ NL using the scale-dependent bias in the angular clustering of LRGs selected from the DESI Legacy Imaging Survey DR9. Our sample includes more than 12 million LRG targets covering around 14,000 square degrees in the redshift range 0.2< z < 1.35. We leverage early spectroscopy from the DESI Survey Validation <cit.> to infer the redshift distribution of our sample (Figure <ref>). In our fiducial analysis, we obtain a maximum likelihood value of f_ NL=47 with a significant probability that f_ NL is greater than zero, P(f_ NL>0)=99.9 per cent (Figure <ref> and Table <ref>). The signature of local PNG is very sensitive to excess clustering signals caused by imaging systematic effects. We have therefore applied a series of robustness tests to investigate the impact of how the galaxy selection function is determined. Specifically, both linear and nonlinear methods are applied using various combinations of imaging systematic maps (Galactic extinction, stellar density, depth in grzW1, psfsize in grz, and neutral hydrogen column density).
We also examine the effect of different imaging-based masks. Overall, we find no change in the analysis that shifts the maximum likelihood value of f_ NL to a significantly lower value (Figure <ref>, Figure <ref>, and Table <ref>). The only manner in which the significance of nonzero PNG decreases is through the increase of the uncertainty on the f_ NL measurement when we employ more imaging systematic maps for the selection function estimation and thereby remove large-scale clustering information (the effect of which on f_ NL recovery we have calibrated with mocks, as shown in Figure <ref>). When comparing our fiducial results to recent CMB and QSO measurements, as shown in Figure <ref>, we find a significant tension with the CMB but consistent constraints with LSS within 95% confidence. Either we have measured an f_ NL signal that is inconsistent with CMB measurements, or there is a hidden source of systematic contamination in our data which cannot be mitigated with the available imaging systematic maps. To be consistent with CMB measurements, our results would need to correspond to some kind of scale-dependent model which has a larger non-Gaussianity on LSS scales but a negligible one on CMB scales <cit.>. Our analysis can be considered a first attempt to identify the major systematics in DESI, so that we are ready for constraining f_ NL with DESI spectroscopy. Internal DESI tests of the photometric calibration were unable to uncover DESI-specific issues, e.g., when comparing to Gaia data. The most significant trends that we find are with the E(B-V) map. The source of such a trend would be a mis-calibration of the E(B-V) map itself or of the coefficients applied to obtain Galactic-extinction-corrected photometry. Such a mis-calibration would plausibly be proportional in amplitude to the estimated E(B-V) map, though it may not have E(B-V)'s spatial distribution. In order to explain the signal we measure, such an effect would need to be approximately twice that of the trend we find with E(B-V). There are ongoing efforts within DESI to obtain improved Galactic extinction information, which will help establish whether this is indeed the cause.
§ ACKNOWLEDGEMENTS
We would like to thank Dragan Huterer and Douglas Finkbeiner for feedback on an early version of the manuscript; Violeta Gonzalez-Perez for handling the DESI internal review process; Tanveer Karim, Sukhdeep Singh, Ahmad Shamloumehr, and Reza Katebi for helpful discussions; and Rongpu Zhou for providing the maps for galaxy density and imaging systematics. MR would like to thank Ohio State's Center for Cosmology and AstroParticle Physics, in particular John Beacom and Lisa Colarosa, for their hospitality and support. MR is supported by the U.S. Department of Energy grants DE-SC0021165 and DE-SC0011840. H-JS acknowledges support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics under grant No. DE-SC0019091 and No. DE-SC0023241. FB is a University Research Fellow and has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement 853291). BB-K is supported by the project "Understanding Dark Universe Using Large Scale Structure of the Universe" (우주거대구조를 이용한 암흑우주 연구), funded by the Ministry of Science of the Republic of Korea. We acknowledge the support and resources from the Ohio Supercomputer Center <cit.>.
This research has made substantial use of the arXiv preprint server, NASA’s Astrophysics Data System, Github's online software development platform, and many open-source software, such as Pytorch, Nbodykit, HEALPix, Fitsio, Scikit-Learn, NumPy, SciPy, Pandas, IPython, and Jupyter. This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of High-Energy Physics, under Contract No. DE–AC02–05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. Additional support for DESI was provided by the U.S. National Science Foundation (NSF), Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF's National Optical-Infrared Astronomy Research Laboratory; the Science and Technology Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico (CONACYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: https://www.desi.lbl.gov/collaborating-institutionshttps://www.desi.lbl.gov/collaborating-institutions. The DESI Legacy Imaging Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS), and the Mayall z-band Legacy Survey (MzLS). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory. Legacy Surveys also uses data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Legacy Surveys was supported by: the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility; the U.S. National Science Foundation, Division of Astronomical Sciences; the National Astronomical Observatories of China, the Chinese Academy of Sciences and the Chinese National Natural Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy. The complete acknowledgments can be found at https://www.legacysurvey.org/https://www.legacysurvey.org/. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U. S. National Science Foundation, the U. S. Department of Energy, or any of the listed funding agencies. The authors are honored to be permitted to conduct scientific research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation. 
§ DATA AVAILABILITY
The DR9 catalogues from the DESI Legacy Imaging Surveys are publicly available at https://www.legacysurvey.org/dr9/. The software used for cleaning the imaging data is available at https://github.com/mehdirezaie/sysnetdev. All data points shown in the published graphs are available in a machine-readable form at https://github.com/mehdirezaie/dimagfnl.
http://arxiv.org/abs/2307.02877v1
20230706092903
Towards accurate instance segmentation in large-scale LiDAR point clouds
[ "Binbin Xiang", "Torben Peters", "Theodora Kontogianni", "Frawa Vetterli", "Stefano Puliti", "Rasmus Astrup", "Konrad Schindler" ]
cs.CV
[ "cs.CV" ]
1ETH Zürich, Switzerland - (bxiang, tpeters, tkontogianni, vfrawa, schindler)@ethz.ch 2Norwegian Institute of Bioeconomy Research (NIBIO) - (Stefano.Puliti, rasmus.astrup)@nibio.no XX, YY XX/YY Panoptic segmentation is the combination of semantic and instance segmentation: assign the points in a 3D point cloud to semantic categories and partition them into distinct object instances. It has many obvious applications for outdoor scene understanding, from city mapping to forest management. Existing methods struggle to segment nearby instances of the same semantic category, like adjacent pieces of street furniture or neighbouring trees, which limits their usability for inventory- or management-type applications that rely on object instances. This study explores the steps of the panoptic segmentation pipeline concerned with clustering points into object instances, with the goal to alleviate that bottleneck. We find that a carefully designed clustering strategy, which leverages multiple types of learned point embeddings, significantly improves instance segmentation. Experiments on the NPM3D urban mobile mapping dataset and the FOR-instance forest dataset demonstrate the effectiveness and versatility of the proposed strategy. TOWARDS ACCURATE INSTANCE SEGMENTATION IN LARGE-SCALE LIDAR POINT CLOUDS Binbin Xiang1, Torben Peters1, Theodora Kontogianni1, Frawa Vetterli1, Stefano Puliti2, Rasmus Astrup2, Konrad Schindler1 ============================================================================================================================= § INTRODUCTION Laser scanning has emerged as a main sensing technology to digitise 3D scenes, thanks to its ability to deliver dense 3D point observations with high reliability. The unstructured point clouds it produces are, however, not directly usable as a product (except for visualisation) and must be processed further to extract meaningful entities for mapping and analysis. Panoptic segmentation <cit.> addresses the case where the desired entities are semantically meaningful objects, like individual trees or traffic signs. [As opposed to low-level primitives without semantic meaning, such as salient keypoints or planar surfaces.] Panoptic segmentation is a generic and versatile processing step that may be useful across many different fields. In the context of street scenes, it facilitates scene understanding and mapping at the level of objects, like buildings, traffic signs, pedestrians, etc. <cit.>, which in turn supports applications from urban planning to autonomous vehicles. In forest regions, panoptic segmentation can localise and delimit individual trees, which in turn supports applications like resource management, environmental protection and ecological restoration <cit.>. Large-scale outdoor point clouds pose particular challenges for panoptic segmentation. Besides common problems of point cloud processing, such as occlusions, moving objects and a large range of object scales and point densities <cit.>, an important issue is the lack of natural “processing units”: unlike indoor scans that can be processed on a per-room basis <cit.> or panoramic scans from robotic systems that can be processed on a per-scan basis <cit.>, there is no natural way of dividing an outdoor scene into independent subsets. A specific difficulty of panoptic segmentation is the requirement to separate objects of the same category, which can be considerably harder than only assigning semantic labels to points, as in the case of trees with overlapping crowns. 
Modern panoptic segmentation techniques are often built upon a 3D deep network backbone that extracts per-point features, followed by network branches that segment the points into semantic categories and into object instances, based on those features. The backbone network is not the focus of this paper. We treat it as a plug-in module of our overall network that ingests a point cloud and returns a feature vector of fixed length for every point. Multiple well-proven, trainable feature extractors exist for the task <cit.>. Semantic segmentation also has reached a certain level of maturity and can be regarded as a commodity. Technically, the associated network branch is a classifier that maps the feature representation to a list of (pseudo-)probabilities per point and is typically trained by minimising the cross-entropy loss. We follow that practice, but do not deeply delve into the details. The focus of the present paper is on the instance segmentation branch, arguably the least explored part of the problem and the current performance bottleneck. There are two different strategies to identify object instances in point clouds. §.§ Top-down instance detection The top-down approach first performs object detection to obtain a set of bounding boxes around 3D object candidates. Then the points inside each box are separated into points on the object and points on the background with a binary classifier. The quality of such methods largely depends on the object detection step. The earliest attempts at instance segmentation were top-down methods <cit.>, following the success of Mask-RCNN <cit.> in the image domain, but later they were surpassed by bottom-up methods (see below). There is recent evidence that in certain types of (indoor) scenarios the top-down approach is competitive or even superior <cit.>. Here, we do not further investigate the top-down strategy for two reasons: (1) For outdoor scenes, bounding box detectors tend to work well only for a small number of categories, especially pedestrians and vehicles <cit.>, whereas they often miss small objects like bollards, and objects that have greatly varying shape and aspect ratio, for instance trees. (2) Outdoor mapping point clouds cannot be split into natural entities like rooms, instead they have to be subdivided into arbitrary, computationally manageable chunks using a sliding window or random sampling. Consequently, many objects – especially large ones – are cut into parts and only partially visible in each chunk, making them hard to find for an object detector based on global shape and layout. §.§ Bottom-up instance grouping The bottom-up strategy aims to equip each individual point with an instance-sensitive feature representation, such that instances can be found by clustering the points in the associated feature space. These (learned) instance features are computed with the help of a neural network, based on the point coordinates and/or the backbone features <cit.>. A natural feature to find instances is the offset from the point to the instance centre, in the spirit of the (generalised) Hough transform. An important finding in this context was that the unsupervised clustering step is, by itself, rather unreliable. To address the issue, PointGroup <cit.> proposed to run multiple clustering variants and obtain a redundant set of clusters. 
The quality of these instance candidates is then estimated with a (learned) ScoreNet, such that they can be sorted by their scores and pruned to an optimal set of instances with non-maximum suppression (NMS). The principle to let multiple candidate segmentations compete, and to thereby benefit from the complementary strengths of different clustering methods, proved to work very well and sparked a series of follow-up works that further explored the idea. MaskGroup <cit.> clusters at different spatial scales, and SoftGroup <cit.> keeps soft semantic labels, so as to enable clustering across different categories and rectify semantic segmentation errors. HAIS <cit.> adds a MaskNet after the clustering step, which examines each individual cluster and aims to detect and remove points that do not belong to the object instance. <cit.> is also based on instance proposals, and refines them by modelling their relations with a graph neural network, which is again more suitable for complete, self-contained target areas like indoor rooms. <cit.> construct a cluster hierarchy and traverse the associated tree to generate proposals, which are then again assessed with a ScoreNet. For the present study we also build on the PointGroup principle. We note that, contrary to most other existing work, it has also been shown to work in outdoor settings. For completeness, we mention that directly clustering points into instances in 3D scene space – arguably the most obvious strategy – does not work well for many mapping tasks. For compact, well-separated objects like vehicles on the road this strategy can work quite well <cit.>, but it tends to fail as soon as objects are located in close proximity, such as tightly parked cars; or even touch, like the crowns of nearby trees. There is a consensus in the literature that in such situations a-priori knowledge about the objects' shapes and layouts is required, e.g., <cit.>. Conveniently, that knowledge is also needed for semantic segmentation, so it can be derived from the same latent features with little computational overhead. Recently, transformer-type neural networks were applied for instance segmentation <cit.>, following a trend in the 2D image domain <cit.>. The principle is to replace the explicit instance feature extraction and clustering step with instance queries, based on the attention mechanism of the transformer architecture. So far these methods have only been demonstrated on indoor datasets. They appear to be particularly successful in terms of detecting instances, whereas the per-point segmentation performance is on par with PointGroup-style methods. In practical terms, the learned proposal generator is rather elegant, but comes at the cost of significantly higher memory demand. When processing densely scanned outdoor point clouds, GPU memory is the limiting factor even for conventional, convolution-based methods. Hence, we exclude transformers from this study, but note that adapting them to outdoor mapping is an interesting future direction. §.§ Contributions We have developed an effective deep learning-based workflow for panoptic segmentation of large outdoor mapping point clouds, based on the bottom-up grouping strategy. Design choices for instance segmentation are carefully evaluated and analysed in a series of experiments on two different data sets, one showing streets scenes (NPM3D) and one dense forests (FOR-instance). 
Our main findings are: (1) The often-used grouping based on centroid offsets struggles to separate object instances located close to each other. A learned feature embedding, trained with a contrastive loss to discriminate instances, can often separate such instances. (2) On the other hand, offset vectors more accurately separate objects that have similar local shape, but are located far from each other. We find that the best results are achieved by combining both methods and letting the subsequent ScoreNet select from both proposal sets. (3) Clustering based on embedding features, which does not depend directly on the semantic segmentation result, reduces mistakes caused by incorrect semantic labels and thereby improves the completeness of the affected instances. (4) A simple block merging strategy is sufficient to combine the segmentations of local subsets into a coherent large-scale panoptic segmentation map. (5) State-of-the-art methods for 3D panoptic segmentation work well even for challenging tasks like separating tree crowns in dense forests. We expect those methods to be more widely adopted for practical applications in the near future.
§ METHOD
The overall pipeline of our proposed method, shown in Figure <ref>, consists of three main components: an input data generator (Section <ref>), a deep neural network (Section <ref>), and a post-processor (Section <ref>).
§.§ Input data generator
As a first step, the entire point cloud is voxel-grid subsampled to sparsify overly dense regions and achieve a homogeneous (maximum) point density. The voxel size for the filter depends on the scene. In our implementation we use 12×12×12 cm^3 (579 points/m^3) for urban scenes and 20×20×20 cm^3 (125 points/m^3) for forest scenes, see Table <ref>. These values were chosen based on extensive ablation studies performed in <cit.>. Even so, outdoor scans are far too large to process as a whole on existing hardware. As an example, a single scene from the NPM3D dataset <cit.>, covering a stretch of road of length ≈600 m, has several million points. It is therefore necessary to process the data in local blocks. When applying the trained network to new data, these blocks can be sampled in sliding-window fashion. During training, we simply sample them randomly. There are different ways to define the local neighborhood of points that constitutes a block around a sampled location; popular choices include cubic boxes, spheres or cylinders. We opt for the cylinder, for the following reasons: (1) It avoids cutting objects along the vertical, which on the one hand improves the handling of long vertical objects such as street lamps or trees, and on the other hand simplifies block merging to a 2D problem (Section <ref>). (2) It ensures computational efficiency. When working with large point clouds, the computational bottleneck is not error back-propagation, but rather geometric queries like finding neighbours. Cylindrical neighbourhoods are compatible with efficient algorithms like fast radius search (normally implemented via spatial search structures such as KD-trees, and available in the Torch-Point3D framework <cit.>). To sample training cylinders in a way that ensures sufficient coverage of rare categories, we take inspiration from KPConv <cit.>: the location of the vertical cylinder axis is found by randomly sampling one of the training data points, with sampling probabilities proportional to the square root of the inverse class frequencies, P_i∝√(1/N_i).
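In Python, this class-balanced sampling of cylinder axes can be sketched as follows; the sketch is illustrative only, and the array names and per-point label format are assumptions rather than our actual implementation.

    import numpy as np

    def sample_cylinder_centres(points_xy, point_labels, n_blocks, seed=0):
        # Draw cylinder-axis locations with P_i proportional to sqrt(1 / N_i) of each point's class.
        # points_xy: (N, 2) horizontal coordinates; point_labels: (N,) semantic class ids.
        rng = np.random.default_rng(seed)
        classes, counts = np.unique(point_labels, return_counts=True)
        weight_per_class = dict(zip(classes, np.sqrt(1.0 / counts)))
        weights = np.array([weight_per_class[c] for c in point_labels], dtype=float)
        weights /= weights.sum()
        idx = rng.choice(len(points_xy), size=n_blocks, replace=True, p=weights)
        return points_xy[idx]  # (n_blocks, 2) cylinder-axis positions in the ground plane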
After sampling a fixed training set of many cylindrical blocks, the points' (x,y)-coordinates in each of them are shifted to have their origin in the cylinder centre. Moreover, various data augmentation techniques are applied: isotropic, additive Gaussian random noise on the point coordinates (jittering), random rotations around the cylinder axis, random anisotropic scaling by factors s∈[0.9,1.1], and random reflection along the y-axis. At test time the cylindrical blocks are sampled regularly with fixed step size along the (x,y)-grid so as to ensure even coverage of the point cloud, see the illustration of the input data generator in Figure <ref>.
§.§ Network architecture
As feature extraction backbone we use the Minkowski Engine <cit.>, which offers a favourable trade-off between performance and computational cost <cit.>. In a nutshell, it is a 3D U-Net that operates on the voxelised point cloud with sub-manifold sparse convolutions. The resulting per-point feature vectors of length 16 serve as input for three output branches: one that estimates point-wise semantic labels, one that regresses offsets to the instance center, and one that extracts instance-discriminative embedding features. The semantic segmentation branch consists only of a multi-layer perceptron (MLP) with a single hidden layer with softmax activations and outputs semantic class probabilities for each point. That branch is trained with a standard cross-entropy loss. Semantic labels are obtained by taking the argmax over the predicted category probabilities. Points assigned to categories that cannot be divided into well-defined instances (so-called "stuff" categories, like for instance "road" or "building facade") are ignored during instance segmentation. The centre offset branch, advocated by several studies about instance segmentation <cit.>, operates in 3D scene space: it takes as input the latent encoding extracted by the backbone and, for each point, predicts a 3D offset vector that would take that point to the estimated instance centre. I.e., if the predictions were perfect then shifting all points by their offsets would collapse each instance to a single point. The corresponding loss function is a combination of (1) the cosine distance between the true and predicted offset vectors and (2) the L_1 distance between their endpoints. The instance embedding branch also ingests the latent encoding from the backbone. Instead of trying to find the geometric object centre, it embeds each point in a 5D feature space that is optimised to discriminate between instances. The embedding is supervised with a contrastive loss function that favours small distances between points from the same instance and large distances between points from different instances. Importantly, the embedding space has more than three dimensions, hence it has some spare capacity to represent object properties beyond being a compact cluster around a 3D centre point. We found that the two ways of measuring point-to-instance affinities, either by regressing explicit centre offsets in geometric space <cit.> or by contrastive embedding <cit.>, complement each other. In fact, instance segmentation based on local 3D point configurations must balance different a-priori expectations.
On the one hand, points that form a compact structure surrounded by empty space are indeed likely to belong to the same object, and that situation is easy to encode in the form of centroid offsets – e.g., for a local region on an isolated car one can often guess the direction to the object centre just from the local surface shape. On the other hand, when objects are located near each other it becomes important to look past proximity – e.g., for a region of a forest canopy it is often easy to say which tree it belongs to, but nevertheless difficult to point to a clear object centre. This is why we employ both strategies. The predicted offsets are simply applied to the 3D point coordinates to shift them to the estimated object centre, and then clustered into instance candidates by region growing with a distance threshold. Note, mapping the latent features to 3D offset vectors discards the semantic category information originally contained in the features. Hence, the clustering is constrained to only include points from the same category in a candidate instance. In the 5D embedding space, where distances do not have a direct geometric meaning, candidates are found with mean-shift clustering. From the redundant set of instance candidates, we want to retain the subset that best explains the scene. To that end we train a network branch to predict how well each candidate matches a ground truth instance. This ScoreNet regresses the highest expected IoU between the candidate and any of the actual objects. It is a small 3D U-Net model on top of the backbone features, followed by max-pooling and a fully connected layer that outputs a scalar score between 0 and 1 per candidate. §.§ Post-processing After scoring we are left with an over-complete list of instance candidates, each consisting of a subset of the 3D point cloud, and equipped with an estimate of its goodness-of-fit to some actual object instance. These are post-processed into a final set of instances in the following way. First, clusters with very few points (in our implementation, 10; see Table <ref> for a complete list of hyper-parameters) are discarded. Second, we perform non-maximum suppression (NMS) based on the predicted scores to get rid of redundant clusters. Third, clusters with low scores are also discarded. Having obtained our final estimate per cylindrical block, we run block merging to combine them into a single result for the entire region of interest, see Algorithm <ref>. In brief, the block merging re-assigns instance IDs such that they are globally unique, and greedily fuses instances that were split between different blocks. After block merging we have a final segmentation of the voxel-grid subsampled point cloud into semantic categories and into object instances. As a final step, we upsample all labels back from the voxel-gridded point cloud to the complete, original one with the nearest-neighbour method. Instance labels for “stuff” classes that do not have well-defined instances are set to -1. § EXPERIMENTS §.§ Experimental settings Datasets. For our experiments we use two datasets, NPM3D <cit.> and FOR-instance. NPM3D consists of mobile laser scanning (MLS) point clouds collected in four different regions in the French cities of Paris and Lille, where each point has been annotated with two labels: one that assigns it to one out of 10 semantic categories and another one that assigns it to an object instance. 
When inspecting the data, we found 9 cases where multiple tree instances had not been separated correctly (i.e., they had the same ground truth instance label). These cases were manually corrected using the CloudCompare software (https://www.cloudcompare.org, last accessed 03/2023), and 35 individual tree instances were obtained. Our variant of the dataset with 10 semantic categories and enhanced instance labels is publicly available. [https://doi.org/10.5281/zenodo.8118986] The FOR-instance dataset is a recent benchmark dataset from the forestry domain, aimed at tree instance segmentation and biophysical parameter retrieval. The point clouds were collected from drones equipped with survey-grade laser scanners such as the Riegl VUX-1 UAV and Mini-VUX. The dataset covers diverse regions and forest types across multiple countries. For our purposes, we removed points assigned to the category “outpoints” (i.e., partially observed tree instances on region borders), leaving us with only two semantic categories, tree and non-tree (where the latter includes the forest floor). The panoptic segmentation task thus becomes to separate trees from non-trees and to divide the tree class into individual instances. Both datasets have been released only recently. Previous outdoor point cloud datasets either did not provide instance annotations or were too small to train deep neural networks, consequently there are hardly any baseline results to compare to. We have made both the data and our source code[https://github.com/bxiang233/PanopticSegForLargeScalePointCloud] publicly available for future reference. Evaluation Metrics. Semantic segmentation quality is measured by the mean intersection-over-union (mIoU) across all categories. To assess instance segmentation we follow <cit.> and compute the mean precision (mPrec) and mean recall (mRec) over all instances, the corresponding F1-score (harmonic mean of precision and recall), as well as the mean coverage (mCov), defined as the average IoU between ground truth instances and their best-matching instance predictions. We also calculate a variant of mCov that weights instances by their ground truth point count (mWCov). For the combined panoptic segmentation quality we adopt the metrics proposed by <cit.>, segmentation quality (SQ), recognition quality (RQ) and panoptic quality (PQ). Implementation Details. Our source code is based on the Torch-Point3D library <cit.>. Unless explicitly specified for a given experiment, we use the default parameter values listed in Table <ref>. All experiments were conducted on a machine with 8-core Intel CPU, 8GB of memory per core, and one Nvidia Titan RTX GPU with 24GB of on-board memory. §.§ Ablation studies on NPM3D Experiments were conducted on NPM3D to investigate the effects of different hyper-parameters. In all ablation studies, the training portions of Lille1_1, Lille1_2, and Lille2 serve as training set and the test portion of Paris serves as test set. Radius of cylindrical blocks. As explained, we sample local cylindrical regions from the data to keep computations tractable. A larger cylinder radius means more points, and thus more spatial context, and at the same time fewer incomplete objects and boundary effects; but also slower training and inference. Figures <ref>a illustrates the impact of the radius on instance segmentation and on semantic segmentation. 
As a general conclusion, the cylinder radius has little influence on the semantic segmentation quality (in terms of mIoU); apparently a limited amount of local context is sufficient to categorise points. In contrast, too small blocks markedly degrade instance segmentation (measured by the F1-score), confirming the intuition that it relies more on a complete view of object shape and layout. The performance metrics in Figure <ref> refer to end-to-end segmentation performance from a system user view, after cylinder merging and upsampling to the original input point cloud. We point out that increasing the cylinder radius from 8m to 20m doubles the inference time for the complete set from 12min to 24min, and also the training takes roughly twice as long. Mean-shift bandwidth. The discriminative training uses the two margins 0.5 and 1.5, meaning that in theory it should bring the feature vectors of all points on an instance to within 0.5 units of the associated cluster centre, whereas there should be a distance of at least 2×1.5 units between two cluster centres <cit.>. Based on these values we empirically determine the optimal bandwidth of the flat (rectangular) mean-shift kernel, see Figure <ref>b. Indeed, semantic segmentation performance and the closely related panoptic quality peak at a bandwidth of 0.5, whereas instance segmentation peaks at a slightly higher value of 0.6. Voxel grid resolution. As expected, the overall trend is that point cloud analysis deteriorates with increasing voxel size (stronger down-sampling). As can be seen in Figure <ref>c, instance segmentation does not benefit from overly dense sampling and reaches its best performance at a voxel size of 12×12×12cm^3. Obviously, smaller voxels significantly increase the computational cost of both training and testing, Figure <ref>d. We note that the trade-off between resolution, cylinder radius and computational cost depends on the scene properties, cf. Figure <ref>e, which is why we chose different values for NPM3D and FOR-instance (Table <ref>). §.§ Panoptic segmentation results for NPM3D The focus of the present paper is on how to best segment object instances. We compare different designs of the instance clustering branches in Table <ref>. Setting I corresponds to only the discriminative embedding, without predicting and clustering 3D centroid offsets. Conversely, setting II only clusters based on the predicted centroid offsets and does not learn a discriminative embedding. Setting III denotes the configuration advocated by PointGroup <cit.>, where the clustering based on centroid offsets is complemented by clustering also the raw 3D points (before shifting them by the offset vectors), and the best instances are selected from the resulting, redundant set of clusters with a ScoreNet. Setting IV is the combination of centroid-based clustering and embedding feature clustering (again followed by a ScoreNet), as described above. Finally, setting V additionally includes clusters based on raw point coordinates, as advocated by <cit.>, on top of the two cluster sets of setting IV; thus further enlarging the candidate pool, but also making score-based pruning harder. All results were computed with four-fold cross-validation: in turn, each sub-region serves as test set once, whereas the other three are used for training. Then the predictions for all four regions are concatenated to obtain labels for the test dataset, and the performance metrics are calculated. The metrics for all five settings are given in Table <ref>.
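As a side note to the voxel-grid ablation above, the subsampling itself is straightforward; the stand-alone sketch below keeps one centroid per occupied voxel. Our pipeline uses the corresponding Torch-Point3D transform, so this is only an illustration of the idea.

# Minimal voxel-grid subsampling sketch: one centroid per occupied voxel.
import numpy as np

def voxel_subsample(xyz, voxel_size=0.12):
    keys = np.floor(xyz / voxel_size).astype(np.int64)
    # group points by voxel index triple
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    centroids = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(centroids, inverse, xyz)
    np.add.at(counts, inverse, 1.0)
    # 'inverse' maps every original point to its voxel, which is what the
    # final nearest-neighbour label upsampling relies on conceptually
    return centroids / counts[:, None], inverse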
It can be seen that the proposed setting IV yields the best balance between precision and recall for instance segmentation (F1-score), as well as the best semantic segmentation (mIoU), and consequently also the highest PQ values. Clustering based solely on either offsets or embedding features significantly reduces precision. The clustering variant introduced by PointGroup, based on raw point coordinates, noticeably reduces recall. It appears that, for quite a number of object instances, the scan point distribution is too diffuse to delineate them. The results for setting V show that instance candidates based on raw points, surprisingly, not only miss many points but even distract from better, competing candidates. It appears that these poorly matching clusters inject noise into the ranking procedure. In other words, complementary methods to diversify the candidate set are only beneficial if the additional candidates are of sufficient quality. Figure <ref> illustrates the differences qualitatively for four representative examples. In Area 1, adjacent trash cans challenge the instance segmentation. The centroid offset prediction fails to separate them, whereas discriminative embedding succeeds, as can be seen in the t-SNE projection of the 5D embedding space. Area 2 highlights a case where PointGroup suffers from its hard assertion that only points from the same semantic category can be clustered together. This over-reliance on the category labels means that instance segmentation cannot correct semantic segmentation errors, as on the streetlights marked by a black circle. Area 3 shows an example for the particularly challenging category of trees, which have large shape variability and are not delimited by well-defined surfaces. When they are located close to each other, the centroid method becomes unreliable, whereas they can still be separated in the discriminative embedding. Area 4 illustrates the opposite case, where the embedding features are unable to separate two cars, which sometimes occurs especially when there are many instances in close proximity. But since the cars are well enough separated, the centroid offsets are correctly predicted for most of their points and rectify the mistake. §.§ Evaluation on FOR-instance The FOR-instance dataset defines a canonical train/test split, to which we adhere. Within the training portion, we randomly set aside 25% of the data files as our validation set to monitor generalisation and hyper-parameters. As for NPM3D, we concatenate the results of all test sets and compute the performance metrics from that overall segmentation result. Ablation of voxel size. Unsurprisingly, the rather simple segmentation between tree and non-tree points is hardly affected by the voxel grid filtering. But also instance segmentation performance is remarkably stable across a wide range of voxel sizes, Figure <ref>e. It reaches its maximum for voxels with side length 20 to 25cm, but even at 40cm the panoptic quality PQ drops less than 2.5 percent points under the maximum. Also very small voxels degrade performance only a little (likely because of diminished spatial context information, due to empty voxels), but significantly increase the training time. From our results, we do not see a reason to decrease the voxel size below 20×20×20cm^3 for this application. Ablation of cylinder radius. As shown in Table <ref>, expanding the radius of the input blocks from 4m to 8m improves all performance metrics. 
The main reason is that the bigger radius increases the chance of covering trees completely with a single block, leading to better instance segmentation. For forestry applications we therefore recommend using rather large neighbourhoods, despite the significantly longer training time. We also compare our preferred setting, with embedding and offset branches, block radius 8m and voxel size 20cm, to our implementation of the PointGroup method, see Table <ref>. We observe a marked improvement of all metrics with our proposed version, with a difference of over 17 percentage points in F1-score. It appears that in the forest setting, where object centroids are hard to estimate and object boundaries are diffuse, the discriminative embedding has a clear advantage over clustering methods that operate in 3D geometric object space. Figure <ref> shows example results from different locations in the FOR-instance dataset. We note that both tested methods produce surprisingly compelling instance segmentations in most cases, across a range of forest characteristics. Still, our mixed clustering approach consistently yields results on par with or better than PointGroup, see the differences marked with white ellipses. FOR-instance was released only recently, and we are not aware of any other published results on the dataset. From the user perspective, we note that our pipeline achieves satisfactory instance segmentation without region-specific parameter tuning or post-processing, challenges commonly reported in the context of tree segmentation, e.g., <cit.>. § CONCLUSION We have studied the bottom-up approach to panoptic segmentation of outdoor 3D point clouds. We found that the bottleneck is the correct clustering of points into instances, and have constructed a pipeline with two complementary segmentation branches: one that is based on 3D centroid prediction and is well-suited for well-separated, compact objects; and a second one that is based on a discriminative embedding of the 3D points and better handles (nearly) contiguous objects and fuzzy object borders. In experiments on two different datasets, a contemporary panoptic segmentation pipeline with a carefully designed instance clustering stage was able to reach F1-scores of 74% for objects in an urban mapping context and, remarkably, 68% for trees in dense forest plots.
http://arxiv.org/abs/2307.02262v1
20230705130048
Low-energy Ion Beam Diagnostics: An Integrated Solution
[ "A. Adıgüzel", "H. Çetinkaya", "Ş. Esen", "D. Halis", "T. B. İlhan", "A. Kılıçgedik", "S. Oğur", "S. Öz", "A. Özbey", "V. E. Özcan", "N. G. Ünel" ]
physics.acc-ph
[ "physics.acc-ph", "hep-ex" ]
A. Adıgüzel^1,10, H. Çetinkaya^2, Ş. Esen^1, D. Halis^3, T. B. İlhan^4, A. Kılıçgedik^5, S. Oğur^6, S. Öz^7, A. Özbey^8, V. E. Özcan^9,10, N. G. Ünel^11,9
[1] İstanbul University, Department of Physics, İstanbul [2] Kütahya Dumlupınar University, Department of Physics, Kütahya [3] Yıldız Technical University, Department of Physics, İstanbul [4] İstinye University, Department of Physics, İstanbul [5] Marmara University, Department of Physics, İstanbul [6] ADAM SA, Geneva [7] Boğaziçi University, Department of Mechanical Engineering, İstanbul [8] İstanbul University Cerrahpaşa, Department of Mechanical Engineering, İstanbul [9] Boğaziçi University, Department of Physics, İstanbul [10] Boğaziçi University, Feza Gürsey Center for Physics and Mathematics, İstanbul [11] University of California Irvine, Physics Department, Irvine
Low-energy Ion Beam Diagnostics: An Integrated Solution
August 1, 2023
======================================================= High gradient accelerator injectors have been widely studied at world-leading accelerator facilities. The demand for high frequency cavities has led the Detector, Accelerator and Instrumentation Laboratory (KAHVELab) in Istanbul to deploy a four-vane Radio Frequency Quadrupole (RFQ) operating at 800 MHz to accelerate 20 keV protons to 2 MeV. The protons from the microwave discharge ion source are transversely matched to the RFQ via an optimized Low Energy Beam Transport (LEBT) line which also contains an integrated measurement station (MBOX). The MBOX is used to measure the proton beam's current and pulse duration, as well as its profile and emittance, upstream of the RFQ. It contains a number of home-built diagnostic tools: a Faraday cup, a scintillator screen and a pepper pot plate (PP). The analysis software, used for the PP photo analysis, was also developed and tested locally. In this note, the design, construction and tests of the integrated measurement station are discussed. The results from various measurements, especially on beam profile and charge, are compared to the simulation predictions. § INTRODUCTION In an ion beamline, it is essential to be able to measure the properties of the beam, especially before it enters an accelerating cavity. This is even more critical for a low energy ion beamline, as the Radio Frequency Quadrupole, which could be damaged by an unmatched beam, is costly in terms of financial resources and time. Therefore, measuring the ion beam properties (such as its current, profile and emittance) is critical for its acceleration and its downstream use. This makes an accurate and effective diagnostic station a necessity for the entire beamline. An additional example can be found in the field of nuclear medicine: it is very important to correctly measure the cross-sectional area of the beam, the number of particles passing through (beam current) and the beam emittance to determine the area and depth of the penetrating radiation that will define the treatment. Here, we present our measurement station, custom designed and built based on previous experience <cit.>. The measurement station is currently installed on the proton test beam at KAHVELab, Istanbul, Turkey <cit.>. The proton beamline is being constructed as part of an ongoing R&D program with the short-term goal of obtaining a PIXE (Proton Induced X-ray Emission) measurement setup. Given PIXE's efficiency for lower Z-materials, i.e.
Z=11 to 32, and lower impact onto the studied material, building a high frequency RFQ enabling 2 MeV beam energy with a charge at the order of μA is of great usage <cit.>. Furthermore, the long-term goals of the project range from building a medical proton accelerator to production of ion implant machines for semiconductor production. A generic goal of this R&D program is to use domestic resources as much as possible, for the design and production of related parts (such as the beamline components, magnets, etc.) to increase the high technology awareness in the local manufacturers. Additionally, students and experts will be trained on the job, and considerable experience will be gained on both design and production on the beamline components and beam monitoring devices. Moreover, since the computer control of the whole system is to be carried out with home-grown software, the expertise gained in system control could also be reflected to other ongoing national projects such as the Turkish Accelerator Center <cit.>. Therefore, the particular diagnostics box study presented in this note has two objectives: 1) to design and produce a test station for measuring the properties of the ion beam, and to develop the associated computer software to monitor and control the equipment. The control software, using some degree of automatization, is expected to minimize the errors and the needs for human intervention to the detector set while measuring the beam properties. 2) to use the developed hardware and software in a realistic beamline and to develop the necessary software for the analysis of the acquired data. Accordingly, this paper starts by presenting the proton beamline and its components. The next section focuses on the details of the measurement station and the last section gives an outlook and conclusions. § PROTON TESTBEAM AT KAHVELAB The proton beamline design consists of an ion source, a low energy beam transport (LEBT) section and a radio frequency quadrupole (RFQ) at KAHVELab. Such a simple setup is the first stage of almost all modern ion accelerators such as the Linac4 injector of the Large Hadron Collider complex at CERN <cit.>. A view of the KAHVELab proton beamline installation can be seen in Fig. <ref>. §.§ Ion Source and LEBT Line A microwave discharge type ion source (MDIS) was selected for its low production cost, long lifetime, high beam current, reliability, low maintenance requirements. The MDIS setup consists of three main parts: The RF section, the plasma chamber and the extraction system, respectively. The details of MDIS design and production can be found elsewhere <cit.>. After the Ion Source, the proton beam line continues with a Low Energy Beam Transport (LEBT) line. The aim of the LEBT line is to transmit the proton beam from the extraction point of the ion source to RFQ entrance point in a most efficient way. The LEBT line shown in Fig. <ref>, was designed using TRAVEL<cit.> and DemirciPro<cit.> software programs. The solenoid effective length are imported into the dynamics code, where the solenoid effective length is calculated as: ℓ_eff = ∫_z_1^z_2B_z^2 dz/B_eff^2, with B_eff=B_max that refers to field at the solenoid center, i.e. the maximum longitudinal solenoid field component: B_z. Regarding the hard edge solenoid assumption and the solenoid field map, this assumption has resulted a better agreement with the beam measurements than the direct interpretation of the Ampère's law. In that figure, the beam moves from left to right. 
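As an illustration of the effective-length definition above, the integral can be evaluated numerically from a tabulated on-axis field map; the file name and the two-column format (z in m, B_z in T) assumed below are placeholders, not the actual field-map files used for the LEBT design.

# Numerical sketch of l_eff = Int(B_z^2 dz) / B_eff^2 for a tabulated field map.
import numpy as np

def effective_length(z, Bz):
    """z and Bz are 1D arrays sampled along the solenoid axis."""
    B_eff = np.max(np.abs(Bz))                 # field at the solenoid centre
    return np.trapz(Bz**2, z) / B_eff**2

z, Bz = np.loadtxt("solenoid_fieldmap.txt", unpack=True)   # hypothetical file
print(f"effective length = {effective_length(z, Bz) * 100:.2f} cm")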
The leftmost vertical solid line represents the beam extraction from the IS and the rightmost dashed line represents the coupling to the RFQ. The total LEBT length is about 165 cm. Although the details of the LEBT line are presented elsewhere it is worth mentioning that the LEBT line has two movable solenoids for beam focusing (shown as blue rectangles) and two steerer magnets (shown as smaller gray rectangles) for beam guiding. A diagnostics station is placed between the two steerer magnets. The red line represents the RMS beam envelope minimized at the RFQ entrance. The measurement results for a current scan on the focusing solenoids is shown in Figure <ref>. As expected, the first solenoid has a much wider magnetic field magnitude range to cover any imperfections of the ion source extraction system whereas the second solenoid allows a finer control over a much smaller range to match the RFQ input parameters. §.§ RFQ The Radio Frequency Quadruople (RFQ) based on an in-house designed and to-be-locally-produced is the last part of the proton beamline. It operates at 800 MHz to achieve the target energy. This RFQ is 4-vane cavity to accelerate the 20 keV H+ beam extracted from plasma ion source to 2 MeV. It has been optimized to accelerate a proton beam of 1 mA, in a 98 cm cavity with an acceleration efficiency of about 30 % with the lowest RF power. The details of the RFQ can be found elsewhere <cit.>. § DIAGNOSTIC STATION The characterization and the diagnostic of the ion beam is achieved by the so called “measurement box” (MBOX) installed between the two solenoid magnets in LEBT line as shown in Figure <ref>. The MBOX, contains three different detectors for beam charge, profile and emittance measurements which are the Faraday Cup (FC), Scintillation Screen (SS) and Pepperpot (PP) Plate. Furthermore, the FC is also used to acquire information on the pulse duration regarding the signal generated in the FC. To keep the MBOX as compact as possible its vacuum vessel is designed and manufactured with the following two requirements: 1) the volume under vacuum should be kept as small as possible but be large enough to host the relevant detectors 2) the wall thickness should be reduced as much as possible for a minimum total weight bit the vessel should be still robust enough to withstand the atmospheric pressure. The final outcome was a stainless steel polygonal box of 5 mm wall thickness with dimensions of 379×130×235 mm. The lower section of the box weighs about 15 kg and the top cover about 10 kg. The design drawings of the MBOX body can be seen in Figure <ref> left side. The connection to the beampipe is achieved with KF50 type connectors. One lateral side of the MBOX houses the vacuum port, whereas the other one contains the glass view-port, and vacuum gauge port (KF40). Its upper cover houses a two channel feed-through connector (KF25) and three 3 pneumatic actuators to move the associated detectors into and out of the beamline. The Figure <ref> right side shows the MBOX with the top cover and the actuators attached. In this image, the MBOX is shown from the other side and its lateral side is rendered transparent to show the detectors, the vacuum pump connector (ISO100) can also be seen. A remote-controlled gate valve is also installed upstream of the MBOX to separate the ionization chamber section from the beam diagnostics detectors, allowing for intervention with minimal disturbance to the beamline. 
Inside the MBOX, the most upstream actuator is connected to the emittance meter, the middle one to the screen and the plane mirror facing the view-port, and the downstream one to the Faraday cup. The vacuum sealing between the lower and upper parts of the vessel is achieved with a Viton o-ring. With a single turbo-molecular vacuum pump, a pressure of 10^-6 mbar was easily achieved using an independent setup as in Figure <ref> right side. §.§ Detectors for Beam Diagnostics The term beam diagnostics usually refers to the measurement of the 6-dimensional beam properties such that their dynamics can be reconstructed and the beam parameters can be monitored through the accelerator. For the KAHVELab LEBT line in particular, diagnostics refers to the measurements of the beam transverse profile, beam emittance and the beam current as well as pulse duration. Emittance is the most involved of these; it can be measured using various methods such as the quadrupole scan technique <cit.>, slit scanning technique <cit.> or laser wire technique <cit.>. To benefit from past experience and to be able to measure both the X and Y components simultaneously, in our setup the pepperpot technique is used <cit.>. Although this technique will be discussed in the following text, it should be mentioned that it has the advantage of monitoring the transverse beam emittance and profile in quasi-real time. Figure <ref> contains a photo of the three detectors of the MBOX as they are attached to the actuators discussed above. In this view, the beam moves from left to right and the detectors are the pepperpot plate (PP), the scintillator screen (SS) and the Faraday cup (FC). The remainder of this section discusses these detectors individually. §.§.§ Faraday Cup The beam current measurements are carried out using a Faraday cup (FC). In the literature, Faraday cups can be classified into two groups: electrically biased or magnetically filtered. In this setup, an electrically biased Faraday cup was preferred to prevent reading errors that may originate from beamline magnetic elements. It is important to prevent the escape of secondary electrons and ions from the Faraday cup, as either effect would bias the beam current reading towards higher or lower values <cit.>. The maximum energy transferred from a 20 keV proton beam to secondary electrons as a result of a Coulomb collision is calculated as 43.59 eV, which corresponds to a -43.59 V bias voltage needed to keep the secondary electrons inside the Faraday cup <cit.>. The absorber material of the Faraday cup was selected as copper. The minimum necessary cup thickness can be predicted by calculating the proton and electron ranges inside the absorber. The range of the proton beam in copper was calculated using the SRIM program <cit.> and the range of the electron beam in copper was taken from the NIST ESTAR database <cit.>. A 20 keV proton has a range of 0.1171 μm in copper, and a 10 keV electron (the lowest energy tabulated in the NIST ESTAR database, well above the calculated secondary electron energy) has a range of 0.514 μm. However, a much larger copper thickness of 8 mm was chosen to make the assembly easier and to distribute the heat load of the incident beam through the FC. The secondary electron emissions were simulated using IBSIMU and CST <cit.>, <cit.>. IBSIMU performs simulations independent of the material, whereas CST provides different secondary electron emission models depending on the absorber material.
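The bias value quoted above can be cross-checked with simple two-body kinematics: the maximum energy a proton of kinetic energy E_p can transfer to a free electron in a head-on collision is 4 m_e m_p/(m_e+m_p)^2 E_p. The short sketch below reproduces the quoted 43.59 eV to within about 0.1 eV (the small residual difference presumably comes from the more detailed treatment in the cited calculation).

# Back-of-the-envelope check of the maximum energy transfer from a 20 keV
# proton to a free electron, which sets the scale of the suppression bias.
from scipy.constants import electron_mass as m_e, proton_mass as m_p

E_p = 20e3                                       # proton kinetic energy in eV
E_max = 4 * m_e * m_p / (m_e + m_p)**2 * E_p
print(f"max energy transfer ~ {E_max:.1f} eV -> bias of about -{E_max:.0f} V")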
Secondary electron yield data for copper was taken from literature for a proton beam of 5 to 20 keV energy and imported in CST for simulation studies<cit.>. It was planned to use the same Faraday cup in different locations of the proton line which are at the exit of the ion source, in the Measurement box and at the end of the LEBT line. Therefore to account for the the possibility that the beam diameter may be different at different locations of proton line, simulations with pencil beams with diameters varying from 10 to 40 mm with a 5 mm increments were performed. Backscattered protons were not taken into account in these simulations. Proton backscattering from the copper surface was examined by using TRIM code <cit.> and it was found out that backscattered protons are increasing with increasing beam angle with respect to surface. A conic shape of 35 degree cone angle was chosen to limit the current reading errors related with backscattered protons. This setup was found to correspond to 5.23% backscattered protons for a 20 keV incident pencil proton beam. The maximum percentage of the backscattered protons should be limited as it constitutes a source of error. To retain most of these backscattered protons in the Faraday cup, various scenarios have been considered in IBSIMU and CST simulations, and the design has been optimized accordingly. Simulation results of IBSIMU and CST can be seen for different biasing voltages in Figure <ref>. Results from these two independent software programs are in agreement with each other. The final design of the FC can be seen in Figure <ref>. As the average beam power is expected to be in the order of 4 W, a cooling system was not considered for this FC. Additionally its multiple planned locations in the beamline would make multiple cooling setups. Therefore, this FC is designed for making measurements during brief periods. The FC is constructed using copper of 8 mm thickness, with an inner radius of 27.5 mm and length of 120 mm including the bias ring. Faraday cup and the bias ring are separated electrically using a Teflon disc of 2 mm thickness. In the final setup the FC is to be at ground and the bias ring voltage can vary between -50 V and -200 V. Also a Faraday cup vacuum cover was designed to use Faraday cup outside the Measurement box. FC's copper body was covered with Teflon to prevent the contact with metal surfaces of the MBOX and also with the vacuum cover. During data taking the FC output, at the right side of Figure <ref>, was connected to an oscilloscope over a 10 kOhm resistor for beam current measurements. §.§.§ Scintillation Screen The scintillation or phosphorus screens (SS) provide a direct method for observing and recording transverse beam profiles along the accelerator beam lines. The single image captured on these screens provide both X and Y profile information and when coupled with the Pepper Pot plate it yields beam emittance. The two most commonly used phosphor screen types for image intensifiers are P43 and P46. Surface-coated phosphor screens are reusable and are not as sensitive as plastic scintillator screens: they can be repeatedly exposed to luminescence. The phosphorus screens are also preferred in terms of cost. The bill of materials required for constructing a phosphorus screen is relatively simple: a glass base, fluorescent powder, and isopropyl alcohol (99.9% purity). The fluorescent powder can be obtained from pre-made fluorescent lamps such as Philips Master TL-D840 and TL-D830. 
These lamps are preferred due to the type of phosphor coating used on their surfaces. The warm white and cool white phosphor colors have a color temperature range of 3000-4000 K. The spectral irradiance values align with a standard bell curve, centered around 530-630 nm. These wavelength ranges correspond to those needed in the beam line. The final scintillation screen was therefore made locally in the laboratory, following a well defined procedure, on a 300 μm thick glass square of 60×60 mm using fluorescent powder. Such a detector can be seen in Figure <ref> left side before its installation in the MBOX. A mirror, mounted at a 45-degree angle behind the phosphorus screen, projects the image of the beam through the vacuum window into the camera. The data taking setup can be seen in Figure <ref> right side. During the experiments the external lights are turned off to eliminate background light reaching the camera. This detector setup is also used in the emittance measurement, which will be discussed in the next subsection. §.§.§ Pepperpot Plate The main components of the emittance measurement are the pinhole (pepperpot) plate, a phosphorus screen, a plane mirror at a 45 degree angle, and a camera attached to the vacuum window. The location of the PP with respect to the SS, and the number, spacing and diameter of the holes, were optimized using python simulations discussed in the measurements and analysis section. The optimization goal in the simulated experiments was to reduce the emittance measurement error. Although the ideal pinhole plate material is tungsten, a 250 μm thick stainless steel plate was selected for its lower mask production cost. A 21×21 grid of pinholes with a diameter of about 100 μm, spaced 2 mm horizontally and vertically, covers an area of 50 × 50 mm. The holes are made by the use of a fiber laser producing bursts of pulses at 50 GHz repetition rate (20 ps between subsequent pulses). The burst duration and pulse energy were varied, but were roughly 100 ns and 10 nJ, respectively. The burst repetition rate was about 1 MHz. The pulse duration was in the range of 100-200 fs. [The authors would like to thank the UFO lab, Bilkent University, Ankara, Turkey for their invaluable contribution in building the pepperpot mask.] The uniformity of the hole diameter has been checked on various production runs to optimize the laser parameters. Figure <ref> contains such control measurements for three different samples. For the final production, the so-called "sample-4" was used as it has the most uniform diameter distribution. In order to prevent thermal deformation of the mask, it is sandwiched between two aluminum frames of 500 μm thickness each; these support frames prevent any deformations that may occur on the surface of the perforated plate. The distance between the SS and the pepper pot mask was set to 12.35 cm based on simulations; since the mask is mounted on a rail and can slide, this distance can be adjusted, again guided by simulations, for less or more divergent beams. §.§ Control and Readout The KAHVELAB proton beamline control and monitoring system is responsible for a high-voltage power supply, four current sources, two mechanical vacuum pumps, two turbo-molecular vacuum pumps, various vacuum gauges, the vacuum gate valve separating the MBOX from the upstream of the beamline, temperature sensors and three pneumatic actuators.
In the first version of the setup, a PC running a LabVIEW program written in the G language was used to control and monitor the equipment, while the low level hardware access was achieved using Arduino micro-controller cards <cit.>. Although the system worked in principle, there were stability issues with this version: The LabVIEW graphical user interface (GUI) was to sluggish to handle user interactions, the Arduino boards often lost their firmware allowing connection to LabVIEW etc. To alleviate these problems a second version of the control and readout software is being written. In this new version of the setup, all the hardware devices (such as the vacuum pumps, vacuum gauges, the MBOX actuators etc) are now controlled and monitored by a Siemens S7-1200 1215C model PLC. The SP7 LabVIEW Toolkit auxiliary plugin is used to enable communication between the PLC and the PC. The PC runs a new LabVIEW GUI which interacts with the user and exchanges with the PLC. A screen shot from this GUI is given in the Figure <ref>. As one can notice, the GUI displays pump status, graphically represents pressure inside the MBOX and the ionization chamber, displays all temperature, current and voltage values and naturally the MBOX actuator positions. § BEAM MEASUREMENTS AND ANALYSIS After being produced and subjected to vacuum training, the MBOX was installed at the KAHVELab proton line in early 2022, and subsequent beam measurements were conducted. In the next paragraphs these measurements are presented and discussed. §.§ Beam Current The beam current measurement is a destructive one achieved using the FC, lowered into the beamline using the relevant actuator. An example from such a measurement is presented in Figure <ref>. The signals are obtained using an oscilloscope that reads the FC located at z = 87 cm (see Fig. <ref>), through a voltage divider circuit. The measurement reveals a pulse width of 8 ms and a pulse period of 20 ms. Consequently, the duty factor can be calculated as the ratio of active time to the period which is 8/20 = 0.4. Similarly, the instantaneous current is determined to be 0.03 mA, since the relevant resistor in the voltage divider circuit is 10 kOhm. The average current is calculated at 0.012 mA, below the value reported by the high voltage power supply. This beam, with a lower current with respect to the design value of 1 mA, is obtained by tuning three parameters at the same time: 1) reduction of the magnetron input voltage using a manually controlled variac, 2) adjustment of the very first tuner after the magnetron 3) reduction of the Hydrogen gas flow rate to a minimum of 0.01 sccm. The rational behind this “pilot” beam is to test the whole system using a small charge and, once verified, to increase the beam current gradually. The measurements with this increase are planned for after the completion of the RFQ installation. However since PIXE applications require a beam current of few nA, even this pilot beam would be sufficient to achieve this goal. §.§ Beam profile For this measurement the SS and the attached 45 degree slanted mirror are lowered to intercept the beam. The image is recorded with a digital camera and is processed later using a locally developed Python program. The raw recording format was selected to avoid any data loss which might arise from JPEG data compression algorithms. The Figure <ref> contains such an image on the left side. 
This image is read in using the Python OPENcv library and converted into a two dimensional brightness histogram from which X and Y projections are obtained. The calibration i.e. the conversion from pixel counts to length in mm is achieved by using the known lengths of the plane mirror frames. In Figure <ref>, the right plot shows the beam profiles in the X (blue) and Y (orange) directions. It can be observed that the distributions are slightly off axis and the X has a larger beam width. As in the literature, the Full Width at Half Maximum (FWHM) is used to estimate the beam width: 14 and 13 mm are obtained for X and Y directions respectively. The beam size, σ is usually obtained from a fit to the beam profile, however the fitted Gaussian function undershoots and the hyperbolic secant function overshoots the data in various locations. We therefore use the linear relation between the FWHM and σ for these distributions by taking the average of the correlation constants, 2.355 for the Gaussian and 2.634 for the hyperbolic secant functions. The beam size estimation is therefore 5.6 (5.2) mm for X (Y) axes. A simulation of the proton beam in DemirciPRO, yields the expected RMS beam envelope at the SS as 5.1 mm in both directions. Given that DemirciPRO does not consider space charge effects, thus underestimates the beam size, one can conclude that the experimental results and simulations are consistent with each other. A similar setup was also used to obtain the beam profile of the focused beam at the end of the LEBT line. In this location the photo is taken without a mirror and is presented in Figure <ref> left side together with the projection histograms on the right side. Notice that the maximum height of the signal is at the center position (0mm) as compared to the background noise originating from impurities on the Fluorescent Screen. However the beam is asymmetrical in both directions towards the positive side (up and right). The reason for this asymmetry is under investigation. The FWHM of the beam is measured as about 1.5 mm for both X and Y directions. The beam is focused by Sol-2 at the measurement location which represents the RFQ input. §.§ Beam emittance The beam emittance measurement is performed by lowering both the PP plate and the SS into the beamline. As previously discussed, the mask allows only a small portion of the proton beam to reach the SS. The image formed by the scintillation light of the particles surviving the PP is recorded by the camera. Such a photograph, after color inversion, can be seen in Figure <ref> left side. The image file is read in using Python OPENcv library and converted into a two dimensional intensity histogram. The X and Y projections of the intensity histogram yield the beamlet intensity distributions, also containing some background light. There are multiple methods to estimate and subtract the background and to find the peak positions of the beamlets. The background finding method applied in this note is to calculate the dip position equidistant to two adjacent peaks and define a segment of line between consecutive peaks. These line segments define the background that will be subtracted to find the signal distributions. In Figure <ref> center plot shows such an example. The original distribution is shown in green, the estimated background is shown with the purple line and resulting signal only distribution is drawn in blue. 
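A minimal sketch of the background estimation just described is given below. The peak-finding parameters are illustrative, and the dip position is taken as the midpoint between adjacent peaks, matching the equidistant choice described above; our analysis code differs in implementation details.

# Sketch of beamlet peak finding and piecewise-linear background subtraction.
import numpy as np
from scipy.signal import find_peaks

def subtract_background(profile):
    """profile: 1D projection (X or Y) of the pepperpot image intensity."""
    peaks, _ = find_peaks(profile, prominence=0.05 * profile.max())
    # dips: points equidistant from two adjacent peaks, plus the two profile ends
    dips = np.array([0] + [(p0 + p1) // 2 for p0, p1 in zip(peaks[:-1], peaks[1:])]
                    + [len(profile) - 1])
    # piecewise-linear background through the profile values at the dips
    background = np.interp(np.arange(len(profile)), dips, profile[dips])
    signal = np.clip(profile - background, 0.0, None)
    return signal, peaks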
Although there are multiple true beamlet candidates, only 21 of those should be considered as there are 21 holes per dimension in the PP. The procedure is to consider the central and brightest peaks, marked with a red star in the same image. Therefore the center position of the hole through which protons pass, is accepted as the position of the relevant beamlet. The simulated macro particles with coordinates (x_i, y_i) will pass through the pepperpot mask where each pinhole located at (x_j^ph, y_k^ph), if the individual macroparticle yields: (x_i-x^ph_j)^2 + (y_i-y^ph_k)^2 < r^2. Then the assumption for the simulated macroparticle having the coordinate (x_i, y_i) would be having the coordinate of the pinhole center (x_j^ph, y_k^ph), then the angle associated to that macroparticle would be calculated as: x'_i, new = arctan(x_i + x'_i L/L), which is valid for the vertical counterpart, as well. The marked beamlets therefore yield the divergence angles since the distance between the PP and SS is known, i.e. L=123.5 mm. Using the angles (x'_i, new) and positions (x_j^ph), the geometrical emittance and the Twiss parameters can be substituted into the rms emittance formula: ϵ_x, g^2 = <(x^ph_j)^2> <(x'_i, new)^2 > - < (x_j^ph) (x'_i, new)>^2. The result from a calculation in the X direction is shown in Figure <ref> right side. Using the same beamline settings as in the previous section, the normalized emittance in the X (Y) direction is found to be 0.029 (0.033) πmm.mrad . This value is compatible with the IBSIMU expectation of 0.0297 πmm.mrad in both directions. The same analysis also yielded the Twiss parameter β as 6.63 (5.75) mm/πmrad in the X (Y) direction. These values are in agreement with the predicted value of 6.5 mm/πmrad obtained in a simulation with 1M particles created by IBSIMU and tracked in the LEBT line by DemirciPRO which does not incorporate space charge effects. Another cross-check can be performed using the beamsize (σ) estimated in the previous section and the measured geometrical emittance (ϵ_g). The geometrical emittance value can be easily found since the proton beam energy is known as 20 keV. Using the relation σ^2 / ϵ_g = β, the Twiss parameter beta can be estimated as 7.05 for X and 5.35 mm/πmrad for Y directions. These estimations are compatible with the direct measurements using the PP with a relative error of about 10% level. Moreover, their average aligns with the simulation expectation value with an error better than 5%. Given the fact that only a small portion of the beam survives the PP mask, the compatibility level across different measurements and simulations is remarkable. The image analysis, emittance and Twiss parameter calculation software was initially developed locally using a beam file, created with IBSIMU program. The program is written in Python and does also beam tracking along the LEBT line, without considering the space charge effects. The LEBT parameters are given in a simple text based configuration file. In this software, the position-angle vectors of the beam particles were transported along the LEBT line using the transfer matrices of the drift and solenoid spaces. The PP is also introduced into this program and the particles surviving it were identified and later used to calculate the beam emittance on the SS. The results from this code were compared to the output from the DemirciPRO program and found to be mutually compatible. One of its major advantages over DemirciPRO is the ability to directly work with image files. 
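For completeness, the emittance evaluation described above boils down to a few lines once the pinhole positions and reconstructed angles are available. The sketch below follows the rms formula given earlier and adds the conversion from geometrical to normalised emittance for a 20 keV proton; variable names are illustrative, and the coordinates are centred first, which the formulas above implicitly assume for a zero-mean beam.

# Sketch of the rms-emittance and Twiss-beta evaluation from pepperpot data.
import numpy as np

def rms_emittance(x, xp):
    """x: pinhole centre positions, xp: reconstructed angles, per surviving point."""
    x, xp = x - x.mean(), xp - xp.mean()
    eps_g = np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)
    beta_twiss = np.mean(x**2) / eps_g           # sigma^2 / eps_g
    return eps_g, beta_twiss

def normalised(eps_g, T_keV=20.0, mc2_keV=938272.0):
    gamma = 1.0 + T_keV / mc2_keV                # relativistic factors for a 20 keV proton
    beta_rel = np.sqrt(1.0 - 1.0 / gamma**2)
    return beta_rel * gamma * eps_g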
Further details of this software can be found elsewhere <cit.>. Multiple photos were taken at different current values of Sol-1 and analyzed as described above. These results, together with expectations from IBSIMU and DemirciPRO simulations are presented in Table <ref>. As shown, the solenoid current is scanned between 17.3 and 18.3 Amps corresponding to a 0.01 T field change. For lower and higher currents, the pepper pot method looses its applicability: For higher currents, the beam becomes over focused, whereas for lower currents the beam becomes too divergent and beam losses occur at the beampipe walls. The average values of the normalized emittance in both X and Y directions, along with the corresponding standard deviations, are also presented in the same table. As one can notice the average emittance value agrees with the expectation within 1 (2) sigma in X (Y) direction. These measurements can be further improved by a better estimation of the light background and a faster camera. Nevertheless, despite the home-made nature of the detectors results of significant merit are obtained. § CONCLUSIONS In this report, we have introduced a compact and cost-effective beam diagnostics station designed for ion beams. The station includes detectors for beam current, profile, and emittance measurements, all of which have been locally designed and constructed. Additionally, we have developed our own control and data analysis software programs. The entire setup has undergone successful testing using the low-energy proton beamline at the KAHVE Laboratory in Istanbul, Turkey. The values obtained for the proton beam current, profile and emittance were in agreement with simulations and with crude readings from other professional instruments. These reported measurements were done on the beamline without the RFQ and with the MDIS which used electromagnets. While the RFQ was being manufactured, the MDIS was upgraded to use permanent magnets. This upgrade removed the cooling and EM solenoid requirements and simplified the setup. The commissioning of the new ion source is ongoing. The production of the two RFQ modules is also completed and the RFQ is assembled. Its vacuum and electromagnetic tests are ongoing. The first runs of the proton beam line including the RFQ is scheduled for Q4 of 2023. We plan to use the MBOX, in the coming years, at the KAHVELab proton beamline with higher beam currents and with possible minor upgrades such as the further automation of the data taking. Given our satisfaction with its performance, we particularly recommend this approach of developing complete, in-house solutions for beam diagnostics to educational laboratories, rather than purchasing a ready-made system with off-the-shelf components. This approach not only supports local industry but also provides valuable hands-on experience for physics and engineering students. One significant advantage of this approach is its low cost. However, it is worth mentioning that building a measurement station from scratch does require time and effort, as opposed to acquiring readily available components. Nonetheless, the benefits of local manufacturer development and student involvement make it a worthwhile endeavor. § ACKNOWLEDGEMENTS The main project is being supported by by TÜBİTAK grant 119M774, as well as İstanbul University BAP grant 33250 and 36823. The authors would like to thank the UFO lab, Bilkent University, Ankara, Turkey for their invaluable contribution in building the pepperpot mask. 
7 SPPpaper Yildiz, H., et al, “Compact measurement station for low energy”, JINST 12 T02006, 2017. KAHVELab<http://kahvelab.boun.edu.tr/index.php/projects/> Tessa T. K. Charles, A. Castilla, “Preliminary Investigation into Accelerators for In-Situ Cultural Heritage Research”, 12th International Particle Accelerator Conference (IPAC 2021), Online, 24 - 28 May 2021, pp.2605-2608. TARLA<https://en.tarla.org.tr/> Linac4 Arnaudon, L., et al, “Linac4 Technical Design Report”, CERN-AB-2006-084. KAHVELabMDISA. Adiguzel, et. al, “Ion source and LEBT of KAHVELab proton beamline”, <https://arxiv.org/abs/2208.00529>, 2022. SPProjectG. Turemen, et. al, “Project PROMETHEUS: Design and Construction of a Radio Frequency Quadrupole at TAEK”, <https://arxiv.org/abs/1310.0790>, 2013. demircipro Cakir, O., Celebi, E., Cetinkaya, H., Kolenoglu, H., Turemen, G., et al., “DemirciPro’s tools for completing the Linac: Ion source and LEBT line”, arXiv:2103.11829[physics.acc-ph]. ISLEBT-PTAK A. Adıgüzel et al, “Ion source and LEBT of KAHVELab proton beamline", JINST 18 T01002, 2023. RFQpaperA. Adiguzel, et al., “Design and Construction of a Proton Testbeam at KAHVELab”, In Preparation, 2022. ibsimu T. Kalvas, et. al., IBSIMU: A three-dimensional simulation software for charged particle optics, Rev. Sci. Instrum. 81, 02B703, 2010. toutatisR. Duperrier, “TOUTATIS: A radio frequency quadrupole code”, Phys. Rev. Vol.3, 124201, 2000. cstCST Microwave Studio Suite Users Manual, 2013. superfish J.Billen and L.M.Young, POISSON,SUPERFISH reference manual, LA-UR-96-1834. travelPerrin, A., et al., “TRAVEL v4.07 User Manual”, CERN, April 2007. vacuum https://cas.web.cern.ch/sites/cas.web.cern.ch/files/lectures/platjadaro-2006/sonderegger.pdf QStech U. Raich, Accelerator Beam Diagnostics, USPAS, Albuquerque NM, June 23-26, 2009. SStech P. Lu, et al, “Transverse Emittance Measurement by Slit-scan Method for an SRF Photo Injector”, TUPSO44 Proceedings of FEL2013, New York, NY, USA. LWtech S. M. Gibson, et al, “Overview of laserwire beam profile and emittance measurements for high power proton accelerators”, 2nd International Beam Instrumentation Conference, Oxford, UK, 16 - 19 Sep 2013, TUPF15. PPtech M. Stockli, “Measuring and Analyzing the Transverse Emittance of Charged Particle Beams”, AIP Conf.Proc. 868 (2006) 1, 25-62. SPP_box H. Yildiz, et al, “Compact measurement station for low energy proton beams”, Journal of Instrumentation 12 (02), T02006. FC1 A. K. Naieni, F. Bahrami, N. Yasrebi, and B. Rashidian, “Design and study of an enhanced Faraday cup detector”, Vacuum, vol. 83, no. 8, pp. 1095–1099, May 2009, doi: 10.1016/j.vacuum.2009.01.005 FC2 E. Ebrahimibasabi, S.A.H. Feghhi, "Design and construction of a secondary electron suppressed araday Cup for measurement of beam current in an electrostatics proton accelerator", International Journal of Mass Spectrometry, Volume 386, 2015, Pages 1-5, ISSN 1387-3806, https://doi.org/10.1016/j.ijms.2015.05.006. SRIM J. Ziegler, “Particle Interactions with Matter”. Available from: http://www.srim.org, 2006. J. Ziegler. Stopping and range of ions in matter (SRIM) & transport of ions in matter (TRIM). http://www.srim.org/, 2010. ESTAR National Institute of Standards and Technology, 2010, ESTAR, “Stopping and Range Tables for Electrons”, (http://physics.nist.gov/PhysRefData/Star/Text/ESTAR.html Electrons" ). FC7 P.C. Zalm, L.J. 
Beckers, “Ion-induced secondary electron emission from copper and zinc”, Surface Science, Volumes 152–153, Part 1, 1985, Pages 135-141, ISSN 0039-6028, https://doi.org/10.1016/0039-6028(85)90136-0. duyguMS D. Halis, Yildiz Teknik U. M.Sc. thesis, in preparation, 2023.
http://arxiv.org/abs/2307.00863v1
20230703090441
Thompson Sampling under Bernoulli Rewards with Local Differential Privacy
[ "Bo Jiang", "Tianchi Zhao", "Ming Li" ]
cs.LG
[ "cs.LG", "cs.CR" ]
Thompson Sampling under Bernoulli Rewards with Local Differential Privacy
Bo Jiang, Tianchi Zhao, Ming Li
Department of Electrical and Computer Engineering, University of Arizona, Arizona, USA
[email protected], [email protected], [email protected]
Machine Learning, ICML
This paper investigates the problem of regret minimization for multi-armed bandit (MAB) problems with a local differential privacy (LDP) guarantee. Given a fixed privacy budget ϵ, we consider three privatizing mechanisms under the Bernoulli scenario: linear, quadratic and exponential mechanisms. Under each mechanism, we derive a stochastic regret bound for the Thompson Sampling algorithm. Finally, we run simulations to illustrate the convergence of the different mechanisms under different privacy budgets. § INTRODUCTION The multi-armed bandit (MAB) problem addresses the trade-off between exploration and exploitation and has been widely applied in many real-world scenarios, from recommendation systems and information retrieval to healthcare and finance. In the setting of a MAB model, there are N arms available to the agent, and each arm's reward follows a particular distribution with an unknown mean. At each time step, the agent selects one arm and then observes a reward. The agent's ultimate goal is to gather as much cumulative reward as possible or, equivalently, to minimize the total regret, i.e., to design a strategy that can explore different arms and exploit well-rewarded arm(s). Nevertheless, personalized MAB implementations such as recommendation systems are a double-edged sword: the gained utility also comes with the risk of privacy violation. Compared to offline learning models, online learning methods directly interact with sensitive user data, e.g., user clicks or purchasing history, and update their models in a timely manner to adjust their output, which makes privacy an even more serious concern. For example, a physician may want to test the effect of different treatments and collect patients' health conditions after each treatment; however, a patient's heart-beat data may expose their daily routine, such as exercise time, sleeping time, etc. Another example is stock recommendation. The system (agent) periodically suggests different stocks (arms) to the user. After the suggestion, it wants to learn how many shares (possibly zero) the user has bought. However, directly revealing this number may leak the user's buying power, personal preferences, and what kind of risks they are hedging against. In this paper, we incorporate privacy protection into the MAB problem, i.e., we study the MAB problem where the observable reward at each time satisfies certain privacy constraints. Among all the privacy notions, Differential Privacy <cit.> has been accepted as the de facto standard for quantifying privacy leakage in the privacy community. The advantage of DP is that it provides a rigorous privacy guarantee against worst-case adversaries, and is amenable to mechanism design. Recently, the local version of DP, local differential privacy (LDP), has gained significant attention. The server/aggregator who collects data is considered untrusted by the users, who perturb their data before sending it to the server. LDP based mechanisms have been successfully adopted by Google's RAPPOR <cit.> for collecting web browsing behavior, and by Apple's MacOS to identify popular emojis and media preferences in Safari <cit.>.
LDP also enables a variety of privacy-preserving mechanisms for both discrete and continuous-valued data. Such as randomized response mechanism <cit.>, Laplacian mechanism <cit.>, Gaussian mechanism <cit.>. Non-private MAB problems have been studied for decades, among which, either frequentist methods like UCB (Upper Confidence Bound) or Bayesian methods like Thompson Sampling <cit.> have been shown to achieve optimal regret performance (up to constant factors). There is also a line of works related to the regret bound of MAB algorithm <cit.>. A privacy-preserving MAB can be described as, in each round, a privatized/perturbed version of the reward(s) is (are) observable, and each perturbed reward satisfies certain privacy requirements. The earliest work that studied LDP bandits is <cit.>, which proposed an LDP bandit algorithm that works for arms with Bernoulli rewards. In <cit.>, for bandits with bounded rewards, a Laplace mechanism and a Bernoulli mechanism are proposed, and corresponding UCB algorithms are developed. The upper and lower bound are derived based on UCB algorithms. In <cit.>, the statistical regret bounds are derived under either DP or LDP for collaborative LinUCB algorithm in the context of collaborative MAB. However, seldom of these works try to derive theoretically regret bound for privacy-preserving Thompson Sampling algorithm. The challenge is that TS is a Bayesian approach that involves the posterior updating at the agent by observing the reward. However, under the privacy-preserving framework, the observable reward is noisy, and the posterior distribution is not fixed but depends on the concrete mechanism (noisy distribution). In this paper, we consider different noisy models providing LDP guarantees. Then under each mechanism, we derive the posterior distribution and bound the corresponding probabilities causing the sub-optimal selection. In this paper, for a given privacy budget for LDP, we derive upper regret bounds for the Thompson Sampling algorithm. The main contributions of this work are summarized as follows: 1). We propose different privacy-preserving MAB mechanisms under Thompson Sampling satisfying Local Differential Privacy; 2). We derive Cumulative Regret Bounds (CRB) for these mechanisms; 3). Simulate with synthetic data to support our analysis and compare the performance with UCB. § MODEL SETUP AND PROBLEM FORMULATION In this problem, we consider N arms in the system, and each arm's reward R∈ℛ follows a sub-Gaussian distribution with mean μ. We use μ_i to denote the mean value of R^i, where i∈{1,2,...,N} is the index of an arm. The agent, at each timestamp, selects one specific arm to play. The selected arm and the corresponding reward at time k are denoted as I_k∈{1,2,...,N} and R_k respectively. Note that, in this problem, we assume the user (all the arms belong to one user) wants to cooperate with the agent to minimize the cumulative regret (in terms of TS algorithm, help the agent better learn the mean value). On the other hand, the user also wants each of his instantaneous reward's privacy to be protected. Therefore, the reward at each time is protected by a privacy-preserving mechanism M (M is assumed to be time-invariant). We define different kinds of privacy-preserving mechanisms later. Denote Y_k∈𝒴 as the privatized version of R_k, which is also the output of M. The agent, after observing Y_k, can further update his belief state and the corresponding strategy for the next time. 
In this work, we investigate the Thompson Sampling algorithm, which is a very classic algorithm for MAB problems. The algorithm can be summarized as follows: The agent has an estimated reward distribution for each arm (usually starting from a uniform distribution), denote μ̂_i^k as the estimated mean reward for arm i at time k. At each time, he randomly samples a reward for each arm from his estimated distribution and selects the arm that has the maximal sampled reward. After each round, by observing Y_k, he updates his belief state accordingly (the distribution of the reward of the arm that just played). To provide strict privacy protection to every instantaneous reward at each time, we let M satisfy ϵ-LDP, which can be defined as, for any r,r'∈ℛ, y∈𝒴: Pr(Y_k=y|R_k=r)≤e^ϵPr(Y_k=y|R_k=r'), where ϵ is the privacy budget, the smaller ϵ, the stronger privacy guarantee the mechanism provides. Note that the privacy-preserving mechanism is protecting the privacy of each instantaneous reward (sampled from a fixed distribution), not the distribution itself. On the contrary, TS algorithm requires an estimation of the posterior distribution, which tries to infer the data distribution by nature. From the privacy perspective, the privacy leakage of each instantaneous reward is guaranteed to be upper bounded by ϵ, and the leakage of the distribution after observing k samples from the same arm is upper bounded by kϵ by the composability of LDP. The system model is depicted in Fig.<ref>. § LDP-BASED BINOMIAL MECHANISMS In this section, we first introduce three LDP-based Binomial privacy-preserving mechanisms, including linear mechanism, quadratic mechanism, and exponential mechanism. Then, we discuss how to implement these mechanisms into the TS algorithm. In the following of this paper, we assume that the reward of each arm is bounded, and the first arm I_1 is the optimal arm (μ_1=max_iμ_i). Bernoulli mechanism converts a bounded reward to a Bernoulli distributed one, i.e., Y_k=Bernoulli(p(r)), where p(r) denotes the probability that Y_k=1 given the reward is r. Algorithm <ref> shows the detail of the algorithm for LDP based TS algorithm with Bernoulli mechanism. Next, in the following theorem, we present the sufficient condition to satisfy the ϵ-LDP. For a bounded Bernoulli mechanism, to satisfy ϵ-LDP, the following conditions must hold: (1) p(0)≥1/e^ϵ+1; (2) p(1)≤e^ϵ/e^ϵ+1; (3) p(r) is monotonically increasing. In this paper, we consider three different probability functions under the Bernoulli mechanism. Linear probability function, Quadratic probability function, and Exponential probability function. Based on the sufficient conditions described in Theorem <ref>, p(r) for each mechanism are stated in the following Corollary. For linear probability function, p(r)=1/1+e^ϵ[(e^ϵ-1)r+1]; For quadratic probability function, p(r)=1/1+e^ϵ[(e^ϵ-1-b)r^2+br+1], where b∈[0,2(e^ϵ-1)]; For exponential probability function, p(r)=e^ϵ r/e^ϵ+1. It is worth noting that the non-linear probability functions are preferable to the linear under certain circumstances. One scenario is that the mean rewards of different arms are very close to each other, the non-linear probability functions provide a better convergence rate compared to the linear model (it discriminates the optimal arm faster than the linear model). 
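The three probability functions can be summarised in a short sketch; rewards are assumed to lie in [0, 1], the parameter values are illustrative, and the final loop is a numerical spot check that the resulting response probabilities respect the ϵ-LDP ratio bound of Theorem 1 for both outcomes Y=1 and Y=0.

# Sketch of the three Bernoulli response mechanisms and an eps-LDP spot check.
import numpy as np

def p_linear(r, eps):
    return ((np.exp(eps) - 1) * r + 1) / (np.exp(eps) + 1)

def p_quadratic(r, eps, b):
    # requires 0 <= b <= 2*(e^eps - 1), cf. Corollary 1
    return ((np.exp(eps) - 1 - b) * r**2 + b * r + 1) / (np.exp(eps) + 1)

def p_exponential(r, eps):
    return np.exp(eps * r) / (np.exp(eps) + 1)

def privatize(r, eps, p=p_linear, rng=np.random.default_rng()):
    return rng.binomial(1, p(r, eps))            # the observable response Y

eps = 1.0
grid = np.linspace(0.0, 1.0, 101)
for p in (p_linear, lambda r, e: p_quadratic(r, e, b=0.5), p_exponential):
    q = p(grid, eps)
    ratio = max(q.max() / q.min(), (1 - q).max() / (1 - q).min())
    assert ratio <= np.exp(eps) + 1e-9           # eps-LDP holds for Y=1 and Y=0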
§ CUMULATIVE REGRET BOUNDS Next, we derive CRB for TS-LDP-B, and we consider problem-dependent regret, where the regret at each time t depends on the distance between the mean reward between arms 1 and i; §.§ Problem-dependent regret bound for linear probability function The Problem-dependent regret bound for linear probability function is stated in the following theorem with proof provided in Appendix <ref>. Given any non-zero Δ_i=μ_1-μ_i, the cumulative regret for the linear probability function is upper bounded by (ϵ >0): (1+γ)^2(e^ϵ+1/e^ϵ -1)^2{∑_i≠ i^*log(T)/2Δ_i+O(N/2Δ_min)}. Where Δ_min=min_i∈ [N]Δ_i, γ∈(0,1) is a threshold which helps to prove the regret bound. Proof outline for Theorem 2: The basic idea to prove the algorithm is similar to <cit.>. The difference of the proof for TS-LDP-B (linear probability function) is that we change the term in the denominator (from d(μ_i,μ_1) to Δ_i, which is based on the Pinsker's inequality). In this way, we can express how the Bernoulli mechanism affects the regret bound. Remark: Note that, the regret bound of non-private TS is (1+γ)^2{∑_i≠ i^*log(T)/2Δ_i+O(N/2Δ_min)}, wherein we change the denominator from d(μ_i,μ_1) to Δ_i (loose version of the regret bound in <cit.>). Compared to non-private TS, the regret has the term (e^ϵ+1/e^ϵ -1)^2, which can be viewed as the cost for preserving privacy. When ϵ approaches infinity, this factor approaches 1, and the regret approaches that of the non-private version. When ϵ approaches zero, the regret becomes O(T). §.§ Problem-dependent regret bound for non-linear probability function The Problem-dependent regret bound for the non-linear probability function is stated in the following Theorem with proof provided in Appendix <ref> and <ref>. Given any non-zero Δ_i=μ_1-μ_i, the cumulative regret for the non-linear probability function is upper bounded by (ϵ >0): (1+γ)^2∑_i≠ 1log(T)+1/2Δ_i,ϵ^2Δ_i +O(N), where Δ_i,ϵ is the noisy difference between optimal arm and selected arm i. Proof outline for Theorem 3: The proof structure of the non-linear probability function follows the similar idea of the linear probability function by changing the linear probability function to a non-linear probability function. Remark: We can apply quadratic probability function and exponential probability function into (3). In quadratic probability function, Δ_i,ϵ=μ_1,ϵ,quad-μ_i,ϵ,quad={(e^ϵ-1-b)(μ_1+μ_i)+b}(μ_1-μ_i)+(e^ϵ-1-b)(σ_1^2-σ_i^2)/e^ϵ +1, μ_i,ϵ,quad is the expected mean value of arm i after performs quadratic probability function and σ_i is arm i's reward variance. we can see that it reduces to (2) when e^ϵ-1-b=0. This result conforms to our expectation because the linear probability function is a special case for the quadratic probability function. In exponential probability function, Δ_i,ϵ,exp=μ_1,ϵ,exp-μ_i,ϵ,exp=e^ϵμ_i(e^ϵΔ_i-1)+τ_1(ϵ)-τ_i(ϵ)/e^ϵ +1, μ_i,ϵ,exp is the expected mean value of arm i after performs exponential probability function and τ_i(ϵ) is the Jensen's gap between e^ϵμ_i and E[e^ϵ r]. To make comparison with TS-LDP-B, we also provide UCB-LDP-B under non-linear probability function. It is, R(T)≤∑_i≠ 1{8log(T)/Δ_i,ϵ^2 +1+π^2/3}Δ_i. Our analysis is based on Ren et al. <cit.> proof structure. However, the difference between our algorithm and their algorithm (LDP-UCB-B linear) is that we change the linear probability function to a non-linear probability function. § NUMERICAL ANALYSIS In this section, we illustrate the numerical results of our algorithms. 
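Before turning to the simulation results, it is instructive to evaluate the multiplicative privacy cost ((e^ϵ+1)/(e^ϵ-1))^2 appearing in the linear-mechanism bound of Theorem 2. The short script below (privacy budgets chosen purely for illustration) shows how the factor approaches 1 for large ϵ and blows up as ϵ→0.

```python
import numpy as np

for eps in [0.1, 0.5, 1.0, 2.0, 5.0]:
    factor = ((np.exp(eps) + 1) / (np.exp(eps) - 1)) ** 2
    print(f"eps = {eps:3.1f}  ->  regret inflation factor ~ {factor:8.2f}")
# approximately 400.7, 16.7, 4.7, 1.7, 1.03 for eps = 0.1, 0.5, 1, 2, 5
```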
Due to computation limitations, we only present the results for bandits with the Bernoulli mechanism. In the comparison, we compare LDP-TS-B studied in this paper to that of LDP-UCB-B <cit.>. We also include the performance of the non-private UCB and TS algorithm (ϵ=∞) as a baseline to see the cost for preserving ϵ-LDP. The numerical results are illustrated in Fig. 2. In each experiment, we set the number of arms N=20. The optimal arm with a mean reward of 0.9; five arms with 0.8; another five arms with 0.7; another five arms with 0.6; the other four arms with 0.5. We also let the rewards of arms follow different types of distributions: arms with mean rewards of 0.9 or 0.6 generate rewards from Bernoulli distributions; arms with mean rewards of 0.8 generate rewards from Beta(4, 1) distribution; arms with mean rewards of 0.7 generate rewards from {0.4, 1} uniformly at random; and arms with mean rewards of 0.5 generate rewards from [0, 1] uniformly at random. Curves in each figure are averaged over 50 independent trials. Fig. <ref> and Fig. <ref> show the effect of different ϵ under linear model. We can see that the regret increases when ϵ decreases. This result is consistent with our theoretical results. Small ϵ has much more privacy, and the regret becomes large. Meanwhile, LDP-TS-B (linear) has lower regret than LDP-UCB-B (linear) under the same ϵ. Fig. <ref> and Fig. <ref> show the effect of different ϵ under quadratic model. Same to the linear model, We can see that the regret increases when ϵ decreases, and LDP-TS-B (quadratic) has lower regret than LDP-UCB-B (quadratic) under the same ϵ. Fig. <ref> and Fig. <ref> show the effect of different ϵ under exponential model. Similar to the previous observation, We can see that the regret increases when ϵ decreases and LDP-TS-B (exponential) has lower regret than LDP-UCB-B (exponential) under the same ϵ. § CONCLUSION AND FUTURE WORK In this paper, we studied the Thompson Sampling algorithm with local differential privacy guarantee. We consider three privatizing mechanisms under the Bernoulli rewards and proved a regret upper bounds for the Thompson Sampling algorithm. Numerical results also confirmed our theoretical results. For future work, we are planning to derive a lower regret bound for general mechanisms. plainnat § PROOF FOR THEOREM <REF> The ϵ-LDP of LDP-TS-B follows from the ϵ-DP of CTB stated in Lemma 5 <cit.>. Problem-dependent bound. According to Lemma 5 <cit.>, for any arm i, the private response generated by CTB(i,ϵ) follows the Bernoulli(μ_i,ϵ) distribution, where <cit.> μ_i,ϵ:=1/2+(2μ_i-1)(e^ϵ-1)/2(e^ϵ+1) We define μ_ϵ^*=max_i∈ [N]μ_i,ϵ Δ_i,ϵ:=μ_ϵ^*-μ_i,ϵ=e^ϵ-1/e^ϵ +1Δ_i Next, we begin to derive regret of LDP-TS-B. The proof follows <cit.>. We fix a suboptimal arm i≠ i^*. Split according to μ̂_i,ϵ(t) and p_i,t E[N_i(T)]=∑_t=1^TPr(I_t=i) = ∑_t=1^T{Pr(I_t=i ,μ̂_i,ϵ(t)> x_i) + Pr(I_t=i,μ̂_i,ϵ(t)≤ x_i, p_i,t≥ y_i) + Pr(I_t=i,μ̂_i,ϵ(t)≤ x_i, p_i,t≤ y_i)}, Where we choose μ_i,ϵ<x_i<y_i<μ_ϵ^* as follows μ_i,ϵ<x_i<μ_ϵ^* s.t. d(x_i,μ_ϵ^*)=d(μ_i,ϵ,μ_ϵ^*)/1+γ x_i<y_i<μ_ϵ^* s.t. d(x_i,y_i)=d(x_i,μ_ϵ^*)/1+γ=d(μ_i,ϵ,μ_ϵ^*)/(1+γ)^2. Where 0<γ <1. Eq. <ref> ensures that the relevant divergences are only a constant factor different from d(μ_i,ϵ,μ_ϵ^*). 
For the first probability: denote τ_i,k as the k-th time when I_t=i, then ∑_t=1^TPr(I_t=i ,μ̂_i,ϵ(t)> x_i) ≤ 1+E∑_k=1^T-11[μ̂_i,ϵ(t)> x_i] ≤ 1+ ∑_k=1^T-1exp(-kd(x_i,μ_i,ϵ)) ≤ 1+1/d(x_i,μ_i,ϵ)(a)≤ 1+1/d(x_i,y_i) ≤ 1+(1+γ)^21/d(μ_i,ϵ,μ_ϵ^*) (b)≤ 1+(1+γ)^21/2(μ_i,ϵ-μ_ϵ^*)^2, Inequality (a) is based on the fact that divergence is monotonically increasing function under y_i>x_i when we fix x_i. Inequality (b) is based on the Pinsker's inequality. For the second probability: ∑_t=1^TPr(I_t=i,μ̂_i,ϵ(t)≤ x_i, p_i,t≥ y_i) ≤ ∑_t=1^TPr(I_t=i,p_i,t≥ y_i |μ̂_i,ϵ(t)≤ x_i) ≤ m+∑_t=τ_i,m+1^TPr(I_t=i,p_i,t≥ y_i |μ̂_i,ϵ(t)≤ x_i), where we choose m=log(T)/d(x_i,y_i) so that ∑_t=1^TPr(I_t=i,μ̂_i,ϵ(t)≤ x_i, p_i,t≥ y_i) ≤ m+∑_t=τ_i,m+1^TPr(I_t=i,p_i,t≥ y_i |μ̂_i,ϵ(t)≤ x_i) ≤ m+E{∑_t=τ_i,m+1^Texp(-N_i(t)d(x_i,y_i))} (a)≤ log(T)/d(x_i,y_i)+E{∑_t=τ_i,m+1^T1/T} ≤ log(T)/d(x_i,y_i)+1≤ 1+ (1+γ)^2log(T)/d(μ_i,ϵ,μ_ϵ^*) ≤ 1+(1+γ)^2log(T)/2(μ_i,ϵ-μ_ϵ^*)^2. Inequality (a) is based on the fact that exp(-N_i(t)d(x_i,y_i))≤1/T when t≥log(T)/d(x_i,y_i). For the third probability, according to <cit.>, we have: ∑_t=1^TPr(I_t=i,μ̂_i,ϵ(t)≤ x_i, p_i,t≤ y_i)=O(1). Combining, we get: E[N_i(T)]≤ (1+γ)^2log(T)/2(μ_i,ϵ-μ_ϵ^*)^2+(1+γ)^21/2(μ_i,ϵ-μ_ϵ^*)^2 + O(1), The regret is R(T)≤ (1+γ)^2(e^ϵ+1/e^ϵ -1)^2{∑_i≠ i^*log(T)/2Δ_i+O(N/2Δ_min)}, § PROOF FOR THEOREM 3 UNDER QUADRATIC PROBABILITY FUNCTION Problem-dependent bound. Assumption, each arm i's reward variance is σ_i. μ_i,ϵ:=(μ_i^2+σ_i^2)(e^ϵ-1-b)+bμ_i+1/e^ϵ+1 We define μ_ϵ^*=max_i∈ [N]μ_i,ϵ Δ_i,ϵ:=μ_ϵ^*-μ_i,ϵ= {(e^ϵ-1-b)(μ_1+μ_i)+b}(μ_1-μ_i)+(e^ϵ-1-b)(σ_1^2-σ_i^2)/e^ϵ +1 We put the Eq. <ref> value into Eq. <ref> The regret is R(T)≤(1+γ)^2∑_i≠ i^*log(T)/2Δ_i,ϵ^2Δ_i+ (1+γ)^2∑_i≠ i^*1/2Δ_i,ϵ^2Δ_i+O(N) = (1+γ)^2∑_i≠ i^*log(T)+1/2({(e^ϵ-1-b)(μ_1+μ_i)+b}(μ_1-μ_i)+(e^ϵ-1-b)(σ_1^2-σ_i^2)/e^ϵ +1)^2 Δ_i+O(N), To prove the regret of UCB version, we use the expected time of sub-optimal arm in A.4 in <cit.>. E[N_i(T)]≤8log(T)/Δ_i,ϵ^2+1+π^2/3, Then, the regret of UCB algorithm R(T)≤∑_i≠ i^*{8log(T)/({(e^ϵ-1-b)(μ_1+μ_i)+b}(μ_1-μ_i)+(e^ϵ-1-b)(σ_1^2-σ_i^2)/e^ϵ +1)^2 +1+π^2/3}Δ_i, and if we assume all the arm i has same variance σ: Δ_i,ϵ:=μ_ϵ^*-μ_i,ϵ=(e^ϵ-1-b)(μ_1+μ_i)+b/e^ϵ +1Δ_i We put the Eq. <ref> value into Eq. <ref> The regret is R(T)≤(1+γ)^2∑_i≠ i^*log(T)/2Δ_i,ϵ^2Δ_i+(1+γ)^2∑_i≠ i^*1/2Δ_i,ϵ^2Δ_i +O(N)= (1+γ)^2∑_i≠ i^*log(T)+1/2((e^ϵ-1-b)(μ_i^*+μ_i)+b/e^ϵ +1)^2Δ_i^2Δ_i +O(N) ≤ (1+γ)^2(e^ϵ +1/(e^ϵ-1-b)μ_i^*+b)^2∑_i≠ i^*log(T)+1/2Δ_i +O(N), To prove the regret of UCB version, we use the expected time of sub-optimal arm in A.4 in <cit.>. E[N_i(T)]≤8log(T)/Δ_i,ϵ^2+1+π^2/3, Then, the regret of UCB algorithm R(T)≤ (e^ϵ +1/(e^ϵ-1-b)μ_i^*+b)^2∑_i≠ i^*8log(T)/Δ_i + ∑_i≠ i^*(1+π^2/3)Δ_i, We assume all the arm's reward satisfies Bernoulli distribution, the variance of arm i is (1-μ_i)μ_i, then the instance regret is Δ_i,ϵ:=μ_ϵ^*-μ_i,ϵ =(e^ϵ-1-b)(μ_1^2-μ_i^2+σ_1^2-σ_i^2)+b(μ_1-μ_i)/e^ϵ +1 = (e^ϵ-1-b)(μ_1^2-μ_i^2+μ_1(1-μ_1)-μ_i(1-μ_i))+b(μ_1-μ_i)/e^ϵ +1 = (e^ϵ-1-b)(μ_1-μ_i)+b(μ_1-μ_i)/e^ϵ +1=e^ϵ-1/e^ϵ +1(μ_1-μ_i) =e^ϵ-1/e^ϵ +1Δ_i § PROOF FOR THEOREM 3 UNDER EXPONENTIAL PROBABILITY FUNCTION Problem-dependent bound. μ_i,ϵ:=E[e^ϵ r]/e^ϵ+1=e^ϵμ_i+τ_i(ϵ)/e^ϵ+1 Where Eq. <ref> is based on the Jensen's inequality (e^ϵμ_i≤ E[e^ϵ r]). We define τ_i(ϵ) as the Jensen's gap between e^ϵμ_i and E[e^ϵ r]. Then, Δ_i,ϵ:=μ_ϵ^*-μ_i,ϵ=e^ϵμ_1-e^ϵμ_i+τ_1(ϵ)-τ_i(ϵ)/e^ϵ +1 =e^ϵ(Δ_i +μ_i)-e^ϵμ_i+τ_1(ϵ)-τ_i(ϵ)/e^ϵ +1 = e^ϵμ_i(e^ϵΔ_i-1)+τ_1(ϵ)-τ_i(ϵ)/e^ϵ +1 We put the Eq. <ref> value into Eq. 
<ref>. The regret is R(T)≤ (1+γ)^2∑_i≠ i^*log(T)/2Δ_i,ϵ^2Δ_i+(1+γ)^2∑_i≠ i^*1/2Δ_i,ϵ^2Δ_i +O(N)= (1+γ)^2∑_i≠ i^*log(T)+1/2(e^ϵμ_i(e^ϵΔ_i-1)+τ_1(ϵ)-τ_i(ϵ)/e^ϵ +1)^2Δ_i +O(N). Similarly, the regret of the UCB algorithm is R(T)≤∑_i≠ i^*{8log(T)/(e^ϵμ_i(e^ϵΔ_i-1)+τ_1(ϵ)-τ_i(ϵ)/e^ϵ +1)^2 +1+π^2/3}Δ_i.
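As a quick numerical sanity check on the quadratic-mechanism analysis above, the snippet below evaluates the noisy gap Δ_{i,ϵ} for Bernoulli arms (variance μ(1-μ)) and confirms that it collapses to the linear-mechanism gap (e^ϵ-1)/(e^ϵ+1)Δ_i, as derived in the proof; the specific means, ϵ, and b are arbitrary illustrative values.

```python
import numpy as np

def gap_linear(mu1, mui, eps):
    return (np.exp(eps) - 1) / (np.exp(eps) + 1) * (mu1 - mui)

def gap_quadratic(mu1, mui, var1, vari, eps, b):
    num = ((np.exp(eps) - 1 - b) * (mu1 + mui) + b) * (mu1 - mui) \
          + (np.exp(eps) - 1 - b) * (var1 - vari)
    return num / (np.exp(eps) + 1)

mu1, mui, eps, b = 0.9, 0.6, 1.0, 0.5      # b within [0, 2(e^eps - 1)]
# For Bernoulli arms, var = mu(1 - mu), and the quadratic gap equals the linear one.
print(gap_quadratic(mu1, mui, mu1*(1 - mu1), mui*(1 - mui), eps, b))
print(gap_linear(mu1, mui, eps))
```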
http://arxiv.org/abs/2307.01725v1
20230704135301
RRCNN: A novel signal decomposition approach based on recurrent residue convolutional neural network
[ "Feng Zhou", "Antonio Cicone", "Haomin Zhou" ]
cs.LG
[ "cs.LG", "eess.SP", "68T10", "I.5.1" ]
gdufe]Feng Zhoucor [gdufe]School of Information Sciences, Guangdong University of Finance and Economics, Guangzhou, 510320, China [email protected] [cor]Corresponding author antonio1,antonio2,antonio3] Antonio Cicone [antonio1]Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila, L'Aquila, 67100, Italy [antonio2]Istituto di Astrofisica e Planetologia Spaziali, INAF, Rome, 00133, Italy [antonio3]Istituto Nazionale di Geofisica e Vulcanologia, Rome, 00143, Italy [email protected] gatech]Haomin Zhou [gatech]School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, United States [email protected] The decomposition of non-stationary signals is an important and challenging task in the field of signal time-frequency analysis. In the recent two decades, many signal decomposition methods led by the empirical mode decomposition, which was pioneered by Huang et al. in 1998, have been proposed by different research groups. However, they still have some limitations. For example, they are generally prone to boundary and mode mixing effects and are not very robust to noise. Inspired by the successful applications of deep learning in fields like image processing and natural language processing, and given the lack in the literature of works in which deep learning techniques are used directly to decompose non-stationary signals into simple oscillatory components, we use the convolutional neural network, residual structure and nonlinear activation function to compute in an innovative way the local average of the signal, and study a new non-stationary signal decomposition method under the framework of deep learning. We discuss the training process of the proposed model and study the convergence analysis of the learning algorithm. In the experiments, we evaluate the performance of the proposed model from two points of view: the calculation of the local average and the signal decomposition. Furthermore, we study the mode mixing, noise interference, and orthogonality properties of the decomposed components produced by the proposed method. All results show that the proposed model allows for better handling boundary effect, mode mixing effect, robustness, and the orthogonality of the decomposed components than existing methods. Empirical mode decomposition; Adaptive signal decomposition; Signal local average; Convolutional neural network; Residual network § INTRODUCTION With the development of technology, many every day–signals that exhibit nonlinearity and non-stationarity, such as human speech, radar systems, and seismic waves, can be accurately captured. It is well known that decomposing and exploring features of this kind of signals is quite challenging due to their nonlinear and non-stationary characteristics. In the past two decades, many studies have emerged for processing non-stationary signals. One of the most representative works is the empirical mode decomposition (EMD) algorithm along with the Hilbert spectrum analysis proposed by Huang et al. in 1998 <cit.>. Because EMD is fully data-driven, and can adaptively decompose a signal into several intrinsic mode functions (IMFs), it has already shown its usefulness in a wide range of applications, including semantic recognition <cit.>, alcoholism identification <cit.>, and stock trend prediction <cit.>. Despite its remarkable success, it still lacks mathematical foundations and is sensitive to noise and sampling. This sparked many efforts to improve the EMD. 
The improvements share the same feature: a signal is decomposed into several simpler components, and then a time-frequency analysis method is applied to each component separately. These signal decomposition methods can be mainly achieved in two ways: by iteration or by optimization. Methods based on iteration include many techniques, such as moving average, partial differential equation (PDE) and filter. For instance, Smith presented a new iteration method, based on the local average, to decompose the non-stationary signals into a set of functions <cit.>. Deléchelle et al. proposed a new approach that resolves one major problem in the EMD, that is, the mean envelope detection of a signal, in virtue of a parabolic PDE <cit.>. Hadji et al. used the differential calculus on envelopes, which makes them prove that iterations of the sifting process are well approximated by the resolution of PDE <cit.>. Hong et al. introduced a novel sifting method based on the concept of the local integral mean of a signal <cit.>. And Cicone et al. studied the method based on iterative filtering to compute the local average, which is utilized to replace the mean of the upper and lower envelopes in the sifting procedure of the EMD <cit.>. Tu et al. proposed the iterative nonlinear chirp mode decomposition (INCMD) <cit.> under the framework of the variational nonlinear chirp mode decomposition. On the other hand, there are methods based on optimization. Peng et al. designed an adaptive local linear operator-based optimization model to decompose a signal into several local narrow band signals <cit.>. Oberlin et al. proposed an optimization model in computing the mean envelope to replace the original one in EMD <cit.>. Inspired by the compressed sensing theory, Hou et al. studied a new adaptive data analysis method, which can be seen as a nonlinear version of compressed sensing and provides a mathematical foundation of the EMD method <cit.>. Flandrin et al. proposed a convex optimization procedure in order to replace the sifting process in the EMD, which follows the idea of texture-geometry decomposition with further specific EMD features such as quasi-orthogonality and extrema-based constraints <cit.>. Dragomiretskiy et al. put forward the variational mode decomposition (VMD), whose goal is to decompose a signal into a discrete number of modes, that have specific sparsity properties while reproducing the input <cit.>. Rehman et al. generalized the VMD method to multivariate or multichannel data <cit.>. And Zhou et al. presented a new mathematical framework by finding the local average based on the local variational optimization model <cit.>. In addition, there are some methods that cannot be classified into the above two categories. For instance, Daubechies et al. proposed the method, called synchrosqueezed wavelet transforms, by combining the wavelet analysis and reallocation method <cit.>. Gille presented the approach, called empirical wavelet transform (EWT), to build adaptive wavelets <cit.>, whose main idea is to extract the different modes by designing an appropriate wavelet filter bank. Singh et al. studied the adaptive Fourier decomposition method (FDM) based on the Fourier theory, which decomposes any data into a small number of “Fourier intrinsic band functions" <cit.>. And Wang et. extended the adaptive FDM to the multi-channel case <cit.>. 
According to the works described above, we find that whether the method is based on iterative or optimization, calculating the local average of a given signal is very critical. For example, in EMD <cit.>, the mean of the upper and lower envelopes are used to measure the local average of the signal; the local variational optimization model is constructed to compute the local average in <cit.>; and in the iterative filtering method <cit.>, the low-pass filter is employed to find the local average. Although there exist many studies on the characterization of the local average, it is basically impossible to find a method suitable for all signals from a practical point of view. Discussing the local average customized according to the type of signal, it not only provides a new research perspective, but also is likely to become the trend in the near future in signal processing for non-stationary data. In recent years, thanks to the remarkable results obtained in fields of research like image and natural language processing, the usage and application of deep learning methods have spread widely in an ample variety of research fields, like image processing <cit.> and natural language processing <cit.>. In signal processing, deep learning models have been used, so far, to achieve various goals, such as: noise removal <cit.>, forecasting <cit.>, and detection <cit.>. However, to the best of our knowledge, not a single method has been proposed so far in the literature, which allows to decompose a given non-stationary signal into simple oscillatory components, like the IMFs, which is solely based on deep learning techniques. For this reason, in the current work we propose an innovative signal decomposition algorithm, named recurrent residual convolutional neural network (RRCNN), which is based on deep learning models. In the RRCNN method, in fact, we first characterize the local average of a signal under the framework of deep learning, and then use it to handle signal decomposition. Specifically, the 1-Dimensional (1-D) convolutional neural network is primarily designed to adaptively compute the local average of the input signal, which is similar to the moving average method and the filter operation in the iterative filtering method, except that the weights in 1-D convolutional neural network are not fixed, but are learned adaptively during the training phase according to the input signals. Moreover, both the residual and recurrent structures are employed to amend the computed local average, which is consistent with the role of the inner loop process in many existing iterative-based signal decomposition methods. After the local average model is derived, it is cascaded in series to realize the decomposition model of the signal, whose function is equivalent to the outer loop structure of the existing iterative-based signal decomposition methods. Although the proposed method looks similar to those iterative-based decomposition methods that contain a two-loop structure, the use of the deep learning techniques makes this method have the following peculiarities: (1) Unlike the moving average method and the filter operation in the iterative filtering method, the convolutional filter weights that appear in the proposed RRCNN model, are not fixed in advance, but are learnt adaptively in the training phase according to the inputs. 
(2) Since the proposed RRCNN model is constructed under the framework of deep learning, it makes RRCNN more flexible and adaptive in finding the local average and achieving the decomposition for a given signal. In particular, the nonlinear activation function can be added after the convolutional operation to increase the expression ability. The loss function also can be customized according to the requirements that the ground truths usually have in the specific application. (3) Several artificial signals are constructed to verify the performance RRCNN in terms of local average characterization, noise interference, mode mixing, and orthogonality. Furthermore, we compare the RRCNN model with the state-of-the-art methods. In addition, we also use the solution of the Duffing and Lorenz equations, and the real data of the length of day (LOD) to evaluate the approximation ability of RRCNN to the existing models. (4) Generally speaking, the RRCNN model takes a certain amount of time in the training phase, which is a commonality of deep learning-based models. However, once the model training is completed, the computational efficiency in the prediction stage is relatively fast, especially it can use the parallelization mechanism to predict multiple signals at the same time, which is not available in most existing methods. (5) RRCNN has the limitations brought from the supervised model. For example: in the training phase, each input signal needs to know its label in advance. The rest of the paper is organized as follows. We review the iterative filtering method and provide its algorithm in Section <ref>. And the concept of β-smooth function and its properties are given in Section <ref>, which are used for proving the convergence of the proposed model. In Section <ref>, the new local average method and the derived signal decomposition method, collectively called the RRCNN, are proposed. Moreover, the training process and convergence analysis of RRCNN are given in this section. In Section <ref>, we study a series of examples to evaluate the performance of RRCNN compared with the existing methods. Finally, we give the conclusion in Section <ref>. § IF AND Β-SMOOTH FUNCTION §.§ IF The iterative filtering (IF) <cit.> is a recurrent algorithm that decomposes a nonlinear and non-stationary signal into a number of IMFs. The main idea of IF is the subtraction of local moving averages from the signal iteratively, where the local moving averages are calculated through convolutions with low-pass filters. Alg. <ref> shows the detailed steps, where the parameter l_n, called the filter length, is important in the IF method, and is determined by the information contained in the signal itself; w_n(·) represents the low-pass filter function. §.§ β-smooth function and some of its properties We first introduce the concepts of L-Lipschitz continuous and β-smooth for a function from <cit.>. A function f is said to be L-Lipschitz continuous if for all x,y∈𝒳, f(x)-f(y)≤ Lx-y, where 𝒳 denotes the convex domain of f, and L is called the Lipschitz constant. A continuously differentiable function f is β-smooth if the gradient ∇ f is β-Lipschitz, that is if for all x,y∈𝒳, ∇ f(x)-∇ f(y)≤βx-y, where 𝒳 is the convex domain of f. Then, for a unconstraint optimization problem, if its objective function is β-smooth, we can prove that the sequence generated by the gradient descent algorithm converges to a stationary point when the learning rate is small enough. The details can be found in Theorem <ref>. 
Let f be a β-smooth function and f^*=min f(x)>-∞. Then the gradient descent algorithm with a constant learning rate λ<2/β, i.e., x^(k+1)=x^(k)-λ∇ f(x^(k)), converges to a stationary point, i.e., the set {x:∇ f(x)=0}. According to the gradient descent algorithm, i,e., x^(k+1)=x^(k)-λ∇ f(x^(k)), as f is β-smooth, we have f(x^(k+1)) (a)≤ f(x^(k))+∇ f(x^(k))(x^(k+1)-x^(k))+β/2x^(k+1)-x^(k)^2 (b)= f(x^(k))-λ∇ f(x^(k))^2+βλ^2/2∇ f(x^(k))^2 =f(x^(k))-λ(1-βλ/2)∇ f(x^(k))^2, where the inequality (a) follows from Lemma 3.4 in <cit.>, and the equality (b) is obtained from Eqn. (<ref>). Due to λ<2/β, it becomes ∇ f(x^(k))^2≤f(x^(k))-f(x^(k+1))/λ(1-βλ/2). Next, we have ∑_k=0^K∇ f(x^(k))^2≤1/λ(1-βλ/2)∑_k=0^K(f(x^(k))-f(x^(k+1)))=f(x^(0))-f(x^(K+1))/λ(1-βλ/2)≤f(x^(0))-f(x^*)/λ(1-βλ/2), where x^* denotes the global optimization point. Taking the limit as K→ +∞, we have ∑_k=0^+∞∇ f(x^(k))^2≤ +∞. Hence, lim_k→ +∞∇ f(x^(k))=0 is obtained. § RRCNN INNER LOOP BLOCK AND RRCNN §.§ RRCNN inner loop block The main operation in IF is the computation of moving average, which is essentially realized by the convolution operation, where the filter length depends on the given signal, and the filter weights are mainly given by some empirical functions selected artificially a priori. Therefore, it is very natural to convert the convolution operation into a 1-D convolutional neural network model, where both the filter length and the filter weights can be learnt adaptively according to the input signals given in advance. Furthermore, some ingenious mechanisms in deep learning, such as the nonlinear activation function, the residue learning <cit.>, etc., can be adopted to make it more flexible. The structure we design to mimic the inner while loop of Alg. <ref>, is graphically depicted in Fig. <ref>. Since it mainly contains the recurrence mechanism, the convolutional layer and the subtraction operation, we call it the recurrent residual convolutional neural network (RRCNN) inner loop block. As shown in Fig. <ref>, the inner loop mainly consists of a judgment-loop structure and a residue operation, and the judgment-loop structure is formed of two convolutional layers and a residual operation. Suppose X∈ℝ^N (the vectors in this article are column vectors by default unless otherwise specified.) denote the input, the output of the RRCNN inner loop block, called Ŷ∈ℝ^N, is computed as the following Alg. <ref>. And X-Ŷ is the local average of the input signal obtained by the RRCNN inner loop block. Mathematically, the output of the RRCNN inner loop block can be expressed as: Ŷ=F(X, W), where W denotes the undetermined weights in the RRCNN inner loop block, and the function F represents the structure of RRCNN inner loop block, which is composed of the operators including convolution, nonlinear activation and subtraction. The detailed process of F can be formulated as: F(X, W)=f(X^(S-1), W^(S-1)), where S represents the number of recursion in the RRCNN inner loop block, W^(S-1) is the undetermined weights in the (S-1)-th recursion, and the function f and X^(S-1) are defined as: { f(X^(i), W^(i))=tanh(X^(i)∗ W_1^(i))∗W̃_2^(i), X^(i+1)=X^(i)-f(X^(i), W^(i)), . where X^(0)=X, W^(i) is the undetermined weights in the i-th recursion that it includes the weights, denoted as W_1^(i)∈ℝ^K_1 and W_2^(i)∈ℝ^K_2, W̃_2^(i):=softmax(W_2^(i))={exp(W_2^(i)_l)/∑_k exp(W_2^(i)_k)}_l=1^K_2, ∗ is the 1-D convolution operation, and i=0, 1, …, S-2. 
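For readers who prefer code, the recursion above can be sketched in a few lines of NumPy: f applies a convolution, a tanh nonlinearity, and a second convolution whose weights are softmax-normalized (hence non-negative and summing to one), and the block repeatedly subtracts this learned local average. The filters below are random placeholders (in the actual model they are learnt by back-propagation), and returning the residual after S subtractions as Ŷ reflects our reading of Alg. 2 rather than the official implementation.

```python
import numpy as np

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

def f_step(x, w1, w2):
    # f(X, W) = tanh(X * W1) * softmax(W2): a learnable, normalized moving average.
    h = np.tanh(np.convolve(x, w1, mode="same"))
    return np.convolve(h, softmax(w2), mode="same")

def rrcnn_inner_block(x, weights):
    # weights: a list of S pairs (W1^(i), W2^(i)); random placeholders here,
    # learnt by back-propagation in the trained network.
    xi = np.asarray(x, dtype=float).copy()
    for w1, w2 in weights:
        xi = xi - f_step(xi, w1, w2)      # X^(i+1) = X^(i) - f(X^(i), W^(i))
    y_hat = xi                            # block output (our reading of Alg. 2)
    local_average = x - y_hat             # X - Y_hat is the local average
    return y_hat, local_average

# toy usage with random filters of length K1 = K2 = 7 and S = 3 recursions
rng = np.random.default_rng(0)
weights = [(0.1 * rng.normal(size=7), 0.1 * rng.normal(size=7)) for _ in range(3)]
t = np.linspace(0, 3, 600)
y_hat, avg = rrcnn_inner_block((3 + 2*np.cos(2*t)) * np.cos(2*t**2), weights)
```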
It is worth pointing out that the roles of the two 1-D convolutional layers in each judgment-loop are different. The role of the first convolutional layer, which is configured with a non-linear activation function (we select tanh in this work), is to enhance the nonlinear expression ability of the method. Whereas, the purpose of the second convolutional layer is to make the result more reasonable to describe the local average of the signal. Therefore, the non-negativity and normalization restrictions of its weights are added; and there is no nonlinear activation function configured with it. The use of padding in the two layers is to ensure that the length of the output is consistent with the input. We will discuss the training details of the RRCNN inner loop block in the following section. §.§ RRCNN After the RRCNN inner loop block for the identification of the local average of a given signal is constructed, we can cascade a finite number of RRCNN inner loop blocks together to derive the signal decomposition, which is called RRCNN, and is shown in Fig. <ref>. According to it, an input signal X∈ℝ^N (also denoted as X_0) can be decomposed into M IMFs Ŷ:={Ŷ_i}_m=1^M (each Ŷ_i∈ℝ^N) and a residue X_M∈ℝ^N. The detailed steps of RRCNN are listed in Alg. <ref>. The output of RRCNN can be formulated as: { Ŷ_m=F(X_m-1, W_m), X_m=X_m-1-Ŷ_m, . where m=1,2,…,M, X_0=X, F(X_m-1, W_m) is the m-th RRCNN inner loop block whose purpose is to extract an IMF from X_m-1, and W_m denotes the undetermined weights of the m-th RRCNN inner loop block. All the generated IMFs are concatenated as the outputs of the RRCNN model. The errors between the outputs, i.e., Ŷ, and the labels that are composed of the true first M IMFs, denoted as Y∈ℝ^N× M, are computed by the loss function. For example, the loss function can be expressed as: L(Ŷ, Y)=Ŷ- Y_F^2, where the errors are measured by mean square error (MSE), and ·_F denotes the Frobenius norm. In the RRCNN model equipped with the loss function as in Eqn. (<ref>), the computational complexity of the forward process of RRCNN is mainly attributed to the computation of the convolutional layer, which is O(N· K), where N and K denote the length of the signal and the size of the convolutional filter, respectively. The loss can be customized according to the characteristic of the decomposition task. For example, if the 3rd IMF are smooth, the quadratic total variation term, expressed as QTV(Ŷ_Ω_1):=∑_m∈Ω_1∑_t=1^N-1(Ŷ_(t+1),m-Ŷ_t,m)^2, can be added to the loss function, where Ω_1 represents the set of subscripts of those smooth components (here Ω_1={3}), i.e., L(Ŷ, Y)=Ŷ- Y_F^2+η QTV(Ŷ_Ω_1), where η≥ 0 is a penalty parameter, Ŷ is the dependent variable of the function F(·, ·), and its independent variables are X and W respectively. Moreover, if the 2nd and 3rd IMFs are orthogonal, an orthogonal constraint can be added to the loss function to ensure the orthogonality of the resulting components, i.e., L(Ŷ, Y) =∑_i∈Ω̂_2^cŶ_i-Y_i_2^2+∑_(i,j)∈Ω_2 W^o_ijŶ_i-Y_i_2^2+ W^o_ijŶ_j-Y_j_2^2, s.t. W^o_ij W^o_ij^⊤= I, where W^o_ij∈ℝ^N× N stands for the orthogonal matrix to be determined by min_{ W, W^o_ij}L(Ŷ, Y), Ω_2 (here Ω_2={(2,3)}) denotes the subscript pairs of those orthogonal components, Ω̂_2 (here Ω̂_2={2,3}) represents the set consisting of all subscripts that appear in Ω_2, and Ω̂_2^c={1,2,…,M}-Ω̂_2. In specific execution process, the orthogonal transformation W^o_ij of Ŷ_i and Ŷ_j can be regarded as adding a fully connected layer after the outputs of Ŷ_i and Ŷ_j. 
The two fully connected layers share weights, i.e., W^o_ij, and satisfy orthogonality, i.e., W^o_ij W^o_ij^⊤= I. In this case, the result of any IMF whose subscript meeting i∈Ω̂_2 is updated from Ŷ_i to W^o_ijŶ_i, and the results of other components remain unchanged. Compared with IF, RRCNN also contains two loops. The outer loop successively acquires M IMFs and one residue. And the purpose of the inner loop, i.e., the RRCNN inner loop block, is mainly to compute the local average by several iterations, and finally get the IMF and local average of the current input signal. On the other hand, these two methods have important differences, which can be summed up as: (1) The number of outer loop of the IF method is data-driven, which might be different for different signals. While it is given in advance for the proposed RRCNN approach. Therefore, RRCNN is suitable for decomposing the non-stationary signals containing the same number of mono-components. (2) In the IF method, the filter length that is a key parameter, is determined only by the signal under decomposition. However, its filter weights lack of adaptability, and they are basically determined by the filter length. For the RRCNN approach, the filter length in each convolutional layer is adaptive, and can be selected by hyper-parameter optimization. Moreover, its filter weights are data-driven, which makes RRCNN flexible in the characterization of the local average. (3) In addition to more flexibly computing the local average of the signal, the proposed RRCNN method has all the advantages of deep learning, such as: the non-linear expression ability, the customized loss function according to specific decomposition tasks, etc., which are missing in the traditional signal decomposition methods. Although the proposed RRCNN method is developed based on the IF, its ambition is not to replace the IF, or even to replace any existing signal decomposition method. The RRCNN is essentially a supervised deep learning model. This not only provides all the advantages that the existing signal decomposition methods are lacking, but also gives all the limitations of the supervision model itself. Such as, for any given signal, regardless of the decomposition performance, the existing signal decomposition methods can basically decompose it; however, the RRCNN model does not work without any training based on a set of input signals for which is known the ground truth. In addition, the training process of the RRCNN model can be regarded as the process of exploring the potential patterns of the input signals. If the patterns of the input signals are unique, their decomposition performance will also be greatly reduced. §.§ Training process and convergence analysis of RRCNN In the training process, the gradient-based back-propagation method is used to learn the filter weights that appear in the convolutional layers by using the training data. For the hyper-parameters in the RRCNN, including the filter lengths and the numbers of neurons in the convolutional layers, Hyperopt[Github website: https://github.com/hyperopt/hyperopt] is adopted to select their values by using the validation data, which is a Python library for serial and parallel optimization over awkward search spaces for hyper-parameters. Since when the parameter η in Eqn. (<ref>) is equal to 0, it degenerates to the case of Eqn. (<ref>), we only discuss the training processes and convergences of the models when their loss functions are given in Eqns. (<ref>) and (<ref>), respectively. 
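Before detailing the gradients, a compact sketch of the full cascade and of the MSE-plus-QTV loss discussed above is given below, reusing rrcnn_inner_block from the earlier sketch; which components receive the QTV penalty, and the value of η, are illustrative choices rather than prescriptions from the text.

```python
import numpy as np

def rrcnn_decompose(x, blocks):
    # blocks: one list of (W1, W2) pairs per RRCNN inner loop block (M blocks);
    # relies on rrcnn_inner_block from the previous sketch.
    imfs, residue = [], np.asarray(x, dtype=float).copy()
    for weights in blocks:
        y_m, _ = rrcnn_inner_block(residue, weights)
        imfs.append(y_m)
        residue = residue - y_m           # X_m = X_{m-1} - Y_hat_m
    return np.stack(imfs, axis=1), residue

def qtv(c):
    # Quadratic total variation of a single component.
    return np.sum(np.diff(c) ** 2)

def rrcnn_loss(y_hat, y_true, smooth_idx=(), eta=0.0):
    # Frobenius-norm error between predicted and true IMFs, plus an optional QTV
    # penalty on the components indexed by smooth_idx (e.g. a smooth 3rd IMF).
    mse = np.sum((y_hat - y_true) ** 2)
    penalty = sum(qtv(y_hat[:, m]) for m in smooth_idx)
    return mse + eta * penalty
```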
We first discuss the situation of the RRCNN model equipped with the loss function (<ref>). For convenience, we consider that the output of the model only produces one IMF, and the number of recursion is also limited to 1, that is, M=1 and S_1=1 in Fig. <ref>. Suppose that (X^j,Y^j)_j=1^J⊂ℝ^N×ℝ^N is a given set of training samples, where J denotes the number of samples. The processe and the loss function of RRCNN are as follows: X_C1^j=σ(X^j∗W_1), X_C2^j=X_C1^j∗W̃_2, Ŷ^j=X^j - X_C2^j, L(Ŷ, Y)=∑_j=1^JŶ^j-Y^j_2^2+η QTV(Ŷ^j), where W̃_2=softmax(W_2), σ=tanh in our model, η is a non-negative parameter, W_1∈ℝ^K_1 and W_2∈ℝ^K_2 are the undetermined convolution filters. According to the gradient descent method and the chain derivation rule, W_1 and W_2 are learned following the back propagation method, that is, W_h^(n+1)=W_h^(n)-λ∇_W_hL, (h=1,2) where λ denotes the learning rate, ∇_W_hL=(∂/∂W_h_1,…,∂/∂W_h_K_1)^⊤L, ∂ L/∂W_2_i=-∑_j=1^J∑_t,l=1^N∂ L/∂Ŷ^j_t∂X^j_C2_t/∂W̃_̃2̃_l∂W̃_̃2̃_l/∂W_2_i, ∂ L/∂W_1_i=-∑_j=1^J∑_t,l=1^N∂ L/∂Ŷ^j_t∂X^j_C2_t/∂X^j_C1_lσ^'(R^j_l)∂ R^j_l/∂W_1_i, ∂ L/∂Ŷ^j_t=2(Ŷ^j_t-Y^j_t), ∂X^j_C2_t/∂W̃_̃2̃_l=X^j_C1_(t-⌊K_1/2⌋+l), R^j_l=(X^j∗W_1)_l, ∂ R^j_l/∂W_1_i=X^j_l-⌊K_2/2⌋+i, ∂W̃_̃2̃_l/∂W_2_i= exp(W_2_i+W_2_l)/(∑_kexp(W_2_k))^2,  if  l≠ i; exp(W_2_i)∑_k≠ iexp(W_2_k)/(∑_kexp(W_2_k))^2,  otherwise,  and ∂X^j_C2_t/∂X^j_C1_l= W̃_̃2̃_(l-t+⌊K_1/2⌋),  if  1≤ l-t+⌊K_1/2⌋≤ K_1; 0,  otherwise. Next, we discuss the convergence of the training process of the RRCNN model expressed in Eqn. (<ref>). For the RRCNN model defined in Eqn. (<ref>), the sequences {W_1^(n)} and {W_2^(n)} are generated by the gradient descent algorithm. Then, there exists a positive constant L independent of the input data, so that when the learning rate λ is less than L, {W_1^(n)} and {W_2^(n)} converge to their corresponding stationary points, i.e., the sets {W_1: ∇_W_1L= 0} and {W_2: ∇_W_2L= 0}, respectively. From the Lagrange's mean value theorem, it is obvious to find that the composition function composed by two functions, the β_1- and β_2-smooth respectively, is still β-smooth (β≤β_1β_2) under the condition that it can be composited. According to Eqn. (<ref>), the RRCNN model can be seen as a composition function composed of a series of functions. Then, combining with Theorem <ref>, we have that if the function of the i-layer in the RRCNN model is β_i-smooth, it can be proved that the weights sequences obtained by the gradient descent method, converge under the condition that the learning rate satisfies λ≤2/Πβ_i. For the function W̃_2=softmax(W_2) that is included in the RRCNN model, the second partial derivative of each W̃_̃2̃_l (l=1,… K_2) with respect to its independent variable vector W_2 exists and satisfies: |∂^2 W̃_̃2̃_l/∂W_2_i∂W_2_j|=exp(W_2_i)/(∑_kexp(W_2_k))^3∑_k≠ iexp(W_2_k)|∑_kexp(W_2_k)-2exp(W_2_i)|, if  l=i=j, exp(W_2_l)| 2exp(W_2_i)-∑_kexp(W_2_k)|, if  l≠ i=j, exp(W_2_i+W_2_l+W_2_j), if  l≠ i≠ j, ≤ 2. Hence, it states that each W̃_̃2̃_l is β-smooth (and β≤ 2) according to the Lagrange's mean value theorem. For the rest functions involved in RRCNN include quadratic function, tanh, and 1-D convolution operation, these functions can be easily proved to be β-smooth (β here is a general representation, and the β value of each function may not be the same.) by judging that their second derivative functions exist and bounded. Therefore, the conclusion is proved. For the case of the model with an orthogonal constraint in the loss function L in Eqn. 
(<ref>), the orthogonal constraint is a new obstacle compared to the previous model. However, in the field of optimization, the study of optimization problems with orthogonal constraints has become very common. And the gradient-based projection method can be used to find W^o with convergence guarantees <cit.>. Furthermore, under the idea of back propagation used in updating the weights of the neural networks, the solutions of problem (<ref>) can be obtained according to Alg. <ref>. Since the convergence analysis of W calculated based on the gradient descent method is consistent with Theorem <ref>, and that of W^o calculated based on the gradient projection method has been discussed in the literature <cit.>, so the convergence of Alg. <ref> can be obtained intuitively. Similar to the case with loss function in Eqn. (<ref>), we assume that in the RRCNN model, there are only two RRCNN inner loop blocks, i,e., M=2 in Fig. <ref>. Furthermore, the IMFs obtained by the two blocks satisfy orthogonality, i.e., Ω_2={(1,2)}, Ω̂_2={1,2} and Ω̂_2^c=∅ in Eqn. (<ref>). In this case, the RRCNN model can be reduced to a simpler formula, which looks close to the expressions of some orthogonal constraint algorithms, which reads: min_ W, W^o W^oŶ- Y_F^2, s.t. W^o W^o^⊤= I, where Ŷ depends on W. § EXPERIMENTS To evaluate the performance of the proposed RRCNN inner loop block and RRCNN models, we test them against seven aspects, which are: 1) Can RRCNN inner loop block be used to find the local average of the non-stationary signal? 2) Is RRCNN inner loop block still effective on noisy signal? 3) Can RRCNN be used to decompose the non-stationary signals? 4) Can RRCNN be effective on noisy signals? 5) Can RRCNN be effective on the signals composed of orthogonal mono-components? 6) Can RRCNN be effective on solutions of differential equations? 7) Is RRCNN capable of processing real signals? Furthermore, we discuss the computational time of RRCNN, and compare it with other methods. In addition, limited by the length of the paper, we submit some experiments as supplementary materials. In the experiments, we divide each of the constructed input data into the training and validation datasets with ratio 7:3. Moreover, our proposed signal average method, i.e., the RRCNN inner loop block, is compared with the existing signal average methods based on cubic spline envelopes (CSE) <cit.>, optimization model (OP), a segment power-function based envelopes (SPFE) <cit.> and iterative filtering (IF) <cit.>, respectively. For simplicity, we denote the averages obtained from the CSE, OP, SPFE, and IF as CSA, OPA, SPFA, and IF respectively. 
And the RRCNN[Code of RRCNN is available at https://github.com/zhoudafa08/RRCNN] method will be compared with the state-of-the-art methods, including EMD, IF[Code of IF: http://people.disim.univaq.it/∼antonio.cicone/Software.html], VMD[Code of VMD: https://www.mathworks.com/help/wavelet/ref/vmd.html]<cit.>, continuous wavelet transform based synchrosqueezing (SYNSQ_CWT) <cit.>, short time Fourier transform based synchrosqueezing (SYNSQ_STFT[Codes of SYNSQ_CWT and SYNSQ_STFT: https://github.com/ebrevd o/synchrosqueezing]) <cit.>, INCMD[Code of INCMD: https://github.com/sheadan/IterativeNCMD] <cit.>, EWT[Code of EWT: https://ww2.mathworks.cn/help/wavelet/ug/empirical-wave let-transform.html] <cit.>, FDM[Code of FDM: https://www.researchgate.net/publication/2745702 45_Matl ab_Code_Of_The_Fourier_Decomposition_Method_FDM] <cit.>, and its variant called DCT_GAS_FDM[Code of DCT_GAS_FDM: https://www.researchgate.net/publication/32629 4577_MATLABCodeOfFDM_DCT_DFT_FIR_FSASJuly2018] <cit.>. In addition, in the experiments for verifying the robustness of noise interference, the proposed RRCNN method is compared with the ensemble EMD (called EEMD[Code of EEMD: http://perso.ens-lyon.fr/patrick.flandrin/emd.html]) model <cit.>; and it is compared with the M-LFBF[Code of M-LFBF: http://perso.ens-lyon.fr/nelly.pustelnik/]<cit.> model to verify the orthogonality of the decomposed components. The results are measured by the metrics listed in Tab. <ref>, where Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are used to measure the errors between the predicted results Ŷ∈ℝ^N and the ground truth Y∈ℝ^N, and ρ(c_1, c_2) is used to evaluate the orthogonality between the resulting components c_1∈ℝ^N and c_2∈ℝ^N. §.§ Can RRCNN inner loop block be used to find the local average of the non-stationary signal? We first evaluate the performance of the proposed RRCNN inner loop block in solving the local average of the non-stationary signal. Several signals composed of the linear function and the mono-component function, or just the mono-component function, are constructed as the inputs, where the mono-component function can generally be expressed as a(t)cosθ(t), which meets a(t),θ^'(t)>0 ∀ t, and the changes in time of a(t) and θ^'(t) are much slower than that of θ(t). Ideally, the average of the mono-component signal is zero. The input signals and the corresponding labels we construct in this part are listed in Tab. <ref>. It should be pointed out that the label here represents the first IMF of the corresponding input signal, not the local average. The local average can be computed by subtracting the label from the input signal. After the RRCNN inner loop block is trained with the inputs in Tab. <ref>, we select three mono-component signals with different instantaneous frequencies and instantaneous amplitudes, discussed in Examples 1-2, which are not included in the inputs, to test the performance of the RRCNN inner loop block, respectively. Example 1: x(t)=(3+2cos(2t))cos(2t^2), t∈[0,3]. Example 2: x(t)=(2t+cos(2t^2))cos(12t+t^2+2cos(t)), t∈[0,3]. Example 3: x(t)=(3+2cos(3t))cos(5t^2), t∈[0,3]. The moving averages of the signals in Examples 1-3 obtained from different methods are shown in Fig. <ref> (a)-(c), respectively, and the errors between the obtained moving averages and the true average are listed in Tabs. <ref>-<ref>, respectively. 
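The test signals of Examples 1–3 are easy to regenerate for experimentation; the snippet below does so on a uniform grid over [0, 3] (the sampling rate is not stated in the text, so the 1000-point grid is an assumption).

```python
import numpy as np

t = np.linspace(0, 3, 1000)   # assumed sampling grid on [0, 3]
x_ex1 = (3 + 2*np.cos(2*t)) * np.cos(2*t**2)
x_ex2 = (2*t + np.cos(2*t**2)) * np.cos(12*t + t**2 + 2*np.cos(t))
x_ex3 = (3 + 2*np.cos(3*t)) * np.cos(5*t**2)
```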
According to the results, we can observe the following phenomena: (1) The existing methods are prone to boundary effects, which can be seen from the left boundaries of Fig. <ref> (a)-(c). However, the RRCNN inner loop block method can avoid this problem to a certain extent. (2) When the signal is in a situation where the amplitude changes quickly and the frequency changes slowly, the RRCNN inner loop block performs best among all models according to the left parts of Fig. <ref> (a)-(c). When the amplitude change is reduced and the frequency change is accelerated, its performance may even be inferior to other models, which can be seen from the right half of Fig. <ref> (c). (3) Even though the RRCNN inner loop block has some dazzling and bleak performance compared with other local average methods, it can be seen from Tabs. <ref>-<ref> that the MAE and RMSE of RRCNN are significantly reduced compared to other models. The reason for the phenomenon above can be attributed to: (1) The averages obtained by the comparison methods are basically determined by the local information of the signal, which makes the results reasonable when the information is sufficient (e.g., the part of the amplitude change is reduced and the frequency change is accelerated); and the results differ greatly when the information is insufficient (e.g., the part of the amplitude changes quickly and the frequency changes slowly). (2) The filter weights of each convolutional layer in the RRCNN are shared, they are determined by all the information contained in the whole signal. Therefore, the average obtained by the RRCNN is relatively stable, and it is not easy to cause interference due to the large difference in the changing of the signal amplitude and frequency. §.§ Is RRCNN inner loop block still effective on noisy signal? In this part, we consider the robustness of the RRCNN inner loop block model to noise based on the constructed inputs and labels in Section <ref>. Specifically, each input signal is perturbed with additive Gaussian noise with the signal-to-noise ratio (SNR) set to 15dB, and the corresponding label remains unchanged, as detailed in Tab. <ref>. Similar to the section above, we select a noisy signal, which is essentially the signal in Examples 2-3 with additive Gaussian noise, and is described in Examples 4-5, to test the performance. Example 4: x(t)=(2t+cos(2t^2))cos(20t+t^2+2cos(t))+ε(t), where t∈[0,3] and SNR=15dB. Example 5: x(t)=(3+2cos(3t))cos(5t^2)+ε(t), where t∈[0,3] and SNR=15dB. The results of Examples 4-5 are shown in Fig. <ref> (a)-(b) and Tabs. <ref>-<ref>. From the results, we can find that the RRCNN inner loop block performs the the most robust among all models for the signal with additive Gaussian noise. Specifically, in Example 4, the RRCNN inner loop block reduces the MAE and RMSE from 0.1936, 0.2420, for the second the best method (i.e., SPFA), to 0.1190, 0.1500, respectively; and in Example 5, the RRCNN inner loop block reduces the MAE and RMSE from 0.2004, 0.2520, for the second the best method (i.e., SPFA), to 0.1290, 0.1623, respectively. §.§ Can RRCNN be used to decompose the non-stationary signals? After evaluating the proposed RRCNN inner loop block model in calculating the local average, the next task is to assess the decomposition performance of RRCNN for non-stationary signals. To demonstrate the problem, we only consider decomposing the signals consisting of two components. 
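The noisy test signals of Examples 4–5 can be reproduced with a small helper that adds white Gaussian noise at a prescribed SNR; only the SNR (15 dB) is specified in the text, so the grid and random seed below are assumptions.

```python
import numpy as np

def add_awgn(x, snr_db, rng=None):
    # Add white Gaussian noise so that 10*log10(P_signal / P_noise) = snr_db.
    rng = rng or np.random.default_rng(0)
    noise_power = np.mean(x**2) / 10 ** (snr_db / 10)
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)

t = np.linspace(0, 3, 1000)
x_ex4 = add_awgn((2*t + np.cos(2*t**2)) * np.cos(20*t + t**2 + 2*np.cos(t)), snr_db=15)
x_ex5 = add_awgn((3 + 2*np.cos(3*t)) * np.cos(5*t**2), snr_db=15)
```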
The input signals in the part can be divided into two categories: one is composed of a mono-component signal and a zero signal, and the other is of two mono-components with close frequencies. The former is to train RRCNN describe the zero local average of the mono-component signal; and the latter is to enable RRCNN to decompose signals with close frequencies, which is the main factor causing the mode mixing effect. The inputs and labels are constructed and shown in Tab. <ref>. To challenge the proposed model, we hereby choose the signal composed of two cosine signals with frequencies of 2.5Hz and 3.4Hz respectively, described in Example 4, that is more prone to introduce the mode mixing effect in the existing methods. Example 6: x(t)=cos(5π t)+cos(6.8π t), t∈[0,6]. In Example 6, the components predicted by the trained RRCNN model are compared with those obtained from the state-of-the-art methods in signal decomposition. The metrics of the errors between the obtained components and the labels, measured by MAE and RMSE, are shown in Tab. <ref>. In addition, to compare RRCNN and the existing methods more intuitively, we select the top three methods with comprehensive performance from Tab. <ref>, i.e., RRCNN, EMD, and INCMD, and plot their obtained components and the corresponding time-frequency-energy (TFE) representations in Fig. <ref> (a), (b), respectively. It should be noted that the identification of an optimal TFE representation is a research topic on its own, and it is out of the scope of this work. Here, we set as TFE representation the Fourier quadrature transforms that was proposed in <cit.>. According to the results, we have the following conclusions: (1) The mode mixing problem is indeed a big challenge for some of the existing methods. For example, the maximum value of x(t) is 2, but the MAEs of the two components obtained by the SYNSQ_STFT method are as high as 0.6620 and 0.6778, respectively, which basically does not separate the cosine signals with frequencies of 2.5Hz and 3.4Hz from x(t). (2) Many methods achieve satisfactory decomposition for x(t). For example, it can be seen from the left plots in the 2nd and 3rd rows of Fig. <ref> (a) that the components obtained by the EMD, INCMD, and RRCNN methods have relatively consistent oscillation modes with the ground truths. This viewpoint can also be drawn from Fig. <ref> (b), although there are some obvious fluctuations, the TFE representations of the two components, obtained by EMD, INCMD, and RRCNN methods, are basically separated just like the those of the real components. (3) Nonetheless, a closer look at the right plots in the 2nd and 3rd rows of Fig. <ref> (a) reveals the subtle advantage of the RRCNN model at the boundaries. Due to the incompleteness of the waveform at the boundary, many existing methods are deeply affected by it, as are EMD and INCMD. However, the weights of the convolution filters in the RRCNN model depend on the entire waveform of the whole training samples, which reduce consistently the boundary effects. §.§ Can RRCNN be effective on noisy signals? Similar to the RRCNN inner loop block, we verify the robustness of RRCNN against additive Gaussian noise in this part. The constructed inputs and labels are listed in Tab. <ref>, where the inputs are generated by introducing additive Gaussian noise with the SNR set to 25dB to the signals in Tab. <ref>. 
After the RRCNN model is trained, we choose the signal consisting of two mono-components and additive Gaussian noise with a SNR of 15dB as the test data, which is given in Example 7. Since the smaller SNR value, the greater the noise, the noise of x(t) is larger than that in the training data. Example 7: x(t)=cos(5π t)+sin(8π t+2t^2+cos(t))+ε(t), t∈[0,6], SNR=15dB. The errors between the ground truths and the components obtained by different methods, measured by MAE and RMSE, are reported in Tab. <ref>. Furthermore, the components, errors, and TFE representations of the three best performing methods, i.e., FDM, DCT_GAS_FDM, and RRCNN, are shown in Fig. <ref> (d), (e), respectively. According to the results, we can find that RRCNN works for the signals with additive Gaussian noise, although there is no overwhelming advantage, especially over FDM. Specifically, from the left plots in the 2nd-3rd rows of Fig. <ref> (d), RRCNN basically separates the two mono-components, and the resulting components are consistent with the ground truths in the oscillation mode. Moreover, as shown in the right plots in the 2nd-3rd rows of Fig. <ref> (d), the errors of RRCNN are relatively evenly dispersed in the entire time period, while those of the FDM and DCT_GAS_FDM methods are both small in the middle and large at the boundaries, which is consistent with the observations in Section <ref>. At last, since RRCNN is a method designed with the help of CNN in the time domain, it can obtain an effect comparable to the existing methods in the time domain. Due to the lack of a prior information of RRCNN in the frequency domain, the effect of this method might be slightly reduced from the time-frequency domain. According to the results in Fig. <ref> (e), the TFE distributions of the two mono-components obtained by the FDM, DCT_GAS_FDM, and RRCNN, are obviously spaced apart, but the TFE distribution of the component c_1 obtained by RRCNN has a much more severe jitter than the true component sin(8π t+2t^2+cos(t)) in the interval t∈[2, 4]. However, from the TFE representations, we can also see that the RRCNN method is able to reduce boundary effects compared to other methods. §.§ Can RRCNN be effective on the signals composed of orthogonal mono-components? In this part, we test the proposed RRCNN model on the signal composed of the orthogonal components. As discussed in Section <ref>, the RRCNN model in this part should have been equipped with the loss function with an orthogonal constraint, we directly add an inner product term to the loss function to control the orthogonality, i.e., γ|<c_1, c_2>|/c_1_2c_2_2, where γ denotes the positive penalty parameter, and c_1 and c_2 are two components that are orthogonal. Then we train the model by the back propagation method based on the loss function. Although there is no guarantee of convergence in this case, it is simple, computationally efficient, and basically meets the expectation of orthogonality from the experimental results. The Fourier basis is adopted for the construction of the inputs of RRCNN because they are orthogonal. The constructed inputs and labels are given in Tab. <ref>. After the RRCNN is trained, we use it to decompose the signal given in Example 8, which is composed of two mono-components that are orthogonal and close in frequency. Example 8: x(t)=cos(7t)+sin(9t), t∈[0, 2π]. The orthogonality and the errors between the resulting components and the corresponding ground truths are reported in Tab. <ref>. 
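The inner-product term added to the loss to promote orthogonality is simple to evaluate; the snippet below checks it on the two ground-truth components of Example 8, where it is essentially zero, and on a deliberately mixed component, where it is not (the 2048-point grid is an assumption).

```python
import numpy as np

def orth_term(c1, c2):
    # |<c1, c2>| / (||c1||_2 ||c2||_2): the quantity multiplied by gamma in the loss.
    return np.abs(np.dot(c1, c2)) / (np.linalg.norm(c1) * np.linalg.norm(c2))

t = np.linspace(0, 2*np.pi, 2048, endpoint=False)
c1, c2 = np.cos(7*t), np.sin(9*t)
print(orth_term(c1, c2))        # essentially 0: the Example 8 components are orthogonal
print(orth_term(c1, c1 + c2))   # noticeably larger for a mixed component
```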
According to the results, the EWT and DCT_GAS_FDM methods perform best in terms of orthogonality, which is attributed to the fact that the former is based on the wavelet transform and the latter is based on the discrete cosine transform, both of which have strong orthogonal constraint on the decomposition results. For RRCNN, its orthogonality is promoted by minimizing the loss function. Therefore, on one hand, the results of RRCNN tend to find a balance in each item of the loss function. On the other hand, the limited iterative process cannot ensure that the results are completely converges to the true solution. Combined with orthogonality and error metrics, the overall performance of the RRCNN model is still satisfactory. Specifically, it is not the best in terms of orthogonality, but it is also very close to orthogonality, outperforming the other models except EWT and DCT_GAS_FDM; in terms of error, its two components are also the closest to the true components. Moreover, we select the top three methods in terms of the error metrics from Tab. <ref>, that is, FDM, DCT_GAS_FDM, and RRCNN, and plot their obtained components, errors, and TFE representations in Fig. <ref> (c), (f), respectively. From the plots, we can draw a conclusion similar to Example 6, that is, the RRCNN model can obtain the performance that is comparable to the state-of-the-art methods in the time domain, especially at the boundary, which can weaken the the impact of incomplete waveforms at the boundary to a certain extent. However, due to the lack of a priori information in the frequency domain in the design of RRCNN, its advantage in terms of TFE distribution might be reduced when compared with other methods especially in the middle of the signal. However its performance are overall better than existing methods at the boundaries, also in the time-frequency domain. §.§ Can RRCNN be effective on solutions of differential equations? To address this question, and following what has been done in the seminal work by Huang et al. <cit.>, we test RRCNN against the solutions of a Duffing and Lorenz equations. The labels are necessary in the training process of the RRCNN model, however, they are not known in this instance. Since the EMD method works well for these two types of signals, we decided to use the results of EMD as the labels to verify the learning and generalization capabilities of the RRCNN model. Example 9: We consider the following Duffing equation: ẍ+α x+β x^3=γcos(ω t). We focus our attention on the decomposition of ẋ(t). We first construct the inputs by solving the equation using the Euler method with the parameters set to α∈ [-1, 0), β∈[0.8, 1.2], ω∈[1.0, 1.4] with step size 0.1, and γ∈[0.04, 0.06] with step size 0.005, respectively, where t∈[0, 300] and the initial conditions: {x(0)=1, ẋ(0)=1}. And then, the first two IMFs of each input obtained by the EMD are collected as the labels. After the RRCNN is trained, we apply it to decompose the signal ẋ(t) with α=-0.85, β=1.05, γ=0.045, w=1.3, which was not included in the training data. The results are depicted in Fig. <ref>. Both the resulting components produced by RRCNN are very close to those of EMD. Example 10: The Lorenz equation is mathematically expressed as: ẋ=-σ(x-y), ẏ=rx-y-xz, ż=-bz+xy. We take into account the decomposition of x(t). 
The inputs are the solutions x(t) obtained with the ode45 solver (code: https://www.mathworks.com/help/matlab/ref/ode45.html) by varying the parameters σ∈[9, 11] with step size 0.2, r∈[19, 21] with step size 0.2, and b∈[2, 5] with step size 1, where t∈[0, 50] and the initial conditions are {x(0)=-10, y(0)=0, z(0)=0}. As in Example 9, we treat the first two IMFs of each input produced by EMD as the labels. The results for the signal x(t) with parameters σ=10.5, r=20.5, b=3.5, predicted by the trained RRCNN model, are shown in Fig. <ref>. They show that RRCNN has good learning and generalization capabilities for the solution of the Lorenz equation, and can essentially match the performance of EMD.

§.§ Is RRCNN capable of processing real signals?

We now employ the RRCNN model to process real data, namely the length of day (LOD, data source: http://hpiers.obspm.fr/eoppc/eop/eopc04/eopc04.62-now, start date: Jan. 1, 1962, end date: May 24, 2022), which is widely used in the verification of signal decomposition methods. To generate the signals from the LOD data for use as inputs to train the RRCNN model, we first split the LOD series into segments of length 720 (about two years) with a stride of 180 (about half a year). Then, as in the case of the solutions of differential equations, the main challenge in training the RRCNN model on real signals is the calibration of the ground truth. We use the EWT method to produce the labels for each segment, because it can produce a pre-set number of components.

Example 11: After the RRCNN model is trained, we apply it to an input consisting of the LOD data ranging from Dec. 12, 2018 to Nov. 30, 2020, which is not included in the training dataset.

Example 12: We also apply the trained RRCNN model to another input, consisting of the LOD data ranging from Dec. 7, 2019 to Nov. 25, 2021, which is likewise not included in the training dataset.

The components obtained from EWT and RRCNN for these inputs are depicted in Fig. <ref> (a)-(b). The plots show that RRCNN can approximate EWT well.

§.§ Computational time of different methods

To compare the computational time of the proposed method with that of the other methods, we apply them all to decompose the signal in Example 4 and record the corresponding runtimes. Since, in the prediction phase, RRCNN has the advantage of decomposing multiple signals in parallel, we also compare the running time of these methods when decomposing multiple signals. In this case, we do not construct new signals to be decomposed, but simply repeat the processing of the signal in Example 4, which does not affect the comparison of running times. All experiments are performed in Python 3.8.12 or Matlab R2020b on a Dell Precision 5820 tower with an Intel(R) Xeon(R) W-2102 processor (2.90 GHz), 64 GB of memory, and the Ubuntu 18.04.3 operating system. We depict the running time of the different methods when decomposing different numbers of signals in Fig. <ref>. From it, we have the following findings: when decomposing only one signal, the computational efficiency of RRCNN ranks in the middle of the 11 methods compared in this work; however, as the number of signals increases to 10, the computational efficiency of RRCNN begins to improve, and its ranking rises to 4th, surpassing the IF and VMD methods; when the number increases to 100, its computational efficiency ranks 2nd, ahead of the EMD and DCT_GAS_FDM methods.
Although there is still a gap between RRCNN and the EWT method, which ranks 1st in computational efficiency, the gap shrinks significantly as the number of signals increases.

§ CONCLUSION

In this paper, we use deep learning techniques to tackle the problem of decomposing a non-stationary signal into oscillatory components. Specifically, we first construct the RRCNN inner loop block for obtaining the local average of a given signal, and then cascade these blocks into a deeper network, called RRCNN, which is used to decompose a given signal. Since the proposed RRCNN model is based on a deep learning framework and is a supervised model, it inherits the advantages of both types of models. First, the convolutional filter weights in the model are learned from the input signals, which makes the proposed method more adaptive. Second, common tools from deep learning, such as the residual structure and nonlinear activation functions, can be added to increase the expressive ability and flexibility of the proposed model. Third, the proposed model can be customized according to the application. For example, when processing signals composed of orthogonal components, an inner-product term can be added to the loss function to enhance the orthogonality of the derived components. To verify the performance of the proposed model, we compare it with existing models from seven aspects, using both artificial and real data in the experiments. All results show that the proposed method handles boundaries, mode mixing effects, and the orthogonality of the decomposed components better, and is more robust, than the existing methods. On the other hand, RRCNN has the classical limitations of supervised models; for example, the labels must be given in advance in the training phase. Therefore, the goal of this work is not to replace any existing method, but to propose a completely new kind of approach, based on deep learning and supervised learning, that complements the existing methods and yields a more flexible and adaptive way of processing signals. In the future, we plan to extend this work to multivariate signals. Furthermore, given that the main limitation of the RRCNN approach is that it requires other decomposition methods to calibrate the ground truths before the training phase, we also plan to develop an unsupervised learning method that achieves performance similar to the RRCNN algorithm but does not require any training based on other techniques.

§ DECLARATION OF COMPETING INTEREST

No conflict of interest exists in the submission of this manuscript, and the manuscript has been approved by all authors for publication. We declare that the work described is original research that has not been published previously and is not under consideration for publication elsewhere, in whole or in part.

§ ACKNOWLEDGEMENTS

F. Zhou's research was partially supported by the National Natural Science Foundation of China [grant number 11901113], the Guangdong Basic and Applied Basic Research Foundation [grant number 2020A1515110951], and the China Scholarship Council [grant number 202008440024]. A. Cicone is a member of the Italian GNCS of the INdAM. H. Zhou's research was partially supported by the National Science Foundation of the US [grant number DMS-1830225] and the Office of Naval Research [grant number N00014-18-1-2852].
§ REFERENCES

[Huang et al. (1998)] N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, H. H. Liu, The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis, Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 454 (1998) 903–995.
[Aksoy et al. (2017)] E. E. Aksoy, A. Orhan, F. Wörgötter, Semantic decomposition and recognition of long and complex manipulation action sequences, International Journal of Computer Vision 122 (2017) 84–115.
[Thilagaraj and Rajasekaran (2019)] M. Thilagaraj, M. P. Rajasekaran, An empirical mode decomposition (EMD)-based scheme for alcoholism identification, Pattern Recognition Letters 125 (2019) 133–139.
[Zhou et al. (2019)] F. Zhou, H.-M. Zhou, Z. Yang, L. Yang, EMD2FNN: A strategy combining empirical mode decomposition and factorization machine based neural network for stock market trend prediction, Expert Systems with Applications 115 (2019) 136–151.
[Smith (2005)] J. S. Smith, The local mean decomposition and its application to EEG perception data, Journal of the Royal Society Interface 2 (2005) 443–454.
[Deléchelle et al. (2005)] E. Deléchelle, J. Lemoine, O. Niang, Empirical mode decomposition: an analytical approach for sifting process, IEEE Signal Processing Letters 12 (2005) 764–767.
[El Hadji et al. (2009)] S. D. El Hadji, R. Alexandre, A.-O. Boudraa, Analysis of intrinsic mode functions: A PDE approach, IEEE Signal Processing Letters 17 (2009) 398–401.
[Hong et al. (2009)] H. Hong, X. Wang, Z. Tao, Local integral mean-based sifting for empirical mode decomposition, IEEE Signal Processing Letters 16 (2009) 841–844.
[Cicone et al. (2016)] A. Cicone, J. Liu, H. Zhou, Adaptive local iterative filtering for signal decomposition and instantaneous frequency analysis, Applied and Computational Harmonic Analysis 41 (2016) 384–411.
[Cicone and Pellegrino (2022)] A. Cicone, E. Pellegrino, Multivariate fast iterative filtering for the decomposition of nonstationary signals, IEEE Transactions on Signal Processing 70 (2022) 1521–1531.
[Tu et al. (2020)] G. Tu, X. Dong, S. Chen, B. Zhao, L. Hu, Z. Peng, Iterative nonlinear chirp mode decomposition: A Hilbert-Huang transform-like method in capturing intra-wave modulations of nonlinear responses, Journal of Sound and Vibration 485 (2020) 115571.
[Peng and Hwang (2010)] S. Peng, W.-L. Hwang, Null space pursuit: An operator-based approach to adaptive signal separation, IEEE Transactions on Signal Processing 58 (2010) 2475–2483.
[Oberlin et al. (2012)] T. Oberlin, S. Meignen, V. Perrier, An alternative formulation for the empirical mode decomposition, IEEE Transactions on Signal Processing 60 (2012) 2236–2246.
[Hou and Shi (2011)] T. Y. Hou, Z. Shi, Adaptive data analysis via sparse time-frequency representation, Advances in Adaptive Data Analysis 3 (2011) 1–28.
[Pustelnik et al. (2012)] N. Pustelnik, P. Borgnat, P. Flandrin, A multicomponent proximal algorithm for empirical mode decomposition, in: 2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO), IEEE, 2012, pp. 1880–1884.
[Pustelnik et al. (2014)] N. Pustelnik, P. Borgnat, P. Flandrin, Empirical mode decomposition revisited by multicomponent non-smooth convex optimization, Signal Processing 102 (2014) 313–331.
[Dragomiretskiy and Zosso (2013)] K. Dragomiretskiy, D. Zosso, Variational mode decomposition, IEEE Transactions on Signal Processing 62 (2013) 531–544.
[ur Rehman and Aftab (2019)] N. ur Rehman, H. Aftab, Multivariate variational mode decomposition, IEEE Transactions on Signal Processing 67 (2019) 6039–6052.
[Zhou et al. (2016)] F. Zhou, L. Yang, H. Zhou, L. Yang, Optimal averages for nonlinear signal decompositions—another alternative for empirical mode decomposition, Signal Processing 121 (2016) 17–29.
[Daubechies et al. (2011)] I. Daubechies, J. Lu, H.-T. Wu, Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool, Applied and Computational Harmonic Analysis 30 (2011) 243–261.
[Gilles (2013)] J. Gilles, Empirical wavelet transform, IEEE Transactions on Signal Processing 61 (2013) 3999–4010.
[Singh et al. (2017)] P. Singh, S. D. Joshi, R. K. Patney, K. Saha, The Fourier decomposition method for nonlinear and non-stationary time series analysis, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 473 (2017) 20160871.
[Singh (2018)] P. Singh, Novel Fourier quadrature transforms and analytic signal representations for nonlinear and non-stationary time-series analysis, Royal Society Open Science 5 (2018) 181131.
[Wang et al. (2022)] Z. Wang, C. M. Wong, A. Rosa, T. Qian, F. Wan, Adaptive Fourier decomposition for multi-channel signal analysis, IEEE Transactions on Signal Processing 70 (2022) 903–918.
[Hemanth and Estrela (2017)] D. J. Hemanth, V. V. Estrela, Deep Learning for Image Processing Applications, volume 31, IOS Press, 2017.
[Li (2018)] H. Li, Deep learning for natural language processing: advantages and challenges, National Science Review 5 (2018) 24–26.
[Rudy et al. (2019)] S. H. Rudy, J. N. Kutz, S. L. Brunton, Deep learning of dynamics and signal-noise decomposition with time-stepping constraints, Journal of Computational Physics 396 (2019) 483–506.
[Zhu et al. (2019)] W. Zhu, S. M. Mousavi, G. C. Beroza, Seismic signal denoising and decomposition using deep neural networks, IEEE Transactions on Geoscience and Remote Sensing 57 (2019) 9476–9488.
[Zheng et al. (2020)] X. Zheng, W. Chen, Y. You, Y. Jiang, M. Li, T. Zhang, Ensemble deep learning for automated visual classification using EEG signals, Pattern Recognition 102 (2020) 107147.
[Sahoo et al. (2022)] S. Sahoo, P. Dash, B. Mishra, S. K. Sabut, Deep learning-based system to predict cardiac arrhythmia using hybrid features of transform techniques, Intelligent Systems with Applications 16 (2022) 200127.
[Ding et al. (2022)] L. Ding, Z. Chen, Y. Pan, B. Song, Mine microseismic time series data integrated classification based on improved wavelet decomposition and ELM, Cognitive Computation (2022) 1–21.
[Barz et al. (2019)] B. Barz, E. Rodner, Y. G. Garcia, J. Denzler, Detecting regions of maximal divergence for spatio-temporal anomaly detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (2019) 1088–1101.
[Zhang et al. (2021)] D. Zhang, W. Ding, B. Zhang, C. Liu, J. Han, D. Doermann, Learning modulation filter networks for weak signal detection in noise, Pattern Recognition 109 (2021) 107590.
[Bubeck et al. (2015)] S. Bubeck, et al., Convex optimization: Algorithms and complexity, Foundations and Trends in Machine Learning 8 (2015) 231–357.
[He et al. (2016)] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[Ablin et al. (2018)] P. Ablin, J.-F. Cardoso, A. Gramfort, Faster ICA under orthogonal constraint, in: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 4464–4468.
[Gao et al. (2018)] B. Gao, X. Liu, X. Chen, Y.-x. Yuan, A new first-order algorithmic framework for optimization problems with orthogonality constraints, SIAM Journal on Optimization 28 (2018) 302–332.
[Huang et al. (1999)] N. E. Huang, Z. Shen, S. R. Long, A new view of nonlinear water waves: the Hilbert spectrum, Annual Review of Fluid Mechanics 31 (1999) 417–457.
[Wu and Huang (2009)] Z. Wu, N. E. Huang, Ensemble empirical mode decomposition: A noise-assisted data analysis method, Advances in Adaptive Data Analysis 1 (2009) 1–41.
Cloud Native Software Engineering
Brian S. Mitchell
arXiv:2307.01045v1 [cs.SE], July 3, 2023
Cloud compute adoption has been growing since its inception in the early 2000s, with estimates that the size of this market in terms of worldwide spend will increase from $700 billion in 2021 to $1.3 trillion in 2025<cit.>. While there is significant research activity in many areas of cloud computing technologies, we see little attention being paid to advancing the software engineering practices needed to support the current and next generation of cloud native applications. By cloud native, we mean software that is designed and built specifically for deployment to a modern cloud platform. This paper frames the landscape of Cloud Native Software Engineering from a practitioner's standpoint, and identifies several software engineering research opportunities that should be investigated. We cover specific engineering challenges associated with software architectures commonly used in cloud applications, along with the incremental challenges that are expected with emerging IoT/Edge computing use cases.

§ INTRODUCTION AND CONTEXT

Delivering managed computing services on hosted infrastructure started in the late 1990s with the introduction of the Software-as-a-Service (SaaS) model. One of the early pioneers of this model was Salesforce.com<cit.>, which launched in 1999. Unlike other companies that licensed software deployed on customer-owned equipment, SaaS companies provide a pay-as-you-go subscription model: they manage all of the software and compute infrastructure, and you pay a monthly charge that entitles access to the solution from any device at any time. While SaaS solutions marked the start of shifting software license spend to usage-based spend, public cloud computing as we know it today can be attributed to the launch of AWS (Amazon)<cit.> in early 2006, with Azure (Microsoft)<cit.> and GCP (Google)<cit.> following in 2008. The primary early adopters of cloud computing were technology companies that innovated patterns and practices, and open sourced tools and frameworks, that have become best practices for running resilient and scalable business services in the public cloud. Over the past 10 years cloud computing has been growing in organizations of all sizes across many different industries.

Larry Wall, the creator of Perl, once noted a saying in the software design industry: “Good. Fast. Cheap. Pick two." Software engineering involves making difficult decisions based on informed tradeoffs.
For example, it would not be hard to argue that in order to move faster and build things cheaper, compromises on software features, software quality, and/or security would be required. Using the utility of the cloud, coupled with modern cloud computing tooling, one can now argue that you can build better software faster and cheaper. It is not that Larry Wall's insights were incorrect, but we now have the technologies and practices to redefine good in terms of fast using the cloud.

When computing components are deployed to the cloud, the simplest way (and thus the most popular way) to do this is via automation<cit.>. The automate everything practice embraced by cloud computing not only allows deployments to be fast, but it also favors ephemeral computing components. These components by their nature are easier to test<cit.> and can be started, stopped, paused, or replaced at any time. This combination of capabilities enables software engineers to rapidly deploy software to a known state at any time. With these building blocks, new well-tested features can be quickly and consistently rolled out to users in very small batches. Goodness of the solution can now be validated via feedback from users, either directly, or via monitoring and instrumentation of their behavior.

These cloud-enabled capabilities have the potential to advance software engineering practices in many ways, but transforming these practices across the entire community comes with many challenges. We think this represents a significant opportunity for the software engineering field given the likelihood that most industrial systems moving forward will be deployed on cloud runtimes[By cloud runtime we include public, private and hybrid cloud infrastructure]. Specifically:

* Helping software engineers manage the expanded cognitive load required to design, build, deploy, and operate at scale applications in the cloud. We discuss this throughout the remaining sections of this paper.

* Identifying opportunities to accelerate and scale the software engineering skillsets needed to deploy a broader suite of applications to the cloud. Many organizations will want to move beyond deploying externally facing web and mobile applications to the cloud using their top engineers. This will require developing new skills for the broader engineering organization as more of their core business moves to cloud computing.

* Investigating how software engineering and computer science education can expand to address the demands of industry to create new, and retool existing, software engineers for the cloud.[We will talk about cloud certifications later, but they are targeted towards using the services of a cloud provider, not the design and architecture of cloud native applications] Most cloud-proficient software engineers appear to build their skillsets on the job and with online resources rather than in formalized academic programs.

* Understanding software engineering needs for new architectures enabled by the cloud. For example, IoT and smart devices that run at the edge increase the complexity of software engineering given the distributed nature of these platforms. These challenges will be discussed in Section <ref>.

* Addressing the non-technical challenges that organizations face in adopting cloud-centric engineering best practices. Consider Google, which published in 2021<cit.> that it ran over 700K experiments in production that resulted in over 4K search product changes.
Netflix open sourced tools<cit.> that they use for chaos testing to validate platform resiliency. Comfort with strategies that involve testing and randomly breaking things in production is embedded in the DNA of technology companies, but is often met with caution in traditional organizations.

We will address a number of these opportunities in the subsequent sections of this paper. The next section will introduce Cloud Native from a software engineering vantage point. Throughout this paper, by cloud native, we are referring to systems designed specifically to favor managed cloud platform services (PaaS/FaaS)<cit.>, and not systems that are lifted and shifted<cit.> from an on-premise virtual machine to a virtual machine that runs in the cloud (IaaS).

§ WHAT IS CLOUD NATIVE COMPUTING?

Before we explore the software engineering landscape for the cloud, we need to address exactly what we mean by cloud native computing. According to the Cloud Native Computing Foundation (CNCF)<cit.>, “Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds." Amazon's definition is “Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds". Google offers the definition “Cloud native means adapting to the many new possibilities—but very different set of architectural constraints—offered by the cloud compared to traditional on-premises infrastructure."

The primary theme in these definitions centers around the role that cloud platforms play in enabling the creation of cloud native applications. They also don't clearly define “cloud native", which we take to mean any application that is specifically designed to be deployed on a cloud platform. We think a better definition of cloud native computing, one that focuses more on software engineering, is: “Cloud native applications are well-architected systems that are “container" packaged and dynamically managed". Specifically:

Well Architected Systems - By this we mean systems that adhere not only to established software engineering best practices but also embrace specific functional and non-functional capabilities offered by the cloud. For example: how are computing components such as services/APIs identified, how do they work with each other, how are security requirements met, and how is the system designed for resiliency and scale?

Container Packaged - The term container is overloaded in the cloud computing terminology landscape. In many places it is equated to a standardized package<cit.> that is managed by Docker<cit.> technologies - aka “a docker container". We take a more generic view of container packaging. Specifically, we think container packaging is a mechanism to package and deploy code that is ephemeral, can operate across a variety of different hardware architectures (e.g., Intel, ARM, microcontrollers, etc.), and at runtime is supervised. Supervision includes full lifecycle management associated with version identification, startup, shutdown, health checks, security scanning, and monitoring. Examples of container packaging and supervision include Docker, Docker Compose, Kubernetes, and serverless<cit.>. We also include in this category the emerging interest in using server-side WebAssembly<cit.> as a way to package and deploy cloud native application services.
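As a concrete illustration of container packaging, the following is a minimal, hypothetical Dockerfile for a small Python service; the base image, file names, port, and health-check endpoint are illustrative assumptions rather than a recommended configuration, but they show how the packaging, the exposed interface, and lifecycle hooks for a supervising runtime are declared alongside the code.

```dockerfile
# Package a hypothetical Python service (app.py) into an ephemeral, supervised container image.
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Document the listening port and declare a health check so the supervising
# runtime (Docker, Kubernetes, etc.) can manage the container's lifecycle.
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')" || exit 1

CMD ["python", "app.py"]
```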
Dynamically Managed - Consider the cloud as a large, highly distributed, special-purpose operating system. Just like any operating system, there are a number of resources like storage, compute, network, and security services that are needed by applications. The job of an operating system is to dynamically manage and optimize the allocation of these resources against the realtime computing demand on the overall system. When done well, every process being managed by the OS will perceive that it has access to the resources it needs, when it needs them. In a similar way, a cloud service provider, via Application Programming Interfaces (APIs), provides and manages resources for cloud native applications dynamically. Classical operating systems manage physical resources on a single system, whereas cloud resources are virtualized and distributed, while also being resilient and scalable. For example, block storage that supports virtual machine reads and writes is automatically replicated across servers in different special-purpose data centers. Outside of initial configuration, the user does not worry about how durability is provided given it is dynamically managed by the cloud service provider. Other examples include auto scaling of virtual machines with health checks, or more advanced services like Kubernetes<cit.> that can scale up or down dynamically based on demand. Function as a Service (FaaS) solutions take this a step further by running code on demand when a certain event happens. AWS even open sourced a micro-VM called Firecracker<cit.> that it created to support dynamically managing serverless workloads at scale.

Now that we have provided a definition, we describe next a number of interesting software engineering problems that warrant investigation.

Managing cloud native technical assets. In 2011 Adam Wiggins authored a set of technical principles that enable software engineers to create, manage, and release code in support of cloud native applications. These principles were branded “The Twelve-Factor App"<cit.>. Over the years they were updated and revalidated<cit.>, but consistently hold up as recommended engineering practices. One of the key challenges is that while none of these practices is overly complex, they require mastery and discipline. Also, existing standards enforced by enterprises might act as blockers to some or all of these practices. For example, some organizations might not have the comfort or the necessary controls to ensure that all deploys, to all environments, are driven through a source code control system.

Identifying an appropriate cloud native software architecture. The technical principles associated with 12-factor apps are a good start; however, they only focus on how to manage cloud native technical assets. Good cloud native solutions are generally architected as a suite of horizontally composable components. This model introduces several interesting challenges for cloud native software engineers that must be addressed. What are these components? How do we identify them? What will guide how they should be built? We think some of the newer concepts around app meshes<cit.> and data meshes<cit.> provide a good start for shaping the overall architecture. As far as identifying the architecture components themselves, we think concepts from Domain Driven Design (DDD)<cit.> can be refreshed to support this activity.

Upskilling Software Engineers on the use of Quality Attributes to make informed technology tradeoff decisions.
Most cloud providers offer many different technology choices for creating cloud native components. Applying discipline around quality attributes should guide technology stack decisions. For example, Figure <ref> shows three different ways to deploy cloud native APIs along with some associated quality attributes. It should be noted that the values of these quality attributes will change for different APIs; the figure assumes these hypothetical APIs are very lightweight, event-triggered, and manage their state via an elastic cache. Thus, for the scenario shown in the figure, the VM option on the left does not make sense given the high cost, large scale-up/down times, and complex engineering effort. The container option in the middle and the serverless option on the right both seem like good options. The ultimate decision will be driven by the desire to have a high degree of control over cost, and by the nature of the workload. For example, if the expected traffic has massive near-realtime spikes, a serverless solution might be preferred. If the workload is not event-driven, or has many runtime dependencies such as database connections, then a container-based solution might be a better choice. As we move more towards computing at the edge, containers might not be possible since they depend on the Linux kernel, so emerging lighter-weight alternatives such as WebAssembly (WASM) with WASI<cit.> might be a good choice. Cloud native software engineers must be able to make these types of choices and resist over-standardizing based on personal or organizational preference, letting software architecture practices grounded in quality attributes guide cloud native platform choices.

Organizing teams for cloud native success. In 1967, Mel Conway published a paper called “How Do Committees Invent" - Fred Brooks cited this paper in The Mythical Man-Month<cit.>, calling it Conway's law<cit.>. Conway's law states “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure". In the early days of Amazon, Jeff Bezos introduced the idea of the “2 pizza team rule"<cit.>, where a team should be no larger than can be fed by two pizzas. The basic ideas are rooted in the notion that productive teams should be small and independent. This aligns nicely with cloud native concepts: one way to ensure that components are independent and interoperate only via their published interfaces is to organize teams the same way. Over the years various organizational models and changes have been debated. Full-stack teams bring together front-end, back-end, testing, and infrastructure professionals into a common team where they have full responsibility for their technical assets. Shifting left, along with the emergence of DevOps, brings a testing and automation focus to teams that allows them to increase quality and productivity. One problem is that while these concepts work well for driving individual team productivity, they are difficult to scale to larger organizations with multiple products. Several companies have also published their attempts to scale their practices. One popular model was published by Spotify<cit.>, where “Squads" represent full-stack teams, complemented by concepts like “Tribes" for coordinating squads and “Guilds" to address cross-cutting technical concerns. Commercial models have also been created to drive organizational changes to support cloud native architectures.
One such example is the SAFe<cit.> framework, which is interesting in that it favors prescriptive organization and process rigor over streamlining software engineering practices in order to increase scale. We think there are some interesting problems in this space given the lack of alignment on the best ways to organize teams to work on cloud native applications at scale. Specifically, just copying a model used in one organization and applying it to another does not address many of the challenges associated with things like politics and culture.

Software engineering demands for API-based technical products. Historically, most applications are built to solve a targeted user or business problem end-to-end. In Mark Richards' book entitled “Software Architecture Patterns"<cit.>, Chapter 1 is focused on the Layered Architecture Pattern, which is shown in Figure <ref>. Richards' layered pattern expands on the 3-tier architecture pattern<cit.> that calls for the isolation of the presentation, business logic, and database layers. An important attribute of these patterns is that the end-user interfaces only with the presentation layer, allowing the remaining layers to be hidden and secured against direct access. One of the foundational enablers of cloud computing is the plethora of first-class integration services provided by the platform. These capabilities open the door for new software architectures based on offering API-enabled services as a product. We propose a conceptual model for this type of architecture in Figure <ref>; while it is layered, the layers have different responsibilities than the ones discussed by Richards. Several well-known examples of API products are Google Maps (for location services), Twilio (for messaging), and Stripe (for payment). These all represent API-enabled capabilities designed specifically for embedding into other applications. As this model expands in popularity, software engineers will have to become familiar with new architecture patterns for designing API-centric products. For example, some of the best AI models are offered as a service by OpenAI<cit.>, and the healthcare industry is being mandated to offer API services to support interoperability<cit.> regulations. Some of these companies are even taking the opportunity to use APIs as a competitive differentiator; one example is the developer portal provided by Cigna<cit.>. Prior to the mainstream adoption of cloud native applications it was not possible to work with your bank, healthcare provider, auto insurer, and favorite retail stores through custom developed software.

§ THE CLOUD IS EXPANDING TO THE EDGE

The underlying services that the major cloud providers offer to their customers continue to expand and evolve. These advancements provide significant innovation capabilities to customers, but also put pressure on how to effectively engineer solutions that take advantage of these services cost effectively. Figure <ref> shows a high-level view of the computing services in the modern cloud. While it was once easy to identify the boundary of where the cloud started and ended, it is no longer easy to define the edge of the cloud. The following section will provide an overview of the evolution of the cloud, along with some of the challenges that have to be overcome by software engineers.

§.§ The Basic Cloud Architecture

At a high level, the basic cloud architectures deployed by the major providers exhibit many similarities.
The foundational infrastructure building block of cloud compute is called an Availability Zone (AZ). An AZ is a custom-designed data center that hosts cloud infrastructure (compute, storage, and network) and runs cloud provider services on behalf of customers. A Region is a physical location where a collection of 2 or more AZs is located. The AZs within a region are connected together with a fully redundant, high-bandwidth, low-latency network. The goal of a region is to have AZs close enough so that they can behave like a single cluster, but also separated by enough distance to isolate them from issues associated with power failure, earthquakes, tornados, and so on. AWS, as an example, uses the general guideline of 100 km (60 miles)<cit.> for placing AZs within a region. The global footprint of a cloud provider is defined by the number of regions and locations it has across the globe, along with the purpose-built underlying network it uses to interconnect them.

Given the cost and complexity of deploying cloud regions around the globe, cloud providers expand their network reach through the use of edge locations. Edge locations are useful for a couple of reasons. First, they serve as a point of presence to lower connection latency to the cloud, and second, they can run services at the edge, which offers additional benefits. One of the classical applications to run at the edge is a content delivery network (CDN). CDNs speed up web and mobile applications. For digital web and mobile applications, the combination of Regions, AZs, and Edge Locations could be considered the boundary of the cloud. These are shown on the right side of Figure <ref>. The major cloud providers continue to focus on expanding the number of services they support at the edge to drive even better performance and reduce network latency.

From a software engineering perspective, the basic cloud architecture described above introduces additional cognitive load on software engineers:

* Planning application deployment starts with the design of a virtual data center (VDC). VDCs logically carve out storage, compute, network, and security policies from the cloud provider for customer usage. Traditional software engineers are not trained to think of the software design process as beginning with the design of a virtual data center and all of the complexity that comes with it. Historically, data centers were designed by specialty engineers and inherited “as is" into the final software architecture. The data center topology is now a critical software engineering concern.

* Quality attributes such as privacy, resiliency, reliability, and scalability are foundational concepts that software architects use to reason about systems. These now move out of the conceptual realm and require a deeper understanding of technical constructs that become part of the software design itself. For example, deploying microservices across different subnets, where each subnet is in a different AZ within a region. Also, the declarative definition of the security policies that govern access to these microservices, along with their entitlements to access other cloud resources, becomes part of the software product itself.
* Although the cloud itself provides an infrastructure model to create resilient solutions that run at scale, it is up to the software engineer to architect things properly to take advantage of these capabilities[Some patterns such as rehosting<cit.>, a.k.a. “lift-and-shift", should not be considered cloud native patterns.]. For example, it is still possible to deploy an application to a single virtual machine instance in the cloud, which without additional controls will not elastically scale, nor will it be resilient to failure. Thus, to enable the creation of cloud services, software engineers must have mastery of newer patterns for distributed applications (especially asynchronous event-based architectures).

* While cybersecurity has always been an important consideration for software engineers, the cloud materially expands these responsibilities. Everything in the cloud is secured by policy, but as mentioned earlier, software engineers now need to deal with security requirements across the entire OSI model<cit.> stack in addition to some unique cloud requirements. Generally software engineers are comfortable with security at Layer 7 (the Application Layer) of the OSI model, using techniques such as OAuth 2<cit.> to secure different types of digital assets. These responsibilities now expand to authoring and deploying policies that govern network access across subnets, and to attaching the policies necessary to use managed services. In addition, software engineers must deploy and ensure proper configuration of virtualized security appliances such as web application firewalls (WAFs) and traditional firewalls that are now virtualized and software-defined. As attacks get more sophisticated, software engineers must also make decisions around introducing additional security capabilities into their solutions, such as bot detection, defenses against credential stuffing via MFA, and protection against supply chain attacks. The complexity of properly configuring, keeping track of, and managing cloud resources is also a new cloud-specific concern for software engineers. These problems are themselves being addressed by software which needs to be deployed, configured, and managed; one example is Cloud Custodian<cit.>, which was open sourced by Capital One and donated to the CNCF.

* One clear benefit of deploying to the cloud is that the easiest path to do so requires everything to be automated. While software engineers are comfortable with automation associated with software tasks like testing, they are not accustomed to automating infrastructure deployment. This becomes even more challenging given that many of the existing infrastructure automation tools were designed for non-programmers, relying on verbose, complex, and error-prone configuration formats like YAML and JSON. Some progress in this space has been made via DSLs like Terraform<cit.> and tools that use real programming languages like Pulumi<cit.>.

* Another foundational cloud concern for software engineering is that design decisions have a material influence on operational runtime costs. This is often referred to as finops, short for financial operations. At its core, the cloud transforms compute, network, storage, and security into a pay-as-you-go utility. Software engineers generally don't factor things like programming language selection, database platforms, processor hardware architecture, frameworks, and fully managed services into their design from the perspective of cost and carbon footprint impact. We will explore this topic more in Section <ref>.

§.§ The Emergence of Edge Computing

With the rapid growth of devices that are connected to the internet, we are now entering the era of edge computing<cit.>. Edge computing is architecturally different from traditional cloud computing.
Consider a cloud-enabled web or mobile application. The architecture of these applications is often based on calling cloud-hosted APIs and then using the data returned from them to power the user experience. This architecture will not scale to meet the needs of all of the smart devices that connect to the internet. As its name implies, edge computing moves more computing services to the edge, with requirements not found in web or mobile applications. Specifically:

* They must be able to work autonomously. The cloud would not scale to support all device events, so local processing is used to filter important events from less important ones.

* They must be able to work fully disconnected, or with unreliable network connectivity.

* They must be able to perform compute locally, either independently or in local clusters.

These added capabilities essentially extend the edge of the cloud all the way back to the client devices themselves, as shown in Figure <ref>. The overall architecture of the modern cloud that extends to the edge is shown in Figure <ref>[Figure copied from <https://www.spiceworks.com/tech/cloud/articles/edge-vs-fog-computing/>]. According to research by IoT Analytics, there were 14.4 billion smart devices connected to the internet in 2022, expected to rise to almost 30 billion by 2025<cit.>. This many deployed devices could not be supported if they all required connectivity to the large cloud data centers discussed earlier. Processing will need to move to the devices themselves, supported by a new layer of cloud compute that is closer to the devices. This new layer of compute is often referred to as fog computing. The term comes from a play on the word cloud: clouds are high up in the sky, while fog is closer to the ground.

As cloud providers expand their footprint across the globe, creating new regions with multiple availability zones represents a major strategic decision because of the time, expense, and other factors that go into rolling out multiple large data centers. Creating new regions is required to increase compute capacity as global cloud adoption expands, and to meet specific compliance requirements associated with conducting business in the cloud. For example, many countries are adopting data residency laws, which place controls over where data is stored at rest. The trend of cloud providers searching for strategic locations for new Regions, or expanding the number of AZs within a region, will continue as cloud demand increases. Given the massive investments required, it is likely that the number of major cloud providers will remain small. A November 2022 TechTarget report<cit.> highlights that the big 3 providers – AWS, Microsoft, and Google – account for 62% of the overall cloud market. While it is unlikely that there will be significant disruption among the major cloud providers, the fog layer is likely to be federated across many different players. This layer needs to be deployed close to the edge devices themselves, and will likely be addressed by existing last-mile internet service providers (ISPs) and by telecommunication companies offering 5G services that already have deployed infrastructure to meet these needs.

§ THE EDGE AND EXPANSION OF SOFTWARE ENGINEERING CONCERNS

The evolution of cloud computing over the past decade has increased the decision landscape for software engineers. This section will highlight some of the new concerns that software engineers must address in cloud and edge computing design.
§.§ Processor Hardware Architecture Diversity

Ten years ago we did not have cloud providers creating custom processors for general compute or special-purpose AI applications, nor did we have all of the microcontrollers running at the edge of the Internet. Familiarity with making informed hardware architecture choices now becomes an important concern for software engineers. Some examples include:

* On May 23, 2023, AWS announced the third generation of their custom ARM-based microprocessor called Graviton 3. AWS claims that workloads running on Graviton 3 are 50% faster than on Intel/AMD processors, consume 60% less power, and are 20% cheaper. From a software engineering perspective these benefits seem like a no-brainer to take advantage of, until you start to factor in other requirements such as being able to maintain ARM-based builds of your software, including all dependencies, which may not be available or optimized for ARM. Additionally, organizations may impose other requirements such as running certain security products on VMs; these must also be available and certified.

* Since the realization that GPUs can improve the performance of training AI models, cloud providers have innovated further with custom AI microprocessors. In 2016 Google introduced the Tensor Processing Unit (TPU) to accelerate training deep learning models, and in 2018 AWS created the Inferentia chip to accelerate inference. AWS also entered the training space to compete with Google's TPU with the Trainium chip in 2020. With all of these new AI hardware choices, software engineers must be savvy about aligning hardware choices with software training and inference library requirements. For example, AWS announced an SDK for Trainium called Neuron to enable engineers to use popular AI frameworks such as TensorFlow and PyTorch.

* As we move to the edge, software engineers now more routinely have to create solutions for microcontrollers and other devices that have additional constraints. These devices might be battery powered, compute constrained, difficult to access or update, and/or have unreliable network connectivity. Programming frameworks and tools routinely available on modern servers might not be available or viable for these devices. Consider the popular trend of deploying code in containers. To a large extent, containers assume that there is an underlying Linux kernel, which might not be possible on these purpose-built devices. Instead of falling back to creating alternative versions of their software in lower-level systems programming languages like C/C++, software engineers must become familiar with emerging solutions in this space. Consider TinyGo<cit.>, an alternative Go compiler specifically created to bring the Go ecosystem, which is popular in the cloud, to microcontrollers. Another example is WasmEdge<cit.>, which brings the power of WebAssembly (WASM) to the server and to edge devices. WasmEdge can run embedded WASM code produced by compilers for languages that are popular for creating cloud native applications, such as Rust, Go, and JavaScript.

* Managing tool chains for multiple hardware architectures. The Java programming language introduced the concept of “Write Once, Run Anywhere". It accomplished this by creating Java Virtual Machines (JVMs) for different hardware platforms, and running compiled bytecode consistently across these platforms. While this works well, the Java ecosystem has some challenges in the broader cloud native space.
Specifically, to use Java all dependencies must be Java-based, and although the JVM itself is impressive, its size and compute requirements might be challenging to support on edge devices. Newer programming languages like Rust and Go have adopted an open cross-compiler philosophy so that a compiler on any platform can create binaries for any other platform. Containers are another popular cloud native technology. With the need to support diverse processor architectures, container packaging becomes more complex, and containers might not be practical on the edge given that they assume the presence of a Linux kernel. Docker recently released a technical preview of WebAssembly support that may help address this issue<cit.>.

§.§ Polyglot Programming

We think the move to cloud native architectures requires software engineers to rethink the criteria by which programming languages are selected. In 2013 Meyerovich and Rabkin<cit.> reported on empirical human factors that impact programming language selection. Their findings cite reasons such as open source libraries, existing code, and programmer experience as the primary drivers for selecting programming languages for new projects. To complicate matters further, in some organizations approved programming languages are standardized, removing the software engineering community from the decision-making loop. The general criteria used to evaluate programming languages often examine attributes like object-oriented vs. functional, high-level vs. low-level, type-safe vs. dynamic, general purpose vs. domain specific, and so on. While these are good attributes for categorizing programming languages, they don't factor in criteria aligned to cloud native computing objectives. For example, <cit.> presents a comparative analysis of Java vs. Kotlin. Kotlin has been increasing in popularity within the Java community because it is less verbose and introduces modern programming language features, while interoperating well with existing Java code. One of the criteria for good cloud native software discussed earlier is being able to move fast; thus adopting a language like Kotlin that is more productive and easier to test represents a good engineering tradeoff for organizations with significant investments and skillsets in Java.

We think programming language selection should be based on a careful tradeoff analysis, with cloud native computing architecture decisions guiding the selection. This will often lead to a polyglot outcome, where more than one programming language is selected. One interesting study in this space was conducted by Cordingly et al.<cit.>, who examined Java, Python, Go, and Node.js against a collection of different Function as a Service (FaaS) workloads. We like their strategy of using drivers such as performance and cost as the evaluation criteria. They also used specific FaaS concerns such as cold and warm start times in their analysis. We think the approach used by Cordingly et al. should be expanded to other cloud native architecture options. For example, with container-based solutions, languages like Java tend to produce very large containers and require significant resources associated with bringing along the JVM. Modern languages like Go, designed with the cloud in mind[Go is the primary language used to build significant cloud native platforms like Docker and Kubernetes], produce very small containers and have a robust and modern runtime.
Languages like JavaScript and TypeScript, coupled with the Node.js runtime, are highly optimized to support asynchronous event-based architectures. Languages like Rust provide C/C++ performance, but have a modern runtime and provide compiler-enforced memory safety. In addition to the languages themselves, factors such as the maturity and completeness of the cloud-provider-supplied language-specific SDKs should also be considered in the selection criteria.

§.§ Multi-Region and Multi-Cloud

The major cloud providers run massive infrastructures that have historically provided availability that any individual enterprise would envy. However, while rare, cloud providers have had outages that resulted in significant impact to customers. As cloud adoption continues to grow, the impact of these periodic outages will also continue to grow. Additionally, the major cloud providers compete with each other via their innovation investments in fully managed services. This has the consequence of cloud vendor lock-in, making it hard for a customer to migrate from one cloud provider to another without having to redevelop and redeploy their software on the other provider's platform. Software engineers need to be well versed in the options and consequences, from a cost and scale perspective, of making multi-region and multi-cloud decisions. For example:

* As mentioned in Section <ref>, cloud providers offer resiliency and scale using the concept of a Region comprised of multiple Availability Zones. Most cloud providers handle the complexity of enabling services to run across AZs in a way that is transparent to customers. Little engineering effort is required by customers to continue to operate correctly when a single AZ fails within a region. However, if an entire region fails, additional engineering is required to continue correct operation. These efforts increase the complexity of the software and its testing, and increase cost, as the customer becomes responsible for data replication strategies and redundant storage to operate across regions. Netflix has authored engineering notes on the strategy they use on their techblog<cit.>, and has also created open source modules that perform chaos testing to validate their platform's ability to survive various types of failures.

* While cloud providers offer many services that are similar, they also compete by trying to differentiate their suite of fully managed PaaS/FaaS offerings. Software engineers can lower the blast radius of failures by diversifying services across different cloud providers. For example, if a software engineer decided for their workloads that AWS has the superior FaaS solution with Lambda, and GCP has the best managed Kubernetes service with GKE, consideration could be given to a best-of-breed strategy. While this would reduce the exposure to a failure of an individual cloud, it also comes with risk and complexity. Latency could suffer as intra-application calls have to leave one cloud and enter another. Also, many cloud providers offer tiered pricing models, so maximizing usage with one cloud provider drives the best discounts.

* Even when able to run successfully across different cloud providers or multiple regions, software engineers must design applications that continue to operate with reduced function in the face of failure. These tradeoffs need careful consideration and should be domain focused.
For example, on an e-commerce platform, favoring services that allow customers to place and pay for orders can be prioritized over the ability to fulfill orders until normal cloud operations are restored. * Being able to run applications across cloud providers, or to port to another cloud provider, requires software engineers to make careful product selection decisions. For example, consider AWS DynamoDB and Azure Cosmos DB. These are both database solutions that are often compared to each other with respect to resilience, hyper-scalability and performance. From a software interface perspective they are very different and would require a significant rewrite to port from one solution to another. Choosing an alternative technology like managed PostgreSQL for databases would be easier to support across different cloud providers. Kubernetes is another example of a platform that is more similar to support across different cloud providers. In all cases, the processes to deploy and secure cloud assets across different cloud providers are not the same, so adopting a multi-cloud strategy, even with similar technologies, will still introduce cost and complexity. § CLOUD COMPUTING EDUCATION One area we think is underserved by the research community is investigation into software-engineering-specific cloud education. Queries of Google Scholar<cit.> for research on Cloud Native Software Engineering produce very few results of relevance. Most of the results focus on work to advance specific technical areas within the field of software engineering, for example, running AI/ML workloads, automation, software testing, or supporting microservice-based architectures. So how do software engineers learn cloud computing? The cloud providers themselves have done a good job filling part of the need via education and certification programs. Cloud certifications are highly valued by industry and by software engineers themselves, who often post their certification accomplishments on personal LinkedIn pages. While cloud certifications are valuable, they focus on how to accomplish activities on a cloud platform versus how to engineer good software products using cloud services. We discussed FaaS as an important cloud software engineering opportunity earlier. Certifications would enable engineers to understand how to deploy a FaaS component on AWS Lambda, Azure Functions, or Google Cloud Functions, but they would not cover other important considerations such as whether the architecture is event-driven, how state will be managed, SDK robustness, component granularity (e.g., is a function too small), and so on. We think an important research question is whether academic institutions should play a leadership or partnership role in addressing the educational needs of software engineers working in the cloud. It appears that most of the collective knowledge in this space is acquired through on-the-job experience and via publications authored by industry professionals. § CONCLUSION This paper brings attention to the need for additional focus on expanding software engineering practices given the trend of moving to cloud and edge computing. We examined many cloud computing architectural concepts through the lens of a software engineering practitioner and summarized many opportunities that would benefit the community. Many organizations transition to the cloud by taking their top engineers and focusing them on moving existing digital assets like web and mobile applications to the cloud.
Early successes, coupled with attractive technical capabilities valued by engineers and a pay-as-you-go model valued by managers, continue to drive acceleration of the cloud. We think this next wave of cloud will require more software engineering rigor, as we need to scale the number of qualified engineers working in the cloud while at the same time supporting the move of core enterprise applications to the cloud that don't have the same architectural characteristics as digital applications. We also discussed new business opportunities that could only emerge with the cloud, such as API-based products and edge computing. These are complex architectures that will require software engineers to master additional skills. Brian S. Mitchell is an accomplished technologist, engineer, educator, software engineering researcher, speaker, strategist, leader, and enterprise-scale change agent. Brian is currently a member of the Department of Computer Science at Drexel University. His career has spanned both industry and academia, including holding the Distinguished Engineer role at a Fortune 15 company. He provided technical thought leadership and directed teams responsible for driving disruptive digital innovation that led to the creation of multiple generations of products that help millions of people every day. Brian also has more than 20 years of teaching experience in a variety of areas including Software Engineering, Software Architecture, Operating Systems, Networks, Computer Architecture, Programming Languages, and Distributed Systems. His recent research interests include exploring several interesting problems at the intersection of Software Engineering, Software Architecture and Cloud Native Computing. Previously he was one of the founders of the Search-Based Software Engineering research space, publishing many influential papers focused on recovering software architecture insights directly from source code. Dr. Mitchell holds BS, MS and PhD degrees in Computer Science, and an ME in Computer & Telecommunication Engineering.
http://arxiv.org/abs/2307.02099v2
20230705081924
The Predictability of Stock Price: Empirical Study on Tick Data in Chinese Stock Market
[ "Yueshan Chen", "Xingyu Xu", "Tian Lan", "Sihai Zhang" ]
math.NA
[ "math.NA", "cs.NA" ]
Yueshan Chen^a,b ([email protected]), Xingyu Xu^b ([email protected]), Tian Lan^b ([email protected]), Sihai Zhang^b,c,* ([email protected]). ^a School of Cyberspace Science and Technology, University of Science and Technology of China. ^b Key Laboratory of Wireless-Optical Communications, CAS. ^c School of Micro-Electronics, University of Science and Technology of China. ^* Corresponding author. Whether or not stocks are predictable has been a topic of concern for decades. The efficient market hypothesis (EMH) says that it is difficult for investors to make extra profits by predicting stock prices, but this may not be true, especially for the Chinese stock market. Therefore, we explore the predictability of the Chinese stock market based on tick data, a widely studied type of high-frequency data. We obtain the predictability of 3,834 Chinese stocks by adopting the concept of true entropy, which is calculated by the Lempel-Ziv data compression method. The Markov chain model and the diffusion kernel model are used for prediction and compared against the upper bounds on predictability, and it is concluded that there is still a significant performance gap between the forecasting models used and the theoretical upper bounds. Our work shows that more than 73% of stocks have prediction accuracy greater than 70% and RMSE less than 2 CNY under different quantification intervals with different models. We further use Spearman's correlation to reveal that the average stock price and price volatility may have a negative impact on prediction accuracy, which may be helpful for stock investors. Keywords: Predictability; High-frequency financial data; Time series forecasting. § INTRODUCTION The stock market has gradually become an important part of modern society and has a fundamental impact on human life. Stock ownership may help companies access capital more quickly and let investors enjoy the benefits of corporate development, which enables more efficient resource allocation and ultimately drives faster social development. But due to its high-risk, high-return characteristics, stock investment requires effective analysis and reliable prediction, which has important theoretical significance and application value. The efficient market hypothesis (EMH) assumes that the stock market has sound regulation, good function, high transparency and full competition, so that all valuable information is timely, accurately and fully reflected in the share price, including current and future value, unless there is market manipulation. Thus investors cannot obtain excess profits higher than the market's through the analysis of previous prices <cit.>. Unfortunately, such strict conditions make semi-strong efficiency and strong efficiency almost impossible, even in well-capitalised markets <cit.>. For example, the American market is verified to be weakly efficient in both ordinary and special times <cit.>, while the Chinese stock market is not yet an efficient market <cit.>. So this implies that current stock markets in the real world are somehow predictable. Before the 1990s, most studies on financial time series were conducted on daily, weekly or monthly data, which is usually called low-frequency data in the field of financial econometrics <cit.>. In recent years, with the development of computing tools and methods, the cost of data recording and storage has been greatly reduced, making it possible to study financial data with much higher frequency <cit.>.
In the financial markets, data collected in hours, minutes or seconds is high-frequency data, also known as intra-day data. Tick data is one type of high-frequency data that is sampled in seconds and is in fact a snapshot of stock market trading. Stock exchanges send each snapshot to the market in real time, including current price, highest, lowest, trading volume, etc.<cit.>. Up to now, there have been many kinds of prediction models for these types of stock price series data. But the question of how predictable a certain stock market, or all stock markets, actually is remains open, which motivates many researchers to devote considerable effort to this attractive issue. The real entropy has been adopted to measure the predictability of daily stock price data, which is verified to be around 2.75, while the entropy of randomly shuffled variants of the original data reaches 2.90 <cit.>. This validates that the stock time series is not completely random, but weakly time-dependent. Stock price data with one-minute granularity is further validated to have higher predictability than that with one-day granularity <cit.>. Thus we raise two questions: (1) Does higher-frequency data have higher predictability? (2) What is the maximum prediction accuracy of this kind of data? To explore these two issues, this paper adopts the concept of real entropy from the field of human location prediction and applies it to tick-level financial data prediction. Up to now, real entropy has been used to measure the uncertainty in many forecasting fields, such as wireless user mobility prediction <cit.>, wireless traffic prediction <cit.>, taxi demand prediction <cit.>, electronic health records <cit.>, cyberattacks <cit.> and human-interest dynamics <cit.>. All these studies help relevant industries to address the theoretical problem of time series predictability. Thus we use this concept to investigate predictability in the field of financial stock price prediction based on Chinese stock data. The contributions of this work are as follows: * We use real entropy to calculate the predictability of tick data. By using different quantification interval settings, the state spaces of stock time series are established, based on which the entropy and predictability are obtained and analysed. Our result, based on three-second tick data including 3,834 stocks with 230 million records, shows that 74% of stocks have a real entropy of less than 2, which means most stocks in the Shanghai and Shenzhen markets have high predictability. * We evaluate the Markov Chain model and Diffusion Kernel model on tick data and demonstrate that the accuracy of both algorithms still has significant gaps of 0.090 and 0.126 from the upper bound, respectively. We also provide the trade-off of price quantification at T = 0.01 and T = 0.05 using predictability, which helps evaluate the prediction precision and accuracy of stock prices. * We find that the predictability and prediction accuracy of different stocks show substantial inherent differences. We discuss six features related to each stock and perform a correlation analysis with accuracy, including the industry of the company to which the stock belongs, region, company scale, and lifetime of the stock, as well as the average price and historical volatility of the stock price series. The correlation coefficients of stock price and volatility with accuracy are -0.72 and -0.58, indicating a relatively close negative correlation. The remainder of this paper is organized as follows.
Sec.<ref> summarizes the stock price forecasting models of recent years. Sec.<ref> briefly introduces the fundamental concepts and methods used in this research. In Sec.<ref>, we describe the data set used in this experiment. Then, we use two models for prediction, and the results and analyses are presented in Sec.<ref>. Besides, the experimental results are further discussed and we conduct supplementary experiments and analyses in Sec.<ref>. Finally, the conclusion is drawn in Sec.<ref>. § RELATED WORK In recent years, there has been an increasing amount of literature on stock price forecasting. According to their modeling theories, these prediction models can be divided into two categories: one is time series analysis based on statistical principles, and the other is the newer artificial intelligence prediction methods <cit.>. Time series models mainly use statistical tools to extract useful information from historical stock prices to construct the prediction model, typical examples being the generalized autoregressive conditional heteroskedasticity (GARCH) model and the Autoregressive Integrated Moving Average (ARIMA) model. GARCH is widely used for time series prediction <cit.>, while ARIMA is proved to be particularly effective in the case of short-term forecasts <cit.>. Such models are based on the assumption that the investigated financial time series are generated by a linear process, and model the time series in order to predict its future value. However, stock time series data are inherently highly noisy, nonlinear, complex, dynamic, nonparametric and chaotic <cit.>. As such, traditional statistical techniques are not suitable for modeling the complexity and non-stationarity of stock markets, so many researchers use nonlinear artificial intelligence (AI) models or combine statistical methods with these models. These AI models include Support Vector Machine (SVM), Genetic Algorithm (GA), Convolutional Neural Network (CNN), Artificial Neural Network (ANN), Long Short Term Memory Network (LSTM), Fuzzy Logic (FL), etc. Among them, ANN is one of the most frequently used models to deal with nonlinear problems: an ANN model combined with a nonlinear autoregressive model has been shown to have better predictive performance than Exponential GARCH (E-GARCH, a competing asymmetric conditional variance model with superior predictive performance), but the training of neural networks is time-consuming and prone to overfitting <cit.>. CNN is also widely used as a strong benchmark for any innovative machine learning model, but LSTM may be a better choice for prediction accuracy <cit.>. This conclusion is also confirmed by the works called LSTMLI and SSACNN, which use options and futures along with historical stock data as input to LSTM and CNN for prediction <cit.>, and which concluded that LSTMLI has higher accuracy than SSACNN and other models like SVM <cit.>. However, even with improvements to the LSTM framework, a single model can only achieve an accuracy of 55%-60% in predicting the rise and fall of low-frequency data in the stock market. Therefore, most research on stock price prediction models is aimed at improving LSTM or combining various machine learning models, and such hybrids have achieved better results than a single model <cit.>.
However, multi-model hybrid prediction methods require a large number of hyperparameters and considerable effort to optimize, while Automated Machine Learning (AutoML) can solve this problem by automatically finding a suitable machine learning model and optimizing it <cit.>. There has been some research on time series forecasting problems with AutoML systems such as GluonTS <cit.> and AutoAI-TS <cit.>, which enable multiple AI models to automate the time series forecasting process and can achieve better performance than deep learning models on most non-financial datasets. However, due to the drawback of a limited search space in the financial data processing pipeline <cit.>, AutoML does not perform well on datasets in the financial domain; e.g., an AutoML approach named BOHB can obtain higher precision than the manual DL method on the voltage dataset and the methane dataset, but not on the New York Stock Exchange (NYSE) and Johannesburg Stock Exchange (JSE) datasets <cit.>. AutoML for time series is still evolving and requires further effort from researchers to build more adaptable AutoML frameworks that can accommodate datasets in different domains and to conduct empirical studies. Despite the variety of methods available today for forecasting stock data, there is still a lack of theoretical guidance for quantitative analysis of prediction model performance bounds in this field. Therefore, it is necessary to analyse the time series of stocks and calculate the maximum forecasting performance that is theoretically achievable, which can be used to guide the improvement of advanced forecasting models. § CONCEPTS AND METHODS In this section, the fundamental concepts and methods used in this research are briefly introduced. §.§ Real Entropy Entropy generally measures the uncertainty of a random variable. But neither the Shannon entropy nor the random entropy can capture the temporal characteristics of a time series of visited states. The real entropy measurement, in contrast, not only considers the occurrence frequency of states, but also exploits the order in which states occur and the time spent in each state, so that the complete spatial and temporal information of the time series is captured <cit.>. For easy reading, the definition of real entropy is briefly introduced as follows; for more details, please refer to <cit.>. Suppose a historical sequence T={X_1, X_2, …, X_n}; then the real entropy S is defined as S = -∑_T^'⊂T p(T^')log_2 p(T^') where p(T^') represents the probability of finding a subsequence T^' in the trajectory T. For a time series with length n, the real entropy can be estimated by the Lempel-Ziv data compression method as follows: S^est = (1/n∑_i^nΛ_i)^-1ln n where Λ_i is the shortest length k such that the subsequence starting from position i with length k does not appear previously as a continuous subsequence of {X_1, X_2, …, X_i-1}. §.§ Predictability Predictability indicates the probability that an optimal prediction algorithm (theoretically) predicts the future state correctly <cit.>. If the predictability can be calculated, the gap between existing prediction methods and the theoretically optimal algorithm can be recognized. From the estimation of the real entropy by Eq.<ref>, the upper bound of predictability Π_max can be obtained using the Fano inequality, as follows <cit.>: S=H(Π_max)+(1-Π_max)log_2(N-1), with H(Π_max) = -Π_maxlog_2(Π_max)-(1-Π_max)log_2(1-Π_max), where N denotes the number of distinct states appearing in sequence T.
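To make these two quantities concrete, the following is a minimal Python sketch (not the authors' code) that estimates the real entropy of a quantized state sequence via the Lempel-Ziv construction above and then solves the Fano relation numerically for the upper bound Π_max. The function names, the brute-force substring search and the bisection tolerance are our own illustrative choices.

```python
import math

def real_entropy(seq):
    """Lempel-Ziv estimate of the real entropy of a state sequence, in bits per symbol."""
    n = len(seq)
    lambdas = []
    for i in range(n):
        prefix, k = seq[:i], 1
        # Lambda_i: shortest length k such that seq[i:i+k] never appears in seq[:i]
        while i + k <= n and _occurs(prefix, seq[i:i + k]):
            k += 1
        lambdas.append(k)
    # S_est = (1/n * sum(Lambda_i))^(-1) * ln(n); divide by ln(2) to convert nats to bits
    return (n / sum(lambdas)) * math.log(n) / math.log(2)

def _occurs(prefix, sub):
    m = len(sub)
    return any(prefix[j:j + m] == sub for j in range(len(prefix) - m + 1))

def max_predictability(S, N, tol=1e-6):
    """Solve S = H(Pi) + (1 - Pi) * log2(N - 1) for the predictability upper bound Pi_max."""
    def fano(p):
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return h + (1 - p) * math.log2(N - 1) - S
    lo, hi = 1.0 / N + tol, 1.0 - tol   # the root lies between random guessing and certainty
    while hi - lo > tol:                # fano(p) is decreasing in p on this interval
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if fano(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# toy usage on a short quantized last-price sequence
states = [3, 3, 4, 4, 4, 3, 3, 2, 3, 3, 4, 5, 5, 4, 4, 3]
S = real_entropy(states)
print(S, max_predictability(S, N=len(set(states))))
```

On the real tick series one would run this per stock on the quantized last-price sequence; note that the quadratic-time substring search above is only suitable for short sequences and would need a linear-time Lempel-Ziv parsing (e.g., via a suffix automaton) for the tens of thousands of snapshots per stock in dataset A.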
§.§ Prediction Models In this paper, two models, the second-order Markov Chain model and the second-order Diffusion Kernel model, are used to predict the stock price. The Markov Chain (MC) model is a stochastic process in which the future state is independent of the historical states given the current information <cit.>. The k-order Markov Chain model has higher complexity and lower accuracy when k>2, so first-order and second-order MC are usually selected for prediction tasks. The second-order Markov model means that the state of the model at time t is only related to the states at time t-1 and time t-2, and independent of other states. The Diffusion Kernel (DK) model is similar to a neural network prediction model in that gradient descent operations are involved in the calculation, but it does not require large amounts of training data. The idea of the DK model is to map the trajectory into a diffusion process in a continuous space which embeds the visited states. The model is illustrated in Algorithm.<ref>; for more details, please refer to <cit.>. Generally speaking, the prediction performance of the DK model increases with higher orders, but with more computational complexity and cost. In this work, in order to ensure the timeliness of prediction results on 3-second tick data, the second-order DK model is adopted. § DATA SETS This section describes the datasets used in this work and the pre-processing steps applied to them. The architecture workflow is shown in Fig.<ref>. We use two datasets in this work: the tick data of the Shanghai and Shenzhen stock markets in January 2021, and of the Shenzhen stock market in July 2022, which are called dataset A and B, respectively. Tick data records the buy and sell orders in the active order book that the exchange publishes to the market for each stock. The domestic exchanges take a snapshot every three seconds, which is effectively a snapshot of each stock with the current price, highest, lowest, trading volume, transaction amount and other market features over the latest three seconds. Daily continuous bidding periods in the two stock markets include two hours in the morning and two hours in the afternoon, which leads to about 3,800 snapshots per day. There is more than 2 GB of market-wide snapshot data across all stocks per day. Dataset A has a total of 4,147 stocks, and each stock is gathered at a time granularity of three seconds. Among these stocks, 1,800 stocks are from the Shanghai stock market, while the rest are from the Shenzhen stock market. Dataset A includes 18 trading days, and each record includes 56 features such as stock code, time, open price, close price, last-price, etc. The last price of a stock is its newest real-time price as it changes. The stock price fluctuates ceaselessly, so the last price is also dynamic and reflects how the stock is changing. Thus, the last-price of each record is extracted as the last-price time series of all stocks for the following analysis. Dataset B can be downloaded at <https://www.kaggle.com/datasets/chenyueshan/shenzhen-tick-data>. We pick 2,324 stocks in the validation dataset that have the same codes as dataset A in the Shenzhen stock market, and construct a time series for each stock. Due to the difference in data collection platforms, the validation dataset was not collected as frequently as the main dataset, at around 10-second intervals.
This dataset has 21 trading days with 27 features, including time, turnover, volume, etc. In this dataset we chose turnover as the target for prediction. The prediction targets are thus the last-price and the turnover at the next moment. Since tick data takes seconds as the time granularity, we can approximately regard the prices as continuously changing. Consequently, a quantification interval should be adopted to quantify the stock price, so that stock prices within a range are mapped to one state. We can set different quantification intervals according to different prediction demands. Since the precision of the stock price in the data set is 0.01 CNY, the minimum quantification interval we can set is 0.01 CNY, that is, not quantifying the stock sequence at all. We initially set two quantification intervals, 0.01 CNY and 0.05 CNY, respectively. We then apply two preprocessing operations to the quantified data, filtering out stocks whose sequence length is too short or whose state space contains fewer than 10 states. Because these stocks have missing data or have been delisted, there is no actual predictive significance, and we make predictions for the remaining stocks. Fig.<ref> shows the distribution of the number of valid states for the complete sequences of these stocks, including only stocks with a state space size in [0,1000]. Here, a valid state is defined as a state that occurs at least once in the sequence. It indicates that the distributions of the number of valid states and of the real entropy of stocks calculated from the two data sets are similar, so to avoid verbosity, we will mainly analyse dataset A. Obviously, stocks have a smaller state space when they are quantified with larger intervals. Although the X-axis ranges from 0 to 1,000, there are in fact also 209 stocks with more than 1,000 states at the 0.01 CNY quantification interval, and 58 stocks at 0.05 CNY, in dataset A. Such stocks are relatively few and their numbers of valid states are discretely distributed. In order to clearly show the state space size of most stocks, we do not show these stocks in Fig.<ref>, but the data set used in the experiments still includes them. While modeling, we set the first day as the training set and the remaining days as the test set. After training the model with the training set, we update the model after each prediction during the testing phase, so changing the ratio between the training set and the test set has no influence on the prediction result. We use the Lempel-Ziv data compression method to calculate the real entropy of each stock, and the real entropy distribution of the last price is exhibited in Fig.<ref>. In dataset A, about 74% of stocks (T=0.01) and 87% of stocks (T=0.05) have a very low real entropy of less than 2, which indicates that, based on the existing historical price series, the price in the next three seconds of most stocks can be found within fewer than 2^2 = 4 states. This observation implies that the uncertainty of forecasts on the tick data is quite low, thus the stock price may be easy to predict. § PERFORMANCE EVALUATION In this section, we first introduce the performance metrics. Then we use the MC and DK models to predict the stock price with different intervals and observe the gap between the predictability upper bound and the prediction accuracy of the models. We also present the prediction error measured by RMSE and analyze the relationship between prediction accuracy and RMSE.
Finally, considering the impact of stock price, we perform a supplementary experiment in which the quantification interval is chosen according to the price level of each stock price series, and we then evaluate the results by accuracy and by the ratio of DK-RMSE to the stock price. §.§ Performance Metric As performance metrics, we use the accuracy ACC and the Root Mean Squared Error (RMSE), which are defined as follows: ACC = c_test/n_test and RMSE = √(1/n∑_t=1^n (y_t-y_t^')^2), where c_test is the number of states predicted correctly in the test set, n_test is the length of the test set sequence, y_t is the actual stock price at time t, and y_t^' is the predicted price at time t. §.§ Accuracy Once we have the real entropy of each stock, we can obtain the upper bound on predictability. At the same time, we use the two models to forecast the tick data's last price and observe the gap between the two models' prediction accuracy and the upper bound of predictability in this section. Fig.<ref> and Table.<ref> demonstrate the ACC distribution results based on the MC and DK models and the maximum predictability under T = 0.01 and T = 0.05. The most important finding is that neither the MC nor the DK model achieves the theoretical upper bound of ACC, Π_max, at either T = 0.01 or T = 0.05. To be specific, when T = 0.01 the arithmetic mean ACC over all stocks is 0.732 and 0.768 with the DK and MC models respectively, while Π_max is 0.858 in Dataset A. Note that, in this work, only the MC and DK models are verified, but according to existing reports, other prediction models, such as BoF <cit.> and LSTM <cit.>, are also not able to achieve such high accuracy. So, in summary, our finding indicates that the current prediction performance of existing models on tick data in the Chinese market still has room to improve. We then compare the ACC of MC and DK, which indicates that DK's ACC is slightly higher than MC's at both T = 0.01 and T = 0.05. That is because the DK model uses a latent representation rather than the transition matrix, thus overcoming the limitation of the MC model caused by the high-complexity transition matrix in the case of a large state space <cit.>. We also note here that, when T = 0.05, the ACC and the prediction upper bound are higher than those when T = 0.01. The reason for this is quite straightforward: the size of the state space is smaller when T = 0.05 and the entropy is lower, therefore the accuracy and Π_max are higher. §.§ RMSE RMSE is a popular metric to evaluate the performance of stock price prediction models, so we explore the RMSE performance, i.e., the influence of T = 0.01 and T = 0.05 on precision, which is shown in Fig.<ref> and Table.<ref>. Clearly, the RMSE is smaller at T = 0.01 than at T = 0.05 for both the MC and DK models, which is easy to understand, because increasing the quantification interval can improve accuracy, although it leads to a decrease in precision. However, the results of the comparison between the two models are the opposite of what we expected. As can be seen from Fig.<ref>, DK's result is more accurate than MC's, so theoretically its RMSE should be smaller than MC's, but the actual results shown in Fig.<ref> are the opposite. Here we focus our analysis on dataset A. As the figure shows, the RMSE of all stocks at both quantification intervals is less than 0.64 when using the MC model, while the DK model performs significantly worse than MC.
Statistically, there are 215 and 134 stocks whose RMSE is greater than 1 CNY with the DK model at T = 0.01 and T = 0.05, respectively. Among these, two stocks, IMEIK and Kweichow Moutai, have quite large RMSEs, reaching 57 and 89 CNY respectively at T = 0.01. Upon analysis, we noticed that these two stocks have high stock prices in common, at about 700 CNY and 2,000 CNY. We then checked other stocks with large RMSEs and found that they are also traded at relatively high prices. Therefore, we think that the prediction performance of the DK model is related to the stock price, which is verified in Section <ref>. This explains why DK is more accurate than MC but its RMSE is bigger: DK is affected by the stock price while MC is not. When we adjust the quantification interval to T = 0.05, the RMSEs of the high-priced stocks all decrease; for example, IMEIK's and Kweichow Moutai's RMSEs decrease to about 41 and 66 CNY. Therefore, when using the DK model to predict high-priced stocks, we can reduce the RMSE by using a larger quantification interval. From the above analysis, we conclude that the RMSE of the DK model is related to the quantification interval and the stock price. This is also why the difference in MC results between datasets A and B is small, but the difference in DK model results is large: the stocks included in dataset B are all low-priced, with only 29 stocks having an average price of over 100 CNY, so the RMSE for dataset B is smaller than that for dataset A. Hence, when using the DK model, the appropriate quantification interval should be selected based on the stock price. §.§ Correlation of Accuracy and RMSE Fig.<ref> shows the correlation between RMSE and accuracy for the MC model and the DK model on Dataset A. First of all, by comparing the two quantification intervals, we again reach the previous conclusion that setting a larger quantification interval leads to higher prediction accuracy but lower precision. Likewise, we compare the two models and find that MC's RMSE is generally lower than that of the DK model, with DK's RMSE being related to the stock price. However, combining Fig.<ref> and the subplot of Fig.<ref> to observe the overall distribution, we find that the general distribution trend is similar and can be divided into three parts: high accuracy with small RMSE, low accuracy with high RMSE, and low accuracy with low RMSE. It is quite logical that the first two cases appear, and the reasons for them are obvious. We therefore focus on why there are stock prediction results with low accuracy but small RMSE. When quantifying stocks, we consider a certain range of prices as one state. When we calculate the accuracy, we only determine whether the predicted state is consistent with the actual state, while the RMSE is calculated from the distance between states. If the predicted state is near the actual state in each prediction, low accuracy but high precision will occur. Based on the distribution of the samples in Fig.<ref>, we believe that there is a relationship between accuracy and precision, but characterizing it requires complex mathematical modeling and theoretical analysis, which is not covered in the current work. If we could obtain the relationship between them, we could combine it with the theory of calculating predictability from real entropy to derive an upper bound on the prediction precision of the prediction problem, that is, a lower limit on RMSE; this is our prospect for future work.
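Before turning to the supplementary experiment, it may help to see how the quantification interval, the second-order Markov chain predictor and the two metrics interact in code. The following Python sketch is not the authors' implementation: the bin-centre mapping used for RMSE, the fallback to the previous state for unseen contexts, and the toy price series are our own simplifying assumptions, but it follows the train-on-day-one-then-update-after-each-prediction protocol described above.

```python
import math
from collections import defaultdict

def quantize(prices, T):
    """Map each price to a discrete state index using quantification interval T (CNY)."""
    return [int(round(p / T)) for p in prices]

def evaluate_mc2(prices, T, n_train):
    """Second-order Markov chain: predict the next state from the two most recent states,
    updating the transition counts after every prediction in the test phase."""
    states = quantize(prices, T)
    counts = defaultdict(lambda: defaultdict(int))      # (s_{t-2}, s_{t-1}) -> {s_t: count}

    def update(t):
        counts[(states[t - 2], states[t - 1])][states[t]] += 1

    for t in range(2, n_train):                         # "training" on the first part of the series
        update(t)

    hits, sq_err, n_test = 0, 0.0, 0
    for t in range(max(n_train, 2), len(states)):
        ctx = (states[t - 2], states[t - 1])
        successors = counts[ctx]
        pred = max(successors, key=successors.get) if successors else states[t - 1]
        hits += int(pred == states[t])
        sq_err += (pred * T - prices[t]) ** 2           # map the predicted state back to a price
        n_test += 1
        update(t)                                       # online update after each prediction
    return hits / n_test, math.sqrt(sq_err / n_test)    # (ACC, RMSE)

# toy usage: a small periodic price pattern; a coarser T merges prices into fewer states,
# so RMSE grows from quantization error even when the state-level ACC stays high
prices = [10.00 + 0.01 * ((7 * i) % 5) for i in range(2000)]
for T in (0.01, 0.05):
    print(T, evaluate_mc2(prices, T, n_train=500))
```

This also makes the decoupling between accuracy and precision discussed above tangible: ACC only rewards exact state matches, while RMSE rewards predictions that land in a neighbouring bin, so a predictor that is systematically one state off can have low ACC yet keep its RMSE small.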
§.§ Supplementary Experiment Since we concluded in the previous section that the RMSE of the DK model is related to the stock price and the quantification interval, we conducted a supplementary experiment with the DK model, using dataset A, which has more stock samples. Considering the range of the stock price sequence in the training set, the size of the state space is fixed for each stock to obtain an appropriate per-stock quantification interval. In order to investigate what the appropriate size of the state space is, we set the size of the state space to 50, 100, and 150 respectively and conduct the experiment. We find that if the state space is set to 150, with the exception of several stocks, almost all stocks are quantified with T = 0.01. And when the state space is set to 50, the quantification intervals are greater than T = 0.05. As a result, we finally choose a state space of 100 (SP=100 for short) as the appropriate size and compare it with T = 0.01 and T = 0.05. As expected, both the accuracy curve and the Π_max curve for the moderate state space lie between the other two cases, as shown in Fig.<ref>. In particular, we find that the prediction accuracy of stocks at SP = 100 is distributed between 0.58 and 1, unlike the other two cases where there are stocks with accuracy less than 0.5. Similarly, the upper bound of predictability for all stocks is above 0.8, while many stocks have upper bounds in the range of [0.5, 0.8] when T = 0.01 and T = 0.05. Combining the above two findings, we can conclude that fixing the state space size can improve the accuracy and predictability of stocks with low prediction accuracy. As for the RMSE results under the three different quantification schemes, we consider that the same error has different importance for different levels of stock prices. For example, an RMSE of 0.1 CNY has a large impact on stocks with an average stock price of 1 CNY, but this error is insignificant for stocks with an average price of 1,000 CNY. To eliminate this difference, we set up an evaluation metric that uses the ratio of the predicted RMSE of the DK model to the average price of the stock price series, and re-examine the performance of DK in Fig.<ref>. The arithmetic means of the ratios for SP = 100, T = 0.01, and T = 0.05 are 4.35‰, 7.45‰, and 6.58‰, respectively; the SP = 100 result is clearly better than the other two cases. § DISCUSSIONS The above analysis raises one key question: why do some stocks have higher predictability and prediction accuracy while others do not? In this section we therefore try to identify certain characteristics of stocks that indicate whether a stock is easy to predict. We arbitrarily choose six features, category, region, scale, life, avgprice and volatility, related to the stock data and the enterprises behind them; four of them describe the enterprise profile while the other two are price measurements. category represents the industry to which the enterprise belongs. For category, we classify 4,147 enterprises into twenty categories and number them according to Fortune China's new classification standards for Chinese industries. The detailed information on these industries is shown in the appendix. region means the geographical region where the enterprise is located; the 32 regions are numbered by Chinese province in the appendix. scale denotes the number of employees in the enterprise. life represents how many years the enterprise has been traded on the stock market.
In our data set, the minimum life is one year and the maximum is 31 years. Please refer to the appendix for more details. avgprice is the arithmetic mean last-price of the complete historical series, and volatility is a measure of the intensity of price movements. The historical volatility σ is calculated based on the definition<cit.> as follows: σ = √(∑(x_i-x̅)^2/(N-1)) where x_i=ln(P_i/P_i-1) is the log-return, x̅=1/N∑x_i is the mean of the returns, N is the length of the historical price sequence {P_1, P_2, …, P_N}, and P_t is the price at time t. Table.<ref> shows the ANOVA (Analysis of Variance) results used to investigate the effect of regions and categories on stock prediction accuracy. Here, the F value is the statistic of the F test, p indicates the significance of differences, SSB represents the sum of squares between groups, SST is the total sum of squares, and η^2_p = SSB / SST is the effect size (partial eta squared), which represents the magnitude of the difference between groups. The results suggest that, on the Chinese Shanghai and Shenzhen stock exchanges, the prediction accuracy of stock prices may be irrelevant to categories or regions. In other words, the industry and the location of the enterprise might not be useful factors when predicting prices. Fig.<ref> presents the correlation coefficients from a Spearman correlation analysis for the remaining four quantitative features. Normally, a relationship is considered very close for coefficients in [0.7, 1.0], relatively close in [0.4, 0.7], and normal in [0.2, 0.4] <cit.>. The absolute values of the correlation coefficients of both avgprice and volatility with ACC are above 0.4, indicating a relatively close correlation, while life and scale are not as relevant. Fig.<ref> further presents details of the impact of avgprice and volatility on the accuracy. We find that, in the two subplots, the average accuracy of each group of stocks gradually decreases as the index increases, while the standard deviation of the prediction accuracy of such stocks gradually increases, which verifies that price and volatility are indeed strongly and negatively correlated with accuracy. § CONCLUSIONS This paper calculates the predictability of stock prices for tick data of the Shanghai and Shenzhen stock markets. Using the MC and DK models for forecasting, we come to the conclusion that there is still a gap between the existing models and the theoretically optimal forecasting accuracy, implying that the Chinese stock market is predictable and there is still space to improve current models. In addition, we also discuss the attributes that affect the prediction accuracy and find that the average price and price volatility of a stock have a relatively strong relationship with accuracy. These results may provide reference value for researchers studying tick data and for investors in the Chinese stock market. There are some aspects of this work that can be improved and extended. We may optimize the models in the future and add more neural network models for comparison. Using mathematical models to represent the relationship between prediction accuracy and RMSE is also one of the goals of our future work. Apart from that, we only considered six profiles when discussing the factors affecting accuracy; there may be other characteristics that we have not tapped into. § DETAILS IN DISCUSSION In this appendix, we provide the details of five features related to the stock data and the enterprises behind them in dataset A.
We classify the industries of the enterprises to which the stocks belong according to the new China Industry Classification Standard published by Fortune China. This standard has 23 categories, and the stocks in our data set cover only 20 of them; the classification details and the distribution of industries are shown in Table.<ref>. Enterprise scale refers to the number of employees, and we quantize this number into bins that are as even as possible, as shown in Table.<ref>. Region indicates the province or equivalent region where the enterprise is located, including 22 provinces, 4 municipalities directly under the Central Government and 5 autonomous regions, as well as the Hong Kong, Macao and Taiwan regions of China (referred to as Outbound), as listed in Table.<ref>. Life indicates how many years the company has been listed on the stock market by 2021, with the shortest being 1 year and the longest being 31 years. For non-discrete values like stock price and volatility, we classify them according to their quantitative distribution, as detailed in Table.<ref> and Table.<ref>. All features were normalized before correlation analysis.
Table: Region List of the Enterprise
Index  Region           Number
1      Anhui            126
2      Beijing          377
3      Fujian           151
4      Gansu            34
5      Guangdong        673
6      Guangxi          38
7      Guizhou          30
8      Hainan           33
9      Hebei            62
10     Henan            87
11     Heilongjiang     39
12     Hubei            113
13     Hunan            116
14     Jilin            44
15     Jiangsu          479
16     Jiangxi          57
17     Outbound         3
18     Liaoning         75
19     Inner Mongolia   25
20     Ningxia          14
21     Qinghai          10
22     Shandong         228
23     Shanxi           40
24     Xi'an            58
25     Shanghai         338
26     Sichuan          136
27     Tianjin          60
28     Xizang           20
29     Xinjiang         57
30     Yunnan           38
31     Zhejiang         521
32     Chongqing        56
Table: Quantification of Stock Price
Index  Price (CNY)  Number
1      0-3          320
2      3-4          301
3      4-5          304
4      5-6          317
5      6-8          475
6      8-10         374
7      10-13        371
8      13-17        381
9      17-25        400
10     25-50        525
11     50+          371
Table: Quantification of Volatility
Index  Volatility    Number
1      0-0.01        500
2      0.01-0.016    491
3      0.016-0.021   459
4      0.021-0.027   516
5      0.027-0.033   480
6      0.033-0.04    486
7      0.04-0.05     503
8      0.05-0.07     449
9      0.07+         236
§ ACKNOWLEDGMENT This work was partially supported by the Key Program of the Natural Science Foundation of China under Grant 61631018 and by Huawei Technology Innovative Research. Yueshan Chen also thanks Miss Fan Zhang for data preprocessing. GluonTS Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C. Maddix, Syama Rangapuram, David Salinas, Jasper Schulz, Lorenzo Stella, Ali Caner Türkmen, and Yuyang Wang. Gluonts: Probabilistic and neural time series modeling in python. Journal of Machine Learning Research, 21(116):1–6, 2020. autoML A. Alsharef and K. Aggarwal. Review of ml and automl solutions to forecast time-series data. Archives of Computational Methods in Engineering, 2022. andersen1998answering Torben G Andersen and Tim Bollerslev. Answering the skeptics: Yes, standard volatility models do provide accurate forecasts. International economic review, 39:885–905, 1998. 2012Using J. M. Azevedo, A. Rui, and P. Almeida. Using data mining with time series data in short-term stocks prediction: A literature review. International Journal of Intelligence Science, 2(4A):176–180, 2012. 2011A S. Chakravarty and P. K. Dash. A pso based integrated functional link net and interval type-2 fuzzy logic system for predicting stock market indices. Applied Soft Computing Journal, 12(2):931–941, 2011. chen2015spatiotemporal Yu-Zhong Chen, Zi-Gang Huang, Shouhuai Xu, and Ying-Cheng Lai.
Spatiotemporal patterns and predictability of cyberattacks. PloS one, 10(5):e0124472, 2015. dahlem2015predictability Dominik Dahlem, Diego Maniloff, and Carlo Ratti. Predictability bounds of electronic health records. Scientific reports, 5(1):1–9, 2015. 2021ANN R.L. D’Ecclesia and D Clementi. Volatility in the stock market: Ann versus parametric models. Annals of Operations Research, 299:1101–1127, 2021. 1965Fama E. F. Fama. The behaviour of stock market prices. Journal of Business, 38:34–105, 1965. 2021Artificial Fgdc Ferreira, A. H. Gandomi, and Rtn Cardoso. Artificial intelligence applied to stock market trading: A review. IEEE Access, 9:30898–30917, 2021. fiedor2014frequency Paweł Fiedor. Frequency effects on predictability of stock returns. In 2014 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), pages 247–254. IEEE, 2014. fischer2018deep Thomas Fischer and Christopher Krauss. Deep learning with long short-term memory networks for financial market predictions. European Journal of Operational Research, 270(2):654–669, 2018. gagniuc2017markov Paul A Gagniuc. Markov chains: from theory to implementation and experimentation. John Wiley & Sons, 2017. grossman1980impossibility Sanford J Grossman and Joseph E Stiglitz. On the impossibility of informationally efficient markets. The American economic review, 70(3):393–408, 1980. 2021Can Junyao Guo, Yineng Chen, Jinkang Zhu, and Sihai Zhang. Can we achieve better wireless traffic prediction accuracy? IEEE Communications Magazine, 59(8):58–63, 2021. 2011COMPARISON J. Hauke and T. Kossowski. Comparison of values of pearson's and spearman's correlation coefficients on the same sets of data. Quaestiones Geographicae, 30(2):87–93, 2011. 2019Literature B. M. Henrique, V. A. Sobreiro, and H. Kimura. Literature review: Machine learning techniques applied to financial market prediction. Expert Systems with Applications, 124:226–251, 2019. 2020Forcasting Vasyl Hryhorkiv, Lesia Buiak, Andrii Verstiak, Mariia Hryhorkiv, Oksana Verstiak, and Kateryna Tokarieva. Forecasting financial time sesries using combined arima-ann algorithm. In 2020 10th International Conference on Advanced Computer Information Technologies (ACIT), pages 455–458, 2020. HOBO Kouame Kouassi and Deshendran Moodley. Automated deep learning for trend prediction in time series data. In 2021 IEEE 24th International Conference on Information Fusion (FUSION), pages 1–8, 2021. 2021survey G. Kumar, S. Jain, and U.P. Singh. Stock market forecasting using computational intelligence: A survey. Archives of Computational Methods in Engineering, 28:1069–1101, 2021. 2018Which C. Lin, Z. Qiao, M. Wang, W. Chao, R. Du, and S. H. Eugene. Which artificial intelligence algorithm better predicts the chinese stock market? IEEE Access, 6:48625–48633, 2018. liu2019diffusion Lu Liu, Sihai Zhang, Wuyang Zhou, Wei Cai, and Qimei Cui. Diffusion kernel based mobility prediction for wireless users. In 2019 IEEE Int. Conf. on Pervasive Intelligence and Computing, pages 872–875. IEEE, 2019. 2020Financial S. Luo and C. Tian. Financial high-frequency time series forecasting based on sub-step grid search long short-term memory network. IEEE Access, 8:203183–203189, 2020. nabipour2020deep Mojtaba Nabipour, Pooyan Nayyeri, Hamed Jabani, Amir Mosavi, Ely Salwana, et al. Deep learning for stock market prediction. Entropy, 22(8):840, 2020. nagy2012partitional Gabor I Nagy and Krisztian Buza. Partitional clustering of tick data to reduce storage space. 
In 2012 IEEE 16th International Conference on Intelligent Engineering Systems (INES), pages 555–560. IEEE, 2012. 2007On N. Navet and S. H. Chen. On predictability and profitability: Would ai induced trading rules be sensitive to the entropy of time series? Scandinavian Journal of Infectious Diseases, 100(4):343–346, 2007. passalis2020temporal Nikolaos Passalis, Anastasios Tefas, Juho Kanniainen, Moncef Gabbouj, and Alexandros Iosifidis. Temporal logistic neural bag-of-features for financial time series forecasting leveraging limit order book data. Pattern Recognition Letters, 136:183–189, 2020. 2015Predicting J. Patel, S. Shah, P. Thakkar, and K. Kotecha. Predicting stock and stock price index movement using trend deterministic data preparation and machine learning techniques. Expert Systems with Applications, 42(1):259–268, 2015. 2003Research K. E. Rong and T. Peng. Research on the validity of chinese stock market. Journal of Shanghai Maritime University, 03:277–280, 2003. sanchez2020testing MA Sánchez-Granero, KA Balladares, JP Ramos-Requena, and JE Trinidad-Segovia. Testing the efficient market hypothesis in latin american stock markets. Physica A: Statistical Mechanics and its Applications, 540:123082, 2020. AutoAI-TS Syed Yousaf Shah, Dhaval Patel, Long Vu, Xuan-Hong Dang, Bei Chen, Peter Kirchner, Horst Samulowitz, David Wood, Gregory Bramble, Wesley M. Gifford, Giridhar Ganapavarapu, Roman Vaculin, and Petros Zerfos. Autoai-ts: Autoai for time series forecasting. In Proceedings of the 2021 International Conference on Management of Data, SIGMOD '21, page 2584–2596, New York, NY, USA, 2021. Association for Computing Machinery. song2010limits Chaoming Song, Zehui Qu, Nicholas Blumm, and Albert-László Barabási. Limits of predictability in human mobility. Science, 327(5968):1018–1021, 2010. GECCO Abhiram Tirumala, Rishi Bhatnager, Sriram Mudireddy, Pranav Manjunath, and Jason Zutty. Designing a novel and high performance algorithmic trading model using evolutionary automl and technical analysis. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22, page 312–315, New York, NY, USA, 2022. Association for Computing Machinery. weng2017stock Bin Weng, Mohamed A Ahmed, and Fadel M Megahed. Stock market one-day ahead movement prediction using disparate data sources. Expert Systems with Applications, 79:153–163, 2017. SSACNN Jimmy Ming-Tai Wu, Zhongcui Li, Gautam Srivastava, Meng-Hsiun Tasi, and Jerry Chun-Wei Lin. A graph-based convolutional neural network stock price prediction with leading indicators. Software: Practice and Experience, 51(3):628–644, 2021. LSTMLI JM. Wu, L. Sun, G. Srivastava, and JC. Lin. A long short-term memory network stock price prediction with leading indicators. Big Data, 9(5):343–357, 2021. 2017QuantCloud P. Zhang, K. Yu, J. Yu, and S. Khan. Quantcloud: Big data infrastructure for quantitative finance on the cloud. IEEE Transactions on Big Data, 4(3):368–380, 2017. 2016taxi Kai Zhao, Denis Khryashchev, Juliana Freire, Cláudio Silva, and Huy Vo. Predicting taxi demand at high spatial resolution: Approaching the limit of predictability. In 2016 IEEE International Conference on Big Data, pages 833–842, 2016. 2013Emergence Z. D. Zhao, Z. Yang, Z. Zhang, T. Zhou, Z. G. Huang, and Y. C. Lai. Emergence of scaling in human-interest dynamics. Scientific Reports, 3(12):3472, 2013.
http://arxiv.org/abs/2307.02862v1
20230706085753
A Critical Look at the Current Usage of Foundation Model for Dense Recognition Task
[ "Shiqi Yang", "Atsushi Hashimoto", "Yoshitaka Ushiku" ]
cs.CV
[ "cs.CV" ]
A Critical Look at the Current Usage of Foundation Model for Dense Recognition Task Shiqi Yang^1,2 (work done during an internship at OMRON SINIC X), Atsushi Hashimoto^3, Yoshitaka Ushiku^3 ^1 Computer Vision Center, Bellaterra, Spain ^2 Department of Computer Science, Universitat Autònoma de Barcelona, Bellaterra, Spain ^3 OMRON SINIC X, Tokyo, Japan [email protected], {atsushi.hashimoto, yoshitaka.ushiku}@sinicx.com August 1, 2023 ================================================================================ In recent years, large models trained on huge amounts of cross-modality data, usually termed foundation models, have achieved conspicuous accomplishments in many fields, such as image recognition and generation. Though they achieve great success in their original application cases, it is still unclear whether those foundation models can be applied to other, different downstream tasks. In this paper, we conduct a short survey of the current methods for discriminative dense recognition tasks that are built on pretrained foundation models. We also provide some preliminary experimental analysis of an existing open-vocabulary segmentation method based on Stable Diffusion, which indicates that the current way of deploying the diffusion model for segmentation is not optimal. This aims to provide insights for future research on adopting foundation models for downstream tasks. § INTRODUCTION In the last decades, deep models trained with large amounts of labeled data have come to rank at the top in almost all computer vision tasks. Besides the achievements in supervised learning tasks, other research lines improve generalization and universality, such as self-supervised learning <cit.>, which empowers the model with strong representation learning capacity using only unlabeled data; open-set or open-world learning, which endows the model with the ability to either reject <cit.> or distinguish <cit.> novel categories; and domain generalization <cit.> or domain adaptation <cit.>, which improves the model's generalization to test data of different distributions, to name a few. More recently, training models with abundant cross-modality data has become more popular. For example, CLIP <cit.> is a visual-language model trained with a huge amount of image-text paired data via a contrastive learning objective. Due to the learned image-language paired representation, with text prompts provided at inference time, the model excels at zero-shot recognition. SAM <cit.> is a general category-agnostic segmentation/localization solution which supports several types of prompts; it is capable of segmenting whole objects or object parts of any shape. ImageBind <cit.> learns a joint embedding space across six different modalities, with the visual space as the intermediate embedding space, and it is a strong pipeline for cross-modality recognition tasks. Besides large models for discriminative tasks, diffusion-based[In this report, we regard (text-to-image) diffusion models also as a kind of foundation model.] image generation is another emerging hot research topic. Stable Diffusion <cit.> is one of the most popular methods in both academic and non-academic communities.
The pretrained Stable Diffusion can be easily adapted to personalized data, for both image generation and editing, by fine-tuning part of the model <cit.> or conducting some processing inside the fixed model <cit.>. Originally designed for the text-to-image generation task, it can also be easily extended to other conditional image generation tasks <cit.>, such as depth-to-image and sketch-to-image generation/translation. With the popularity of those foundation models, a natural question arises: can those pretrained models, which are originally for image recognition or generation, be applied to other downstream tasks? As these models are trained with huge amounts of data and possess strong zero-shot recognition ability or good feature representations, the learned knowledge is expected to also facilitate other downstream tasks. This provides the possibility of using a unified model for different tasks, which could have high practical value in real-world applications. In this paper, we conduct a short survey on utilizing pretrained foundation models for downstream tasks. We mainly focus on the segmentation task, since segmentation information is also useful for other tasks such as detection and localization. § UTILIZING FOUNDATION MODEL FOR DOWNSTREAM TASK In the first part, we focus on typical discriminative foundation models for downstream tasks. In the second part, we examine some current methods utilizing Stable Diffusion for downstream discriminative tasks. §.§ Visual-Language Model Large vision-language models, such as CLIP <cit.> and ALIGN <cit.>, are trained with image-language pairs via contrastive learning. Due to their strong zero-shot image recognition performance, a new research line dubbed open-vocabulary detection/segmentation has emerged, aiming to introduce open-category recognition ability into the detection or segmentation tasks. Early works on open-vocabulary segmentation such as LSeg <cit.> directly transform the vision-language classification model into a segmentation pipeline. More specifically, LSeg directly predicts the category of each pixel embedding with the text embeddings, without introducing any extra mask generator module. MaskCLIP <cit.> first shows that the value (V) embeddings output by the CLIP visual part can be used as mask proposals for segmentation; together with the text embeddings as the classifier weights, the CLIP pipeline can directly output segmentation masks. It then further introduces Mask2Former <cit.> to improve the results, which is trained in a self-training manner with the predicted segmentation masks. Recent works <cit.> follow a similar pipeline, which typically has two parts: the first part is a transformer-based mask proposal network and the second part is CLIP, which provides the open-vocabulary prediction. There are also a few methods elegantly unifying these two parts; for example, ZegCLIP <cit.> and SAN <cit.> directly adopt CLIP as the main backbone (feature extractor) and add a lightweight mask generator which takes input features from CLIP. Since the pipeline with Mask2Former usually takes a longer training time, the methods such as MaskCLIP (the variant without the extra mask generator) have fewer parameters and can also achieve better performance, which could be a baseline for future research. §.§ Text-to-image Diffusion Model Diffusion models are another research hotspot in recent years.
The most successful application is text-to-image generation, by fine-tuning or directly utilizing the pretrained diffusion model, where Stable Diffusion <cit.> is one of the most widely deployed diffusion models. Since text-to-image generation models[If not specified, the (text-to-image) diffusion model refers to Stable Diffusion in the subsequent sections.] such as Stable Diffusion are trained with a large amount of image-text pairs just like CLIP, a natural question is whether these cross-modality generative models can be applied to discriminative tasks. As some pioneering works <cit.> show that the features inside the diffusion model already carry rich semantic and localization information, the pretrained diffusion model has the potential to be extended to other discriminative tasks. There are already a few works trying to utilize the text-to-image diffusion model for downstream tasks. Some methods <cit.> transform the text-to-image diffusion model into a zero-shot classification model that is competitive with CLIP, by obtaining the posterior classification scores based on the noise predicted during the denoising process. Other methods such as ODISE <cit.> and VPN <cit.> utilize the UNet features in the diffusion model for downstream tasks such as segmentation and depth estimation. In the following we focus on the segmentation task. In ODISE and VPN, the diffusion model only provides features, which are then fed into a subsequent mask generator network such as Mask2Former <cit.> or LAVT <cit.>. Both methods adopt only one time step for the diffusion model, and VPN does not add noise to the latent vector while ODISE does. In ODISE, an extra learnable module called the implicit captioner is proposed to provide the textual embedding to the UNet. VPN also utilizes a similar module denoted as the text adapter, as well as cross-attention maps to be combined with multi-level UNet features. Although these methods achieve good performance in the downstream tasks, we question the efficiency of this naive way of directly using UNet features with one time step. In fact, the ablation study in VPN already shows limited improvement from the extra text adapter and cross-attention features, which indicates that this naive way does not fully exploit the diffusion model for segmentation. This conclusion also holds for the implicit captioner in ODISE. § EXPERIMENTAL ANALYSIS In this section, we first show that the pretrained visual-language model, more specifically CLIP, has the potential to be directly extended to other downstream tasks. Then, we show that the current methods using the text-to-image diffusion model are not efficient due to the naive way of deploying the pretrained diffusion model as a feature extractor. §.§ Visual-Language Model We choose the widely used visual-language model CLIP for analysis. We visualize the CLIP visual features under the weakly supervised segmentation task <cit.>, where every image is provided with its ground-truth class labels. We adopt Grad-CAM <cit.> for visualization[We follow CLIP-ES <cit.> and use the features before the last attention layer to compute the CAM.]. As the text prompt input to the CLIP language part, we only use 4 classes here: trees/palms, car, building and windows. The format of the text prompt is "a photo of classname". The visualization is shown in Fig.
<ref>, which indicates that directly using CLIP features is enough to achieve good localization or segmentation, and that prompt engineering, i.e., the choice of text prompt, is also important for achieving better results. In Fig. <ref>, we further show that simply applying a binary threshold on the Grad-CAM can lead to refined segmentation. These findings that CLIP visual features already carry localization and semantic information show the potential for extension to other discriminative tasks. Fully investigating such localization ability of CLIP for segmentation or other tasks has not been widely studied yet in the community. §.§ Text-to-image Diffusion Model Here we conduct a detailed analysis of ODISE, which is an open-vocabulary segmentation method based on Stable Diffusion. In ODISE, the image is fed into the diffusion model with noise added once, and the features from the encoder-decoder of the VQGAN, along with the features from the UNet, are used by the subsequent Mask2Former for mask proposal. Unlike the original diffusion model, which achieves image generation through denoising over multiple time steps, the UNet features (with one time step) used in ODISE may have poor quality in terms of semantic and localization information, as a recent method <cit.> hypothesizes that the denoising process is a coarse-to-fine synthesis over multiple time steps. To verify this, we visualize the cross attention in different scenarios, as shown in Fig. <ref>. In Fig. <ref>, we first deploy Stable Diffusion for normal text-to-image generation with the text prompt 'a horse on the grass' and visualize the cross attention corresponding to the token 'horse' in the last time step; we find that these attention maps localize the object fairly accurately. Then we send the generated image back to the diffusion model with noise added once, just like ODISE does, and again visualize the cross attention of the token 'horse'. It turns out that the resulting attention maps become blurry and less accurate for localization, compared to those obtained during the generation process. This attention degradation phenomenon may be even more severe when using real images as in ODISE. Since the UNet features used by ODISE, which are fed to Mask2Former for mask proposal, are directly related to the cross attention, the attention degradation may deteriorate the segmentation performance. We also directly visualize the UNet features by k-means clustering in Fig. <ref>; it shows that in some cases the UNet features indeed have poor semantic and localization information, as shown in Fig. <ref> (right). This finding indicates the necessity of the denoising process to obtain high-quality features containing better semantic and localization information. We also conduct an ablation study of ODISE. In ODISE, there are a diffusion model (with noise added once) and an implicit captioner module, whose output is utilized as the textual/conditional embedding and combined with the null text embedding via summation. The features from the encoder-decoder of the VQGAN inside the diffusion model, as well as the features from the UNet, are sent to Mask2Former for mask proposal. In Tab. <ref>, we ablate several modules in ODISE, and it turns out that directly using UNet features with the null text embedding (ODISE w/o IC) already achieves decent performance, and the performance gain from the implicit captioner is relatively limited. A minimal sketch of this single-step feature extraction setup is given below.
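To make the single-step setup concrete, the following sketch (our own illustration, not the ODISE implementation; the checkpoint name and helper name are assumptions) adds noise to the latent once and runs one UNet forward pass with either a text prompt or the empty (null) prompt as conditioning, using the diffusers and transformers libraries. ODISE additionally hooks multi-scale intermediate UNet blocks and uses its implicit captioner, both of which are omitted here.

import torch
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

@torch.no_grad()
def single_step_unet_features(image, prompt="", t=100):
    # image: (1, 3, H, W) tensor scaled to [-1, 1]
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    timestep = torch.tensor([t], dtype=torch.long)
    noisy_latents = scheduler.add_noise(latents, noise, timestep)  # noise added once
    tokens = tokenizer([prompt], padding="max_length",
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    cond = text_encoder(tokens.input_ids)[0]  # empty prompt -> null text embedding
    # One forward pass; here we return the final UNet output, whereas ODISE/VPN
    # actually hook intermediate UNet feature maps.
    return unet(noisy_latents, timestep, encoder_hidden_states=cond).sample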
Note that not using the implicit captioner means having only the null text embedding in the UNet (the unconditional embedding), which is not the intended usage of a text-to-image diffusion model with conditional and unconditional embeddings; it does not exploit the language-related information in Stable Diffusion. Moreover, in Tab. <ref>, ODISE, which utilizes CLIP, a diffusion model, and Mask2Former, is still inferior to SAN, which only uses CLIP and a lightweight decoder network. This indicates that the current way of using the diffusion model in ODISE is relatively naive and leaves room for improvement. § CONCLUSION In this paper, we investigate some recent works on using foundation models for downstream tasks. Features from both the discriminative model CLIP and the generative model Stable Diffusion, which are trained with a large amount of cross-modality data pairs, already contain semantic and localization information and could be deployed for other discriminative tasks. Although achieving strong performance, the current way of using the diffusion model for downstream tasks is not efficient. We hope this report can provide some insights for future research.
http://arxiv.org/abs/2307.14345v1
20230705122925
NOMA for STAR-RIS Assisted UAV Networks
[ "Jiayi Lei", "Tiankui Zhang", "Xidong Mu", "Yuanwei Liu" ]
cs.IT
[ "cs.IT", "cs.ET", "eess.SP", "math.IT" ]
NOMA for STAR-RIS Assisted UAV Networks Jiayi Lei, Tiankui Zhang, Senior Member, IEEE, Xidong Mu, Yuanwei Liu, Senior Member, IEEE This work was supported by the National Key Research and Deployment Program of China under Grant 2019YFC1511302. Jiayi Lei and Tiankui Zhang are with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: {leijiayi,zhangtiankui}@bupt.edu.cn). Xidong Mu and Yuanwei Liu are with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. (e-mail: {x.mu, yuanwei.liu}@qmul.ac.uk). August 1, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This paper proposes a novel simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) assisted unmanned aerial vehicle (UAV) non-orthogonal multiple access (NOMA) emergency communication network. Multiple STAR-RISs are deployed to provide additional and intelligent transmission links between trapped users and the UAV-mounted base station (BS). Each user selects the nearest STAR-RIS for uploading data, and NOMA is employed for users located on the same side of the same STAR-RIS. Considering the practical requirements of post-disaster emergency communications, we formulate a throughput maximization problem subject to constraints on the minimum average rate and maximum energy consumption, where the UAV trajectory, STAR-RIS passive beamforming, and time and power allocation are jointly optimized. Furthermore, we propose a Lagrange based reward constrained proximal policy optimization (LRCPPO) algorithm, which provides an adaptive method for solving the long-term optimization problem with cumulative constraints. Specifically, using Lagrange relaxation, the original problem is transformed into an unconstrained problem with a two-layer structure. The inner layer is solved by a penalized-reward based proximal policy optimization (PPO) algorithm. In the outer layer, the Lagrange multipliers are updated by gradient descent. Numerical results show that the proposed algorithm can effectively improve network performance while satisfying the constraints well. They also demonstrate the superiority of the proposed STAR-RIS assisted UAV NOMA network architecture over the benchmark schemes employing reflecting-only RISs and orthogonal multiple access. Emergency communication, resource allocation, simultaneously transmitting and reflecting reconfigurable intelligent surface, unmanned aerial vehicle. § INTRODUCTION Large-scale natural disasters, such as earthquakes and hurricanes, often inflict serious and unpredictable losses of life and property.
The communication network plays a vital role in post-disaster rescue and recovery. However, conventional terrestrial communication infrastructures may be severely damaged during a disaster <cit.>. For example, over 85% of the cell towers in the affected area were inoperative during Hurricane Harvey in the United States in 2017. In such scenarios, a temporary emergency communication network needs to be quickly established to restore connectivity and provide fast responses to emergency requests <cit.>. Considering the poor geographical circumstances and extreme urgency of post-disaster rescue missions, the emergency communication network has to be easily deployed, low-cost, and high-capacity. Due to the characteristics of flexible deployment, affordable cost, and a higher probability of line-of-sight (LoS) communication, the unmanned aerial vehicle (UAV)-based communication network has been one of the most promising candidates for emergency communications <cit.>. Nevertheless, there are still many technical bottlenecks for deploying UAVs as flying base stations (BSs) in disaster areas, including limited radio resources, adaptive trajectory design, and constrained UAV battery capacity. Insufficient battery capacity is the most intractable issue among the above challenges, as it has a significant negative impact on the energy efficiency and coverage of the UAV, especially in the case of wide post-disaster areas <cit.>. As a result, there exists an urgent need for employing new technologies to make up for the drawbacks of UAVs and fully unleash the potential of UAV-based communications <cit.>. As a promising technology, reconfigurable intelligent surfaces (RISs), also known as intelligent reflecting surfaces (IRSs), have received significant attention <cit.>. They are capable of significantly improving the performance of wireless communication networks by smartly reconfiguring the wireless propagation channel between the transmitter and the receiver. Conventional RISs mainly focus on reflecting the incident signal, so the transmitter and the receiver have to be located on the same side of the RIS. As a result, to face all users, conventional reflecting-only RISs must be deployed at the edge of the region of interest, which greatly limits the flexibility and efficiency of RISs. To address this issue, a novel simultaneously transmitting and reflecting RIS (STAR-RIS) is proposed in <cit.>, which is also named an intelligent omni-surface (IOS) in some works <cit.>. Unlike reflecting-only RISs, the incident wireless signals can be transmitted and reflected by the STAR-RIS simultaneously. With this important characteristic, enhanced degrees of freedom (DoFs) for signal propagation manipulation are provided. The STAR-RIS can serve all users at any location, and a full-space smart radio environment can be created. It has been confirmed in <cit.> that the coverage of a STAR-RIS is 1.5 times as large as that of a reflecting-only RIS. Therefore, it is more efficient to employ STAR-RISs instead of reflecting-only RISs in wireless communication networks. Furthermore, employing the non-orthogonal multiple access (NOMA) technology in RIS or STAR-RIS assisted networks is a notable research topic. On the one hand, NOMA allows multiple users to simultaneously occupy the same time-and-frequency resource <cit.>, thereby achieving user fairness, massive connectivity, and spectral efficiency enhancement in RIS assisted networks.
On the other hand, by increasing the design flexibility and smartly changing the transmission channel, RISs, especially STAR-RISs, introduce important benefits in NOMA networks. The win-win effect of combining STAR-RIS and NOMA has been shown in several studies <cit.>. Thanks to the advantages of easy deployment, low cost, low power consumption, and a 360^∘ smart radio environment, it is a feasible, economical, and effective scheme to employ STAR-RISs in UAV-based emergency communication networks. On the one hand, deploying STAR-RISs on-site by professionals after disasters is realizable for the following three reasons. Firstly, compared with traditional communication devices that require computer rooms, a STAR-RIS requires almost no additional equipment besides its own panel, making the deployment simple. Secondly, a STAR-RIS is small in size, light in weight, and can be mounted on the surface of buildings, making site selection easy. Thirdly, a STAR-RIS can be composed of small panels, which are easy to expand and flexible to deploy. For disaster areas inaccessible to emergency personnel, deploying STAR-RISs would be more difficult. However, there are still solutions, such as employing unmanned emergency vehicles equipped with STAR-RISs, manipulating UAVs to drop STAR-RISs at designated positions, and so on. On the other hand, employing STAR-RISs in UAV emergency networks is economical and efficient. First of all, as mentioned above, the hardware cost and deployment cost of STAR-RISs are both low. In addition, the power consumption of the STAR-RIS is negligible, making it extremely suitable for post-disaster scenarios where energy is scarce. What's more, by creating a full-space smart radio environment, STAR-RISs can make up for the short endurance and small coverage of a single UAV, thereby improving the network performance. It is worth mentioning that deploying multiple UAV base stations is one of the means for emergency communications. However, without doubt, it brings higher cost and energy consumption, and more complex management and design, than deploying multiple STAR-RISs. Driven by the aforementioned issues, this paper aims to investigate STAR-RIS assisted UAV emergency networks with NOMA. In post-disaster scenarios, there are mainly three challenges for the STAR-RIS assisted UAV emergency networks. First of all, with the harsh electromagnetic environment and scarce radio resources after disasters, the UAV trajectory design and the radio resource allocation are very important for system performance improvement. Secondly, in order to make full use of the STAR-RISs, the passive beamforming design and STAR-RIS deployment are key topics that deserve investigation. Last but not least, compared with general communication scenarios, there are more stringent constraints and requirements in emergency communications. On the one hand, because of the paralysis of the ground power system after disasters <cit.>, the available energy for the trapped user equipment (UE) is severely limited, which brings an additional challenge for efficient communications. On the other hand, to ensure post-disaster rescue for each user, the emergency communications have a higher requirement on the minimum user transmission rate.
In summary, it is a challenging but meaningful research topic to improve communication efficiency while satisfying the constraints on both UE rate and UE energy in STAR-RIS assisted UAV emergency communication networks, where the UAV trajectory, STAR-RIS beamforming, and resource scheduling have to be carefully designed. §.§ Related Works * UAV-based Communication Networks: During past years, UAVs have been applied in various communication scenarios and have played different roles <cit.>. UAVs in <cit.> acted as mobile relay nodes for throughput improvement. In <cit.>, a UAV equipped with an edge computing server was deployed to cope with computation-intensive tasks. In <cit.>, the UAV was used to charge the ground sensor users and transmit information simultaneously. There is also some research focusing on the application of UAVs in post-disaster emergency communications <cit.>. Three UAV-assisted emergency communication schemes in different disaster scenarios were proposed in <cit.>. Specifically, with surviving ground base stations, the flight trajectory and communication scheduling of UAVs were jointly optimized. In the scenarios without ground BSs, a UAV assisted Device-to-Device (D2D) communication system was studied. For the information exchange between disaster areas and the outside, the hovering positions of UAV relays were optimized. In <cit.>, in order to maximize the spectrum efficiency of the UAV-assisted emergency networks, the MBS power allocation, UAV zone selection and user scheduling were jointly optimized by a Deep-Q-Network-based algorithm. An adaptive UAV deployment scheme was proposed in <cit.> to achieve the maximal coverage for ground nodes after disasters. Most of these existing works related to UAV-based emergency communications focused on the UAV energy consumption, while the limited energy of the trapped users after disasters was ignored. * (STAR-)RIS-Assisted Wireless Networks: Recently, RIS-assisted wireless communication networks have been extensively studied <cit.>. In <cit.>, the channel estimation for the RIS-enhanced orthogonal frequency division multiplexing (OFDM) system was investigated. A RIS-aided massive MIMO system was studied in <cit.>, where the active beamforming at the multi-antenna access point and the passive reflecting beamforming at the RIS were jointly optimized. In <cit.>, the RIS was deployed in a mobile edge computing (MEC) system to reduce the average computation latency. As the incident signals can only be reflected by conventional RISs, in these RIS assisted networks, all users were located on one side of the RIS and the access point (or base station) was assumed to be on the same side as the users. To break through this assumption, increasing attention has been paid to STAR-RIS, and some studies on STAR-RIS assisted communication networks have been carried out <cit.>. The authors of <cit.> proposed three practical operating protocols for the STAR-RIS, namely energy splitting, mode switching, and time switching. Then, a BS power consumption minimization problem in the STAR-RIS assisted downlink communications was considered for each protocol. In <cit.>, a STAR-RIS-assisted MIMO system was studied based on the energy splitting scheme. The precoding matrices and the transmitting and reflecting coefficients were optimized by the block coordinate descent algorithm. What's more, owing to the unique dual-mode feature of the STAR-RIS, NOMA is naturally applied in STAR-RIS assisted networks <cit.>.
In <cit.>, one transmitted user and one reflected user were paired as a NOMA cluster with the aid of the STAR-RIS. In <cit.>, the STAR-RIS was combined with the NOMA technology, and the sum secrecy rate of the artificial noise aided communication network was maximized. * RIS-Assisted UAV Networks: Currently, there have been a few works employing conventional reflecting-only RISs in UAV-based communication networks. In <cit.>, the RIS was deployed on the ground and acted as a passive relay, while in <cit.>, the RIS was mounted on the UAV and acted as an aerial mobile relay. Specifically, in <cit.>, the RIS was introduced to tackle the blocked link from the UAV-BS (UBS) to ground terminals by providing an additional reflecting communication link. The 3D trajectory of the UAV and the phase shifts of the RIS were designed to maximize the data transmission rate. In <cit.>, the RIS was deployed to enhance the communication quality and reduce the movement of the UAV, thus saving the UAV energy. It is notable that in the above existing RIS assisted UAV networks, all users were located on one side of the RIS, and the UAV also flew on the same side. In <cit.>, the authors aimed to maximize the throughput of a RIS-UAV relaying communication system. In <cit.>, a UAV equipped with a RIS was used to assist the ground base station to cover users in hotspot areas. In <cit.>, multiple UAVs were equipped with RISs and acted as aerial passive relays between the on-site command center and the trapped users in disaster areas. The authors mainly focused on the composite fading channel and the joint bandwidth-power allocation, where the design of the RIS beamforming and the UAV trajectory was not involved. §.§ Motivations and Contributions As summarized above, most related works considered conventional reflecting-only RIS assisted UAV networks. As the incident signals can only be reflected by one side of the RIS, there exist geographic constraints on transmitters and receivers, meaning that the UAV has to be on the same side of the RIS as the users. In contrast, the incident signals can be reflected and transmitted by the STAR-RIS simultaneously, thereby breaking through the geographic constraints and fully reaping the benefits of UAVs and RISs. However, currently, few studies investigate the interplay between STAR-RIS and UAV communications [31,32]. In [31], the sum rate of users in a STAR-RIS assisted UAV communication system was maximized by jointly optimizing the STAR-RIS beamforming, UAV trajectory and power allocation. In [32], the authors also aimed to improve the sum rate by employing a STAR-RIS in UAV communications, where the UAV was equipped with multiple antennas. It is worth noting that these existing works involved a single STAR-RIS and employed the conventional orthogonal multiple access (OMA) method, which did not fully exploit the advantages of the STAR-RIS. What's more, none of them focused on the post-disaster emergency scenarios, where limited user energy and a guaranteed minimum transmission rate need to be considered together. Motivated by these observations, we propose a multiple STAR-RISs assisted UAV emergency communication network with NOMA. Our objective is to further explore the benefits of combining STAR-RIS and NOMA in UAV networks, and to provide a feasible and efficient network architecture for post-disaster emergency communications.
Furthermore, to tackle the cumulative constraints on user energy and rate in post-disaster scenarios, we develop a novel joint optimization algorithm, which is a combination of Lagrange relaxation and proximal policy optimization. The proposed algorithm updates the penalty coefficients adaptively, enabling it to achieve the optimization objective while satisfying the constraints well. The main contributions of this paper are summarized as follows. * We propose a STAR-RIS assisted UAV NOMA emergency communication network, where a single UAV acts as the aerial BS and multiple STAR-RISs are deployed on the ground. Each UE accesses the nearest STAR-RIS to perform data uploading, and NOMA is employed for UEs located on the same side of the same STAR-RIS. Targeting the practical requirements of post-disaster communications, we formulate a throughput maximization problem subject to the cumulative constraints on the minimum UE average rate and the maximum UE energy consumption, where the UAV trajectory, STAR-RIS beamforming, and time and power allocation are jointly optimized. * We formulate the throughput maximization problem as a Constrained Markov Decision Process (CMDP), and propose a Lagrange based reward constrained proximal policy optimization (LRCPPO) algorithm to solve it. Applying the Lagrange relaxation, the CMDP is converted into an unconstrained problem with a min-max two-layer structure. Given the Lagrange multipliers, the inner layer problem is then formulated as a Markov Decision Process (MDP) with a penalized reward function, and solved by the proximal policy optimization (PPO) algorithm. As for the outer layer problem, the Lagrange multipliers are updated by gradient descent. The proposed LRCPPO algorithm provides an adaptive and effective method for solving long-term optimization problems with cumulative constraints. * We illustrate the performance of the proposed network architecture and the LRCPPO algorithm by simulation. Three cases with different rate and energy constraints are considered. Numerical results show that 1) the combination of STAR-RISs and NOMA attains significant throughput gains over the conventional reflecting-only RISs and OMA for UAV emergency communications; 2) the proposed LRCPPO algorithm satisfies the constraints well in all cases, and the UAV trajectory can be adaptively adjusted according to the constraints; 3) the proposed LRCPPO algorithm evidently achieves higher throughput than the common reward shaping based algorithm and other benchmarks. The rest of this paper is organized as follows. Section 2 introduces the system model of the proposed multiple STAR-RISs assisted UAV NOMA emergency communication network and formulates a long-term throughput maximization problem. In Section 3, a Lagrange based reward constrained proximal policy optimization algorithm is proposed. Section 4 provides numerical results. Finally, Section 5 concludes this paper. § SYSTEM MODEL AND PROBLEM FORMULATION After disasters, the original terrestrial communication system is almost paralyzed. The UAV-mounted BS is a promising solution to rebuild communication connections for post-disaster areas quickly. However, the endurance and coverage of UAVs are usually limited. Deploying STAR-RISs to assist the communication between the ground UEs and the UBS will be effective in further improving the efficiency of future emergency communication networks. This paper focuses on the uplink transmission in the STAR-RIS assisted UAV emergency communication network.
The list of notations is illustrated in Table <ref>. §.§ Network Architecture As shown in Fig. <ref>, K UEs are randomly distributed in a post-disaster area, where the ground BS is severely damaged. One UBS is dispatched and I available STAR-RISs are deployed to recover the post-disaster communication. Both the UBS and the UEs are equipped with a single antenna. The STAR-RISs are operated with the time switching protocol, which means that the STAR-RISs periodically switch all elements between the transmitting mode and the reflecting mode <cit.>. It has been proved in <cit.> that the optimal location of the RIS should be close to the receiver or the transmitter. Inspired by this insightful conclusion, we assume that each UE selects the STAR-RIS which is closest to it to perform data uploading [In post-disaster scenarios, there are very likely to be only a few proper candidate locations for STAR-RISs. Based on this practical reality, we consider fixed and given locations of STAR-RISs in this paper.]. The UE data signal received by the UBS is a superposition of the signals from two transmission links: the direct link (UE–UBS link) and the STAR-RIS aided link (UE–STAR-RIS–UBS link). The latter is further divided into a transmitting link and a reflecting link, depending on the relative locations of the UAV, the UE and the STAR-RIS. When the UAV and the UE are on the same side with respect to the corresponding STAR-RIS, the data uploading exploits the reflecting link via the STAR-RIS. Otherwise, the signal is uploaded to the UBS by exploiting the transmitting link. Suppose that all STAR-RISs have the same number of elements, which is denoted by M. The 3D locations of the UEs and STAR-RISs are assumed to be fixed, and are denoted by 𝐪_k = [ q_k,x,q_k,y,H_k] and 𝐪_i = [ q_i,x,q_i,y,H_i], respectively. The UAV flies at the fixed altitude of H_U within the duration of T, and its maximum flight speed is v_max. The duration T is divided into N time slots of the same length τ. Since τ v_max≪H_U, the location of the UAV in each time slot is approximately regarded as unchanged. Then, the UAV flight trajectory is characterized by a set of vectors ℚ= {𝐪_U[n]}_n = 1^N, where ‖𝐪_U[n] - 𝐪_U[n - 1]‖≤v_maxτ. Let 𝒦_i denote the set of UEs served by the i-th STAR-RIS, where 𝒦 = 𝒦_1∪𝒦_2∪ ... ∪𝒦_I and 𝒦_i∩𝒦_i' = ∅, ∀ i ≠ i', i,i' = 1,2,...,I. For 𝒦_i, the set of UEs located on the left side of STAR-RIS i is denoted by 𝒦_i^L = { k ∈𝒦_i|q_k,x < q_i,x}, and the set of UEs located on the right side is denoted by 𝒦_i^R = { k ∈𝒦_i|q_k,x≥q_i,x}. The sizes of 𝒦_i^L and 𝒦_i^R are denoted by K_i^L and K_i^R, respectively. Each STAR-RIS and the corresponding served UEs occupy orthogonal spectrum resources in the form of frequency division multiple access (FDMA). Within 𝒦_i, using time division multiple access (TDMA), UEs in 𝒦_i^L and UEs in 𝒦_i^R are served sequentially. Then, UEs in 𝒦_i^L (𝒦_i^R) upload data at the same time and in the same frequency band via non-orthogonal multiple access (NOMA). The available frequency bandwidth of the system is denoted by BW. The duration allocated to 𝒦_i^L at time slot n is denoted by τ _i^L[n] and the duration allocated to 𝒦_i^R is denoted by τ _i^R[n], where τ _i^L[n] + τ _i^R[n] = τ. Note that τ _i^L[n] and τ _i^R[n] also determine the time switching allocation during which STAR-RIS i works in the transmitting mode or the reflecting mode. The framework of timescales and multiple access modes is shown in Fig. <ref>.
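As a concrete illustration of the association and grouping rule just described (this sketch is ours and not part of the original system model; the helper name and array layout are assumptions), the nearest-STAR-RIS selection and the left/right partition can be written as follows.

import numpy as np

def associate_users(q_ue, q_ris):
    # q_ue: (K, 3) UE positions, q_ris: (I, 3) STAR-RIS positions (both fixed).
    dists = np.linalg.norm(q_ue[:, None, :] - q_ris[None, :, :], axis=-1)
    serving = dists.argmin(axis=1)                      # nearest STAR-RIS per UE
    left = [np.where((serving == i) & (q_ue[:, 0] < q_ris[i, 0]))[0]
            for i in range(len(q_ris))]                 # K_i^L: q_k,x < q_i,x
    right = [np.where((serving == i) & (q_ue[:, 0] >= q_ris[i, 0]))[0]
             for i in range(len(q_ris))]                # K_i^R: q_k,x >= q_i,x
    return serving, left, right

# Example with the STAR-RIS positions used later in the simulation section;
# the UE positions here are random and purely illustrative.
q_ris = np.array([[400, 600, 10], [200, 300, 10], [600, 200, 10]])
q_ue = np.random.uniform([0, 0, 1.5], [800, 800, 1.5], size=(12, 3))
serving, K_L, K_R = associate_users(q_ue, q_ris)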
§.§ Channel Model At time slot n, the complex equivalent baseband channel vectors between the UE k and the STAR-RIS i, between the STAR-RIS i and the UBS, and between the UE k and the UBS are denoted by 𝐡_k - i[n] ∈ℂ^M × 1, 𝐡_i - U[n] ∈ℂ^M × 1, h_k - U[n] ∈ℂ^1 × 1, respectively. Both the large-scale fading and the small-scale fading are captured, and modeled as Rician fading <cit.>. Specifically, the channel coefficient between the UE k and the STAR-RIS i is given by 𝐡_k - i[n] = √(ξ _k - i)( √(G_k - i/G_k - i + 1)𝐡_k - i^LoS + √(1/G_k - i + 1)𝐡_k - i^NLoS[n]), where G_k - i denotes the Rician factor. ξ _k - i denotes the large-scale fading coefficient, which is determined by the distance d_k - i = ‖𝐪_k - 𝐪_i‖ and given by ξ _k - i = ξ _0/d_k - i^α _1. ξ _0 is the channel power at the reference distance of 1 meter, and α _1 is the path loss exponent. 𝐡_k - i^LoS and 𝐡_k - i^NLoS[n] denote the line-of-sight (LoS) channel component and the non-line-of-sight (NLoS) component, respectively. It is assumed that the STAR-RIS employs a uniform linear array (ULA) of elements, and then 𝐡_k - i^LoS is given by 𝐡_k - i^LoS = e^ - j2πd_k - i/λ×[ 1,e^ - j2π d/λcosϕ _k - i,...,e^ - j2π (M - 1)d/λcosϕ _k - i]^T, where d is the STAR-RIS element spacing, λ denotes the carrier wavelength, and cosϕ _k - i = ( q_i,x - q_k,x)/d_k - i is the cosine of the angle of arrival (AoA). The NLoS component of each element 𝐡_k - i^NLoS[n] is assumed to be independent and identically distributed, following the circularly symmetric complex Gaussian distribution with zero mean and unit variance. Similarly, the channel coefficient between the STAR-RIS i and the UBS is 𝐡_i - U[n] = √(ξ _i - U[n])( √(G_i - U/G_i - U + 1)𝐡_i - U^LoS[n] + √(1/G_i - U + 1)𝐡_i - U^NLoS[n]), where ξ _i - U[n] = ξ _0/d_i - U^α _2[n], d_i - U[n] = ‖𝐪_U[n] - 𝐪_i‖. α _2 denotes the path loss exponent of the STAR-RIS–UBS link and G_i - U denotes the Rician factor. The LoS component is given by 𝐡_i - U^LoS[n] = e^ - j2πd_i - U[n]/λ×[ 1,e^ - j2π d/λcosϕ _i - U[n],...,e^ - j2π (M - 1)d/λcosϕ _i - U[n]]^T, where cosϕ _i - U[n] = ( q_U,x[n] - q_i,x)/d_i - U[n] is the cosine of the angle of departure (AoD). The NLoS component 𝐡_i - U^NLoS[n] also follows the circularly symmetric complex Gaussian distribution with zero mean and unit variance. For the direct link (UE k–UBS), the channel vector is given as h_k - U[n] = √(ξ _k - U[n])( √(G_k - U/G_k - U + 1) h_k - U^LoS[n] + √(1/G_k - U + 1) h_k - U^NLoS[n]), where ξ _k - U[n] = ξ _0/d_k - U^α _3[n], d_k - U[n] = ‖𝐪_U[n] - 𝐪_k‖. α _3 denotes the path loss exponent of the UE–UBS link and G_k - U is the corresponding Rician factor. The LoS component is h_k - U^LoS[n] = e^ - j2πd_k - U[n]/λ and the NLoS component follows h_k - U^NLoS[n] ∼𝒞𝒩(0,1). According to the i-th STAR-RIS's location with respect to the UBS at time slot n, it can be determined whether 𝒦_i^L (𝒦_i^R) is a reflected UE cluster or a transmitted UE cluster. ζ[n] ∈{ 0,1} is used to represent the reflection/transmission flag of UE clusters, where the value is 1 for reflection and 0 for transmission: ζ ^κ_i^L[n] = {[ 1, if q_U,x[n] ≤q_i,x; 0, if q_U,x[n] > q_i,x ]., ζ ^κ_i^R[n] = {[ 1, if q_U,x[n] > q_i,x; 0, if q_U,x[n] ≤q_i,x ]..
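To make the channel model above tangible, the following sketch (our own illustration, not the authors' simulation code) draws one realization of the Rician UE–STAR-RIS channel vector 𝐡_k-i[n], assuming half-wavelength element spacing; the numerical values of ξ_0, the path loss exponent, the Rician factor and the carrier wavelength are placeholder examples, not the paper's settings.

import numpy as np

def rician_channel(q_tx, q_ris, M, rician_G=3.0, xi0=1e-3, alpha=2.2, wavelength=0.1):
    # Placeholder parameter values; the paper specifies its own in the simulation setup.
    d = np.linalg.norm(q_tx - q_ris)
    xi = xi0 / d**alpha                                  # large-scale fading xi_{k-i}
    cos_phi = (q_ris[0] - q_tx[0]) / d                   # cosine of the AoA
    m = np.arange(M)
    los = np.exp(-1j * 2 * np.pi * d / wavelength) \
        * np.exp(-1j * np.pi * m * cos_phi)              # ULA steering vector, lambda/2 spacing
    nlos = (np.random.randn(M) + 1j * np.random.randn(M)) / np.sqrt(2)  # CN(0, 1) per element
    return np.sqrt(xi) * (np.sqrt(rician_G / (rician_G + 1)) * los
                          + np.sqrt(1 / (rician_G + 1)) * nlos)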
The transmission coefficient matrix and the reflection coefficient matrix of STAR-RIS i at time slot n are represented by the diagonal matrices Λ_i^t[n] = diag( e^jθ _i1^t[n],e^jθ _i2^t[n],...,e^jθ _iM^t[n]) and Λ_i^r[n] = diag( e^jθ _i1^r[n],e^jθ _i2^r[n],...,e^jθ _iM^r[n]), respectively. θ _im^t[n],θ _im^r[n] ∈ [0,2π ) denote the transmission phase shift and the reflection phase shift of the m-th element of STAR-RIS i, respectively. Finally, the superimposed channel vector for the uplink transmission of UE k at time slot n is expressed as h_k[n] = {[ 𝐡_i - U^H[n]( ζ ^κ_i^L[n]Λ_i^r[n] + ( 1 - ζ ^κ_i^L[n])Λ_i^t[n])𝐡_k - i[n] + h_k - U[n], for k ∈𝒦_i^L, i = 1,2,...,I; 𝐡_i - U^H[n]( ζ ^κ_i^R[n]Λ_i^r[n] + ( 1 - ζ ^κ_i^R[n])Λ_i^t[n])𝐡_k - i[n] + h_k - U[n], for k ∈𝒦_i^R, i = 1,2,...,I ]., where the first term corresponds to the STAR-RIS aided link and the second term corresponds to the direct link. §.§ NOMA Communications The UBS receives the signals from the UEs in 𝒦_i^L (𝒦_i^R) using NOMA. Successive interference cancellation (SIC) is employed at the UBS, and the signals of multiple UEs are decoded in descending order of channel gain. The decoding order of UE k at time slot n is denoted by ω _k[n], where ω _k[n] ∈{ 1,2,...,K_i^L} for k ∈𝒦_i^L and ω _k[n] ∈{ 1,2,...,K_i^R} for k ∈𝒦_i^R, i = 1,2,...,I. The transmission power of UE k at time slot n is represented by P_k[n]. Based on the above network architecture and channel model, the average transmission rate of UE k at time slot n is given by r_k[n] = {[ τ _i^L[n]/(τ I) log _2( 1 + P_k[n]| h_k[n]|^2/( ∑_ω _k'[n] > ω _k[n],k' ∈𝒦 _i^LP_k'[n]| h_k'[n]|^2 + σ ^2)), for k ∈𝒦_i^L, i = 1,2,...,I; τ _i^R[n]/(τ I) log _2( 1 + P_k[n]| h_k[n]|^2/( ∑_ω _k'[n] > ω _k[n],k' ∈𝒦 _i^RP_k'[n]| h_k'[n]|^2 + σ ^2)), for k ∈𝒦_i^R, i = 1,2,...,I ]., where σ ^2 is the noise power at the UBS receiver. §.§ Problem Formulation In post-disaster emergency communication scenarios, a large throughput is desired for collecting UE data. At the same time, practical constraints have to be considered. On the one hand, a minimum transmission rate guarantee is required to ensure a reliable communication link for each trapped user. On the other hand, due to the paralyzed power system after disasters, the energy storage of UEs would be limited and cannot be recharged in time. It is assumed that the maximum available energy for each UE in the duration T is E_max. By jointly optimizing the UAV trajectory 𝐪_U[n], the UE transmit power P_k[n], the STAR-RIS time switching allocation τ _i[n] = {τ _i^L[n],τ _i^R[n]}, and the STAR-RIS phase shift matrices Λ_i^r[n],Λ_i^t[n], our goal is to maximize the long-term uplink throughput of the proposed emergency communication network while satisfying the constraints on the minimum required UE average rate and the maximum UE available energy [The proposed network structure can be applied in other scenarios where the existing infrastructure fails to meet communication needs, such as large-scale sports events, but the network objective and constraints involved should be designed based on the actual situation.]. The optimization problem is formulated as follows. Max_𝐪_U[n],P_k[n],τ_i[n],Λ_i^r[n],Λ_i^t[n]1/N∑_n = 1^N ∑_k = 1^K r_k[n],   s.t.
1/N∑_n = 1^N r_k[n] ≥r_min, ∀ k ∈𝒦, ∑_n = 1^N P_k[n]τ _i^L[n] ≤E_max, ∀ k ∈𝒦 _i^L, i = 1,2,...,I, ∑_n = 1^N P_k[n]τ _i^R[n] ≤E_max, ∀ k ∈𝒦 _i^R, i = 1,2,...,I, ‖𝐪_U[n] - 𝐪_U[n - 1]‖≤v_maxτ, n = 1,2,...,N, 𝐪_U[0] = 𝐪_0, P_k[n] ≤P_max, n = 1,2,...,N, k = 1,2,...,K, τ _i^L[n] + τ _i^R[n] = τ, i = 1,2,...,I, n = 1,2,...,N, θ _im^t[n],θ _im^r[n] ∈ [0,2π ), m = 1,2,...,M, i = 1,2,...,I, n = 1,2,...,N. Constraint (<ref>) requires that the UE average rate cannot be lower than r_min. Constraints (<ref>) and (<ref>) require that the energy consumed by each UE within the duration T does not exceed E_max. Constraint (<ref>) indicates that the maximal UAV flight speed is v_max. Constraint (<ref>) means that the UAV takes off at the fixed position 𝐪_0, and (<ref>) indicates that the maximum transmit power of each UE is P_max. To tackle this non-convex optimization problem, we mainly face the following three challenges. Firstly, the four optimization variables are highly coupled with each other, so it is difficult and complicated to divide the original problem into sub-problems and then solve them successively. Secondly, as the formulated problem is long-term and dynamic, achieving long-term optimization by solving a per-single-slot optimization is not rigorous in theory. Thirdly, unlike constraints on a single time slot <cit.>, the average rate and energy consumption constraints in (<ref>), (<ref>) and (<ref>) are long-term and cumulative, bringing difficulties to the design of the reward function in the Markov decision process. Therefore, traditional reinforcement learning methods are not suitable to be applied directly. § LAGRANGE BASED REWARD CONSTRAINED PROXIMAL POLICY OPTIMIZATION ALGORITHM In this section, we propose a Lagrange based reward constrained proximal policy optimization algorithm, abbreviated as LRCPPO, to solve (<ref>). First of all, the long-term and dynamic optimization problem is formulated as a Constrained Markov Decision Process (CMDP). Secondly, applying the Lagrange relaxation method, the CMDP is converted into an unconstrained problem with a min-max two-layer structure. The maximization problem in the inner layer is formulated as a Markov Decision Process with a penalized reward function, and solved by the reward constrained proximal policy optimization algorithm. The minimization problem in the outer layer is solved by gradient descent. The proposed LRCPPO algorithm updates the relevant parameters on three timescales, and converges to a feasible suboptimal solution. §.§ CMDP A fundamental CMDP is defined by the tuple {S,A,r,P,s_0, c,δ} <cit.>. S denotes the state space. A denotes the action space. r:S × A × S →ℝ represents the reward function. P:S × A × S → [0,1] is the state transition probability matrix and s_0 is the initial state. c:S × A × S →ℝ denotes the immediate constraint function (also called the penalty function) at one time slot. δ∈ [0,1] is the threshold of the corresponding long-term cumulative constraint function. In our proposed system model and optimization problem, the above seven elements are defined as follows. * State S. The state in time slot n is the UAV 2D location [ q_U,x[n],q_U,y[n]]. * Action A. The action is composed of the UAV flight direction (denoted by the angle θ _U[n] ∈ [0,2π )), the UAV flight speed v[n] ∈ [0,v_max], the UE transmit power P_k[n], the time switching allocation τ_i[n] = {τ _i^L[n],τ _i^R[n]}, and the beamforming matrices Λ_i^r[n], Λ_i^t[n]. * Reward function r.
The reward function at time slot n is defined as the sum of the uplink rates of all UEs, r(s_n,a_n) = ∑_k = 1^K r_k[n]. * State transition probability matrix P. The UAV position at the next time slot is fully determined by the position and the action at the current time slot, so the transition is deterministic and the corresponding element of P takes the value of 1. * Initial state s_0 is the UAV starting point 𝐪_0 = [0,0,H_U]. * Constraint function c. For the rate constraint in (<ref>), the immediate rate constraint at time slot n is defined as c_r,k(s_n,a_n) = r_min - r_k[n], k ∈𝒦. For the energy constraints in (<ref>) and (<ref>), the immediate energy constraint at time slot n is c_e,k(s_n,a_n) = {[ P_k[n]τ _i^L[n] - E_max/N, ∀ k ∈𝒦 _i^L, i = 1,2,...,I; P_k[n]τ _i^R[n] - E_max/N, ∀ k ∈𝒦_i^R, i = 1,2,...,I ].. * Constraint threshold δ. Based on the definition of the constraint functions, the constraint (<ref>) is equivalently expressed as ∑_n = 1^N c_r,k(s_n,a_n)≤ 0, k ∈𝒦, and the constraints (<ref>) and (<ref>) are equivalent to ∑_n = 1^N c_e,k(s_n,a_n)≤ 0, k ∈𝒦. Therefore, δ _r,k = δ _e,k = 0 in this paper. A policy π :S →Δ _A is defined as a probability distribution over actions, and π (a|s) ∈ [0,1] denotes the probability of taking action a at state s. The set of all possible policies is denoted by Π, and the objective function guided by π is given by J_R^π= 𝔼^π[∑_n r(s_n,a_n) |s_0 = 𝐪_0]. Similarly, the cumulative rate constraint function can be expressed as J_C_r,k^π= 𝔼^π[∑_n c_r,k(s_n,a_n) |s_0 = 𝐪_0], and the cumulative energy constraint function is given by J_C_e,k^π= 𝔼^π[∑_n c_e,k(s_n,a_n) |s_0 = 𝐪_0]. Then, the optimization problem (<ref>) can be equivalently expressed as max_π∈Π J_R^π,   s.t.    J_C_r,k^π≤ 0, k ∈𝒦, J_C_e,k^π≤ 0, k ∈𝒦, (<ref>)∼(<ref>). §.§ Lagrange Relaxation We employ the Lagrange relaxation method to solve the CMDP <cit.>. Given (<ref>), the Lagrange function is defined as L(λ _r,k,λ _e,k,π ) = J_R^π - ∑_k = 1^K λ _r,kJ_C_r,k^π - ∑_k = 1^K λ _e,kJ_C_e,k^π, where λ _r,k≥0 is the Lagrange multiplier (or, equivalently, a penalty coefficient) for the rate constraint of UE k and λ _e,k≥ 0 is the Lagrange multiplier for the energy constraint of UE k. Then the optimization problem is converted into the following unconstrained problem min_λ _r,k≥0, λ _e,k≥0 max_π L(λ _r,k,λ _e,k,π )   s.t.  (<ref>)∼(<ref>). Due to weak duality, there is a gap between the solution of (<ref>) and that of the Lagrange dual problem (<ref>). In fact, the solution of (<ref>) provides an upper bound for the target solution. What's more, as the Lagrange multipliers increase, the solution of (<ref>) finally converges to that of (<ref>) <cit.>. Therefore, our goal is transformed into finding the saddle point ( π ^*(λ _r,k^*,λ _e,k^*),λ _r,k^*,λ _e,k^*) of (<ref>). As (<ref>) is a two-layer optimization problem, we apply a two-step approach to solve it. The inner layer, max_π L(λ _r,k,λ _e,k,π ), is solved with given λ _r,k,λ _e,k. In the outer layer, the Lagrange multipliers are increased by gradient descent until the corresponding constraints are satisfied. §.§ The inner layer : Reward Constrained Proximal Policy Optimization In this subsection, we propose a reward constrained proximal policy optimization method to solve the inner layer of (<ref>), max_π L(λ _r,k,λ _e,k,π ), with given λ _r,k and λ _e,k. Employing the Lagrange relaxation, the constraints (<ref>) ∼ (<ref>) are incorporated into the initial objective function; a small numerical sketch of the resulting penalized reward and the outer-layer multiplier update is given below.
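For illustration only (this is not the authors' implementation; all function and variable names are ours), the per-slot penalized reward used by the inner layer and the projected gradient update of the multipliers used by the outer layer can be sketched as follows.

import numpy as np

def penalized_reward(r, c_rate, c_energy, lam_rate, lam_energy):
    # r: sum-rate reward at one slot; c_rate, c_energy: per-UE constraint values
    # c_{r,k}(s_n, a_n) and c_{e,k}(s_n, a_n); lam_*: per-UE Lagrange multipliers.
    return r - np.dot(lam_rate, c_rate) - np.dot(lam_energy, c_energy)

def update_multipliers(lam, J_C, eta):
    # One outer-layer step: lam <- max(0, lam + eta * J_C), where J_C is the
    # cumulative constraint value of the last episode and eta is the step-size.
    return np.maximum(0.0, lam + eta * J_C)

The max(0, ·) projection plays the role of the operator that keeps the multipliers non-negative, so a constraint that is satisfied (negative cumulative value) gradually relaxes its penalty, while a violated constraint sees its penalty grow.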
As a result, the maximization problem max_π L(λ _r,k,λ _e,k,π ) in the inner layer can be formulated as a fundamental MDP {S,A,r̂,P,s_0} with given Lagrange multipliers. The state, action, state transition probability matrix, and initial state of the MDP are exactly the same as those of the CMDP. However, it is worth noting that the reward function of the MDP is redefined and named the penalized reward. Penalized Reward. The penalized reward function is defined as r̂(λ _r,k,λ _e,k,s_n,a_n) = r(s_n,a_n) - ∑_k = 1^K λ _r,kc_r,k(s_n,a_n) - ∑_k = 1^K λ _e,kc_e,k(s_n,a_n). Note that the roles of the penalty coefficients in (<ref>) are played by the Lagrange multipliers. A reward shaping method is applied to tackle the constraints of the CMDP in some reinforcement-learning-based works <cit.>. This method also adds penalty terms to the initial reward. However, the penalty coefficients in reward shaping are determined by manual selection, which brings two serious drawbacks. On the one hand, the manual selection of coefficients is a time-consuming and computationally intensive process of hyper-parameter tuning, and it becomes very tricky when the number of constraints increases. On the other hand, reward shaping is a non-adaptive approach: once the communication network parameters change, the penalty coefficients need to be re-selected. The Lagrange based penalized reward proposed in this paper is able to make up for the drawbacks of reward shaping and has higher interpretability. Then the corresponding penalized objective function is given by Ĵ^π( λ _r,k,λ _e,k,s) = 𝔼^π[ ∑_n r̂( λ _r,k,λ _e,k,s_n,a_n)|s_0 = 𝐪_0], and the MDP is expressed as max_πĴ^π( λ _r,k,λ _e,k,s)  s.t.  (<ref>)∼(<ref>). To solve this MDP, the proximal policy optimization (PPO) <cit.> algorithm is used. PPO is an Actor-Critic based reinforcement learning algorithm, which is able to tackle MDP problems with continuous actions efficiently. More importantly, by introducing the proximal optimization and clipping functions, PPO has high data utilization and converges easily. Next, we illustrate how the PPO algorithm works in two parts: the Critic network and the Actor network. Critic Network. The input of the Critic neural network is the state, and the output is the corresponding estimated value. The parameters of the Critic network are denoted by θ _c. At each iteration, the states {s_j}_j = 1^|B| from a batch of samples B = {s_j,a_j,r̂_j,s'_j}_j = 1^|B| are input, and {V^θ _c(s_j,λ _r,k,λ _e,k)}_j = 1^|B| is obtained. |B| is the batch size of the sample memory. Then the target value function is calculated by V'^θ _c(s,λ _r,k,λ _e,k) = ∑_n γ ^nr̂(λ _r,k,λ _e,k,s_n,a_n) |s_0 = s = r̂(λ _r,k,λ _e,k,s,a) + γV'^θ _c(s',λ _r,k,λ _e,k), where γ∈ [0,1] is the discount factor for future penalized rewards and V'^θ _c(s'_|B|,λ _r,k,λ _e,k) = V^θ _c(s'_|B|,λ _r,k,λ _e,k). The temporal difference (TD) error (loss function) of the Critic network is defined as Loss(θ _c,λ _r,k,λ _e,k) = 1/|B|∑_j = 1^|B|( V'^θ _c(s_j,λ _r,k,λ _e,k) - V^θ _c(s_j,λ _r,k,λ _e,k)) ^2. A^θ _c(s_j,λ _r,k,λ _e,k) = V'^θ _c(s_j,λ _r,k,λ _e,k) - V^θ _c(s_j,λ _r,k,λ _e,k) is called the advantage function at the state s_j. The Critic network is trained by minimizing the TD error, that is, θ _c^' = θ _c - η _c∇ _θ _cLoss(θ _c), where η _c is the updating step-size (learning rate) for θ _c. Actor Network. The input of the Actor network is also the state, and the output is the parameterized policy.
To make more effective use of data, the importance sampling technology is applied in the Actor network of PPO. Specifically, the Actor network consists of two policy neural networks with completely the same structure: the behavior policy network (for generating data) and the target policy network (for interacting with environment). The parameters are denoted by θ _a_behavior and θ _a_target respectively. Note that only the target policy network θ _a_target is trained, while the θ _a_behavior is merely copied from θ _a_target at intervals of fixed iterations. At each iteration, {s_j}_j = 1^|B| is fed into the two policy networks, and then the probabilities of {a_j}_j = 1^|B| are obtained, denoted as {p_θ _a_target( a_j|s_j)}_j = 1^|B| and {p_θ _a_behavior( a_j|s_j)}_j = 1^|B| respectively. The importance weight of the action a_j is defined as IW(a_j) = min{p_θ _a_target( a_j|s_j)/p_θ _a_behavior( a_j|s_j),clip( p_θ _a_target( a_j|s_j)/p_θ _a_behavior( a_j|s_j),1 - ε ,1 + ε)}, where ε is a hyper-parameter. Then the loss function of the Actor network is given by Loss(θ _a_target) = 1/|B|∑_j = 1^|B|( IW(a_j)A^θ _c( s_j,λ _r,k,λ _e,k)). The network parameters are updated by θ '_a_target = θ _a_target + η _a∇ _θ _a_targetLoss(θ _a_target), where η _a is the learning rate for the Actor. Unlike A3C (updated per step) or Policy Gradient (updated per episode), the network parameters θ _c , θ _a_target of the PPO are updated per |B|-steps. Besides, the Actor training is guided by the output of the Critic, so η _c > η _a is often required to ensure that the Critic update is performed on a faster timescale than that of the Actor. The detailed diagram of the reward constrained proximal policy optimization is shown in Fig. <ref>. §.§ The outer layer : Gradient Descent In this subsection, with policy π_θ obtained by the reward constrained proximal policy optimization algorithm, we aim to update the Lagrange multipliers λ _r,k, λ _e,k in the outer layer of (<ref>) by gradient descent. The partial derivatives ∇ _λ _r,kL(λ _r,k,λ _e,k,π_θ ) and ∇ _λ _e,kL(λ _r,k,λ _e,k,π_θ ) are easily derived from (<ref>), which are denoted as ∇ _λ _r,kL(λ _r,k,λ _e,k,π_θ ) = - J_C_r,k^π_θ and ∇ _λ _e,kL(λ _r,k,λ _e,k,π_θ ) = - J_C_e,k^π_θ respectively. Then, the Lagrange multipliers are updated as λ _r,k^' = Γ _λ[ λ _r,k - η _l,r∇ _λ _r,kL(λ _r,k,λ _e,k,π_θ )] = Γ _λ[ λ _r,k + η _l,rJ_C_r,k^π_θ], λ _e,k^' = Γ _λ[ λ _e,k - η _l,e∇ _λ _e,kL(λ _r,k,λ _e,k,π_θ )] = Γ _λ[ λ _e,k + η _l,eJ_C_e,k^π_θ], where Γ _λ is an operator keeping the Lagrange multipliers positive, that is λ _r,k,λ _e,k∈ [0,∞ ]. η _l,r and η _l,e are the updating step-sizes for λ _r,k and λ _e,k respectively. As J_C_r_,k^π and J_C_e_,k^π are long-term functions, all information over the N time slots is required for (<ref>) and (<ref>). In other words, the Lagrange multipliers are updated per episode (N-steps). The updating rate should be slower than the learning rates of Critic network and Actor network, that is, η _c > η _a > η _l,r/η _l,e. §.§ Discussion on the Proposed LRCPPO Algorithm We propose the LRCPPO based joint optimization algorithm to find the suboptimal solution of (<ref>), and the procedure is given in Algorithm <ref>.The agent in the proposed algorithm is acted by a central controller, which could be realized by the Software Defined Networking (SDN). The central controller is able to obtain the data required for training, including the positions of UEs and STAR-RISs and channel information. 
Also, the optimized policy is transmitted to the UAV, the UEs and the STAR-RISs by the central controller.  Convergence: The LRCPPO is a combination of the Lagrange relaxation and PPO, in which a three-timescale approach is involved. On the fast timescale, the penalized reward is estimated by the Critic network; on the intermediate timescale, the parameterized policy is learned by the Actor network; and on the slow timescale, the Lagrange multipliers are updated by gradient descent. In order to ensure the above three timescales, η _c > η _a > η _l,r, η _l,e is required. Based on Theorem 2 in <cit.>, it can be confirmed that as long as η _c > η _a > η _l,r, η _l,e is satisfied, the LRCPPO algorithm converges to a feasible solution. For a full proof of convergence of the three-timescale approximation process, please refer to Appendix E in <cit.>.  Complexity:  Recall that the state dimension of the CMDP is 2 and the action dimension is 2+K+I+2MI. The batch size of the sample memory is |B|. The LRCPPO is mainly composed of three parts: the Critic network, the Actor network and the Lagrange multiplier updates. The temporal computational complexity of the Critic network for one episode is 𝒪_C( ⌈N/|B|⌉( ( |B| + 1)( Z_C0Z_C1 + ∑_l = 1^L_C - 2Z_ClZ_C( l + 1) + Z_C( L_C - 1)Z_CL_C) + 3|B|)), where L_C is the number of layers of the Critic neural network, Z_Cl is the number of corresponding neurons, Z_C0 = 2 and Z_CL_C = 1. Similarly, the temporal computational complexity of the Actor network for one episode is 𝒪_A( ⌈N/|B|⌉( 2|B|( Z_A0Z_A1 + ∑_l = 1^L_A - 2Z_AlZ_A( l + 1) + Z_A( L_A - 1)Z_AL_A) + 3|B|)), where L_A is the number of layers of the Actor neural network, Z_Al is the number of corresponding neurons, Z_A0 = 2 and Z_AL_A = 2 + K + I + 2MI. The temporal computational complexity of updating the Lagrange multipliers for one episode is 𝒪_L( 2KN). Assuming that the proposed algorithm converges within G episodes, the temporal computational complexity of the LRCPPO is given as 𝒪( G( 𝒪_C + 𝒪_A + 𝒪_L)). § SIMULATION RESULTS In this section, numerical results are provided to verify the performance of the LRCPPO based joint optimization algorithm and the proposed STAR-RIS assisted UAV NOMA network. In the simulation, we consider a disaster area with a size of 800 m × 800 m. The number of STAR-RISs is 3 and their positions are fixed at [400,600,10], [200,300,10] and [600,200,10]. In addition, the element spacing of the STAR-RIS is assumed to be half of the carrier wavelength. UEs are distributed randomly in the area. The UAV's initial location is set to 𝐪_0= [0,0,200]. The main simulation setups are shown in Table <ref>. Three cases with different constraints are considered in the simulation. In case 1, there are no constraints on UE rate or UE energy. In case 2, the minimum average rate requirement is set to 300000 bps and the maximum available energy for each UE is 180 J, that is, r_min = 300000 bps and E_max = 180 J. In case 3, the constraints are more stringent, where the lower limit on the UE rate and the upper limit on the UE energy are 1000000 bps and 90 J, respectively. Figs. <ref>–<ref> show the feasibility of the proposed LRCPPO based joint optimization algorithm in the above three cases. The convergence is verified in Fig. <ref>, where the number of STAR-RIS elements is 40. Fig. <ref> shows that as the training progresses, the uplink throughput in all three cases increases quickly and finally tends to be stable.
It is worth noting that the convergence speed of the proposed algorithm slows down with stricter constraints. This is because more iterations are needed for updating the Lagrange multipliers to satisfy stricter constraints, and the parameters of the Actor network and the Critic network are updated accordingly. In addition, the optimized throughput for case 1, case 2 and case 3 decreases successively, being about 21.89 bps/Hz, 21.37 bps/Hz, and 20.76 bps/Hz, respectively. This shows that, to satisfy the UE rate constraints and UE energy consumption constraints, the overall performance of the system is sacrificed. Fig. <ref> shows the average rates and the cumulative energy consumption of the 12 UEs in the three cases. It can be observed that the rate constraints (<ref>) and the energy consumption constraints (<ref>) and (<ref>) of all 12 UEs are well satisfied by the proposed LRCPPO algorithm for both case 2 and case 3. The average rates without constraints vary greatly among the 12 UEs, where the maximum rate reaches 5.5 Mbps while the minimum rate is close to 0. The variance is narrowed in case 2 and significantly smaller in case 3. The cumulative energy consumption of the UEs in the three cases exhibits the same trend as the average rates. The above phenomenon shows that stricter constraints lead to stronger UE fairness, which is of great importance in emergency communication scenarios. In addition, Fig. <ref> illustrates that the constraint on the minimum UE average rate can increase the rates of some UEs, while the achievable rates of other UEs are reduced due to the limited UE energy. The increased component is smaller than the decreased component, so the overall throughput of the network decreases with stricter constraints, which provides a further explanation for the results in Fig. <ref>. In Fig. <ref>, the optimized UAV trajectories in the three cases are plotted. Within the duration T, the UAV takes off at the same point but ends up in different positions in the three cases. Without constraints, the UAV passes over the UEs and then hovers near a globally superior position. The globally optimal position is jointly determined by the path loss of both the direct link and the STAR-RIS aided link for the 12 UEs. In case 2 and case 3, the UAV adaptively adjusts its trajectory and enlarges its active area, so as to take each UE into account and contribute to satisfying the constraints. Then we evaluate the performance of the proposed LRCPPO based joint optimization algorithm in comparison to the following benchmark algorithms: * Reward shaping based PPO: As mentioned in Remark <ref>, reward shaping is a common method for solving constrained long-term optimization problems with reinforcement learning <cit.>. For our proposed problem, the reward function after reward shaping is defined as r(s_n,a_n) = ∑_k = 1^K r_k[n] - χ _r∑_k = 1^K c_r,k( s_n,a_n) - χ _e∑_k = 1^K c_e,k( s_n,a_n), where χ _r and χ _e are coefficients used to balance the values. To be fair, we also adopt the PPO algorithm for the reward shaping baseline. * Zero phase shift: The phase shifts of the STAR-RIS elements are all fixed at zero, while the other variables are still determined by the LRCPPO. * Random phase shift: The phase shifts of the STAR-RIS elements are randomly generated, while the other variables are still determined by the LRCPPO. * Equal time allocation: The time slot is equally allocated to the left UEs and the right UEs, that is, τ _i^L[n] = τ _i^R[n]=τ/2. The other variables are still determined by the LRCPPO.
* Circular UAV trajectory: The UAV flies at the maximum speed v_max along a circular path with [400,400,200] as the center and a radius of 400 m. The other variables are still determined by the LRCPPO. Fig. <ref> and Fig. <ref> illustrate the performance of the proposed algorithm in case 2 and case 3. Firstly, it is seen that the throughput obtained by the proposed LRCPPO, reward shaping based PPO, random phase shift, equal time allocation, and circular UAV trajectory all increases with the number of STAR-RIS elements in Fig. <ref> and Fig. <ref>. However, the throughput of the zero phase shift appears to be independent of the number of STAR-RIS elements and performs much worse than that of the other five algorithms. This shows that the phase shifts of the STAR-RIS have to be reasonably designed, so as to achieve the performance gain. In addition, no matter in case 1 or case 2, the uplink throughput achieved by the proposed LRCPPO algorithm is significantly greater than that obtained by the reward shaping based PPO algorithm. This verifies that, as explained in Remark <ref>, due to manual selection of the penalty coefficients, the non-adaptive reward shaping PPO algorithm inevitably sacrifices the overall network performance to satisfy the constraints. Differently, the penalty coefficients in the LRCPPO algorithm are acted by the Lagrange multipliers, and updated iteratively by gradient descent. Therefore, the LRCPPO is able to make up for the vital shortcoming of the reward shaping, and bring evident improvement in network performance. In fact, in both case 2 and case 3, the throughput of the proposed algorithm is the largest among all six algorithms, which verifies the superiority of the proposed algorithm. Furthermore, comparing Fig. <ref> and Fig. <ref>, we can find that this superiority is more pronounced with r_min = 300000 bps &  E_max = 180 J than r_min = 1000000 bps &  E_max = 90 J. This is because that as mentioned earlier, the stricter constraints on UE rate and UE energy significantly limit the maximum achievable network throughput. In Fig. <ref>, we demonstrate the performance of the proposed STAR-RIS assisted UAV NOMA communication network, by comparing it with two benchmark networks. One of the benchmarks is reflecting-only RIS assisted UAV NOMA communication network, where the RIS is only able to serve the UEs located at the same side of the RIS as the UAV. The other is STAR-RIS assisted UAV OMA communication network, where the UEs at the same side of the same STAR-RIS, 𝒦_i^L (𝒦_i^R), perform uplink data transmission by orthogonal frequency division multiple access (OFDMA). All lines in the figure are obtained by the proposed LRCPPO based algorithm. First of all, it is observed that with the increase in the number of (STAR-)RIS elements, the throughput of three different networks all shows an obvious upward trend. Secondly, no matter in case 2 (r_min = 300000 bps &  E_max = 180 J) or case 3 (r_min = 1000000 bps &  E_max = 90 J), with the same number of elements, the throughput of the proposed network is higher than that of the reflecting-only RIS assisted network, and much higher than that of the OMA-based network. This reveals that the combination of STAR-RIS and NOMA makes important contributions to the improvement of network performance. Besides, for the same network architecture, the throughput in case 2 is always higher than that in case 3, which illustrates again that the overall performance is sacrificed for meeting the individual UE constraints. 
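For reference, the two hand-crafted baselines discussed above (reward shaping and the circular trajectory) can be sketched in a few lines. The circle centre, radius, and the shaped-reward form follow the descriptions given above, while the default coefficients chi_r, chi_e and the waypoint parameterization are illustrative assumptions.

```python
import numpy as np

def circular_trajectory(n_slots, center=(400.0, 400.0, 200.0), radius=400.0):
    """Waypoints of the circular-trajectory baseline (centre and radius from the text);
    in practice the spacing between consecutive waypoints would be tied to v_max."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_slots, endpoint=False)
    x = center[0] + radius * np.cos(angles)
    y = center[1] + radius * np.sin(angles)
    z = np.full(n_slots, center[2])
    return np.stack([x, y, z], axis=1)

def shaped_reward(rates, rate_violations, energy_violations, chi_r=1.0, chi_e=1.0):
    """Reward-shaping baseline: fixed, hand-tuned penalty coefficients chi_r and chi_e."""
    return np.sum(rates) - chi_r * np.sum(rate_violations) - chi_e * np.sum(energy_violations)
```

The contrast with LRCPPO is that chi_r and chi_e stay fixed throughout training, whereas the Lagrange multipliers are updated iteratively, which is what yields the performance gap observed above.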
§ CONCLUSION In this paper, we investigated a novel STAR-RIS assisted UAV NOMA emergency communication network, where multiple STAR-RISs were employed to assist the UAV-mounted BS and NOMA was adopted for further performance improvement. We formulated a long-term uplink throughput maximization problem subject to constraints on the minimum UE average rate and the maximum UE available energy in post-disaster emergency communication scenarios, and developed a Lagrange-based reward constrained proximal policy optimization (LRCPPO) algorithm to solve it. Numerical results revealed that the constraints were well satisfied and that the UAV trajectory can be adaptively adjusted by the proposed LRCPPO algorithm. The proposed algorithm achieved the best performance among all benchmark algorithms considered. The results also showed that the proposed STAR-RIS assisted UAV NOMA framework significantly outperforms the benchmark reflecting-only RIS and OMA schemes. The joint optimization of STAR-RIS deployment and UAV trajectory will be investigated in our future work.
http://arxiv.org/abs/2307.00840v1
20230703082649
Greedy Selection for Heterogeneous Sensors
[ "Kaushani Majumder", "SibiRaj B. Pillai", "Satish Mulleti" ]
eess.SP
[ "eess.SP" ]
Submitted to the IEEE Transactions on Signal Processing Greedy Selection for Heterogeneous Sensors Kaushani Majumder, Student Member, IEEE, Sibi Raj B. Pillai, Satish Mulleti, Member, IEEE K. Majumder, S. R. B. Pillai, and S. Mulleti are with the Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, 400076, India. Emails: [email protected], [email protected], [email protected] August 1, 2023 ================================================================================================================================ In large-scale sensor networks, simultaneously operating all the sensors is power-consuming and computationally expensive. It is often necessary to adaptively select or activate a few sensors at a time. A greedy selection (GS) algorithm is widely used to select sensors in homogeneous sensor networks. It guarantees a worst-case performance of (1 - 1/e) ≈ 63% of the optimal solution when the performance metric is submodular. However, in heterogeneous sensor networks (HSNs), where the sensors can have different precision and operating costs, the sensor selection problem has not been explored sufficiently well. In this paper, a joint greedy selection (JGS) algorithm is proposed to compute the best possible subset of sensors in HSNs. We derive theoretical guarantees for the worst-case error of JGS for submodular performance metrics for an HSN consisting of two sets of sensors: a set of expensive high-precision sensors and a set of cheap low-precision sensors. A limit on the number of sensors from each class is stipulated, and we propose algorithms to solve the sensor selection problem and assess their theoretical performance guarantees. We show that the worst-case relative error approaches (1 - 1/e) when the stipulated number of high-precision sensors is much smaller than that of low-precision sensors. To compare the JGS algorithm with existing methods, we propose a frame potential-based submodular performance metric that considers both the correlation among the measurements as well as the heterogeneity of the sensors. Experimentally, we show that the JGS algorithm results in 4-10 dB lower error than existing methods. Sensor selection, heterogeneous sensor networks, submodular maximization, greedy algorithm, frame potential. § INTRODUCTION In many applications such as medical imaging and healthcare <cit.>, seismic processing <cit.>, environmental monitoring <cit.>, power networks <cit.>, smart homes and internet of things (IoT) networks <cit.>, etc., a large number of sensors are deployed in varied environments to collect data. In these applications, the task is often to estimate a low-dimensional parameter vector from the sensor measurements. The accuracy of the estimation task increases with the number of sensors or measurements. However, this also results in high power requirements to operate the sensors in addition to higher communication and storage costs. To reduce operational costs, one can deploy fewer sensors. However, such a method is not adaptive. Specifically, when the fields being measured and their noise characteristics change over time, a fixed deployment is inefficient.
Alternatively, it is advantageous to deploy a larger number of sensors and then choose the desired numbers based on the information available apriori. Activating only a few sensors at a time also leads to a considerable reduction in operating costs. The sensor networks could be either homogeneous, where all the sensors have similar accuracies, or heterogeneous. In heterogeneous sensor networks (HSNs), two or more sets of sensors may observe the same or diverse phenomena <cit.>. When an HSN is used to measure a common phenomenon or parameter, the sensors in different sets are characterized by their accuracies or noise levels. A commonly used HSN comprising of one set of high-precision expensive sensors (low noise) and another set of low-precision inexpensive sensors (high noise) is shown in Fig. <ref> <cit.>. For example, high-precision sensors can be weather stations that are sparsely distributed over a geographical region in weather monitoring tasks. Their sparsity is because of their high cost of construction, the unavailability of suitable locations for their installation, or their high power consumption. The estimation inaccuracy due to the low spatial density of these high-precision sensors is compensated by using low-powered inexpensive, low-precision sensors which can be deployed in bulk. This can help improve the sampling resolution and hence the quality of the inference task while still respecting the overall deployment budget. Even in a homogeneous sensor network, selecting a subset of sensors to reduce the operational cost is an NP-hard problem <cit.>. Various methods have been proposed to find an approximate solution to this problem, such as convex relaxation <cit.>, different heuristics <cit.>, and a greedy selection procedure <cit.>. While convex relaxation methods bound the error in estimation in terms of the optimal solution, they are computationally expensive and do not scale well with the network size. Heuristics are shown to perform well in practice, although no theoretical guarantee is available for their performance. The greedy algorithm, on the other hand, is computationally faster than the convex relaxation approach while achieving a worst-case performance of ( 1 - 1/e ) ≈ 63% of the optimal performance when optimizing a submodular cost metric <cit.>, where e is Euler's constant. Unlike homogeneous networks, only a few results are available for the sensor selection problem in HSNs. In <cit.>, the authors consider HSNs where each sensor is specified by its operating cost. For such networks, a modified greedy algorithm was used that maximizes the performance while keeping the total operating cost of the network within a specified budget. These methods are shown to have similar performance guarantees approaching the limit ( 1 - 1/e ). Though the methods may have acceptable performance guarantees, they cannot be used when a minimum number of sensors to choose from each subset is specified. Zhang et al. <cit.> proposed a cross-entropy-based stochastic optimization method for the HSN problem where they considered minimizing the total operational cost of the selected sensors while achieving a specified mean squared error (MSE). In this approach, there is an undesirable possibility that the algorithm may choose all the available sensors for achieving a given MSE. Kirchner et al. <cit.> consider heterogeneous sensor selection problem in vehicle tracking applications. Their problem setup involves a set of sensors, each obtaining multiple measurements of a common phenomenon. 
The task is to find the optimal number of measurements each sensor needs to communicate with a central data processing unit, with a budget on the maximum bandwidth allocated to the sensor. Their approach is indeed to independently and greedily select measurements from each sensor without considering the correlation among the measurements of the different sensors. Moreover, the number of sensors themselves is not optimized in <cit.>. In a related field of work, the problem of sensor placement in heterogeneous sensor networks has been considered in <cit.>. In <cit.>, high or low-power sensor placement is considered at each available location based on the feedback of successful transmission. On the other hand, <cit.> considers sensor placement in IoT networks, optimizing the energy harvesting capabilities of the diverse sensors according to their channel gains and the required accuracy of the estimation task. These methods fundamentally differ from the sensor selection task since the type of sensor at each location is not fixed and has to be selected by the algorithm. Moreover, the number of sensors of each type that can be used is also a design parameter in <cit.>. All the above-mentioned methods for sensor selection in HSNs suffer from the drawback that they do not work when a cardinality constraint is placed on the sensor selection problem. A cardinality constraint ensures that the number of sensors selected from each subset lies within a given bound. This helps in making sure that a minimum number of sensors are selected for reliable estimation of the parameter vector while also ensuring the total number of sensors lies within a given budget. In this paper, we consider an HSN comprising two sets of sensors and propose an algorithm to select the best set of sensors when cardinality constraints specify the number of sensors to be selected from each subset. Each sensor measures a spatial observation induced by a set of deterministic unknown parameters, which need to be estimated. Two different noise distributions characterize the two sets of sensors. In such a scenario, finding the optimal subset of sensors from each set is necessary to keep the network's operational cost within a budget while achieving the best possible performance. Although, for mathematical simplicity, we consider only two subsets constituting the network, our proposed algorithm works for multiple subsets as well. The main contributions of this paper are as follows. * This work introduces a greedy algorithm for selecting sensors in HSNs under cardinality constraints. The constraints specify the number of sensors to select from each subset. * Theoretically, the proposed algorithm is proved to have a worst-case performance of (1 - 1/e) of the optimal when the number of high-precision sensors to be chosen is less than a fraction (10% or lower) of the low-precision ones. * A submodular performance metric called weighted frame cost (WFC) is proposed for our simulation experiments. It exploits both the correlation structure given by the measurement model and the noise characteristics of the sensors. The WFC is based on the frame potential (FP) measure for the homogeneous case introduced in <cit.> and adapts it for HSNs. * Extensive experiments are conducted to evaluate the proposed algorithm for both linear as well as non-linear measurement models. We perform small-scale experiments to compare the performance of the proposed method with the optimal solution obtained through an exhaustive search. 
We then extend the results to large-scale sensor selection problems and compare the results with other greedy-based and random approaches. Our experiments consider both additive Gaussian noise, as well as the effect of quantization as distortions corrupting the measurements. Finally, as an application of the proposed algorithm in non-linear measurement models, we consider the problem of selecting sensors for estimating the direction-of-arrival of plane-waves emitted by multiple sources. * The small-scale experiments show that the proposed algorithm achieves near-optimal solutions (over 99% of the optimal performance), which is better than the theoretical performance guarantees obtained. This is similar to the performance of existing greedy methods for sensor selection in homogeneous networks <cit.>. * In all the experiments conducted, the proposed method consistently achieves a mean squared error which is 4-10 dB less than the existing methods. The rest of the paper is organized as follows. In Section <ref>, we define the problem. The greedy algorithm for sensor selection in homogeneous sensor networks is presented in Section <ref>. The proposed algorithm and the theoretical guarantees for its performance are presented in Section <ref>. Various experiments and simulation results are presented in Section <ref>. Finally, our conclusions are presented in Section <ref>. Throughout the paper, vectors (matrices) are represented as bold-faced lower-case (upper-case) letters such as 𝐚 (𝐀). We use 𝐱∈ℝ^N (𝐱∈ℂ^N) to denote a real (complex) N-dimensional vector. The norm of a vector 𝐱 belonging to a normed space is denoted as 𝐱, and the inner product between two vectors 𝐱 and 𝐲 is denoted as ⟨𝐱, 𝐲⟩. In particular, 𝐱_2 denotes the Euclidean norm of 𝐱. Sets are represented by upper-case calligraphic letters like 𝒮, with |𝒮| denoting its cardinality and 𝒫( 𝒮) standing for the power set of 𝒮. A set function 𝒞: 𝒫( 𝒮) →ℝ is defined as a function that acts on any subset 𝒯⊆𝒮, producing a real-valued output. For a positive integer N, we use the following representation 𝒩 ={ 1, 2, …, N }. § PROBLEM DEFINITION Consider the problem of recovering an unknown vector 𝐱∈ℂ^K from its noisy measurements obtained by N ≥ K sensors. The measurements are modeled as 𝐲 = 𝒜(𝐱) +η , where 𝐲∈ℂ^Nis the measurement vector with its i-th element representing observation from the i-th sensor. The measurement operator 𝒜: ℂ^K→ℂ^N is a function of the locations of the sensors and η∈ℂ^N is an additive noise vector. The measurement operator could be linear or non-linear. While 𝒜(·) is represented as a measurement matrix 𝐀∈ℂ^N × K in the linear case, yielding 𝐲 = 𝐀𝐱 +η, more general non-linear models also come under our purview. For example, in the direction-of-arrival (DoA) estimation problem, 𝒜(𝐱) is given as 𝐀(θ) α where θ and α denote angles and amplitudes of sources. In this case, the unknown parameters to be estimated are 𝐱 = [θ^T α^T]^T. Here, since 𝐀(θ) is a function of the parameters θ, the observations 𝐲 are not linear functions of θ. In our work, we consider a heterogeneous sensor network comprising two sets of sensors 𝒮_1⊆𝒩 and 𝒮_2⊆𝒩 where ( 𝒮_1 ∪𝒮_2 ) = 𝒩 and ( 𝒮_1 ∩𝒮_2 ) = ∅. We assume that the sensors in 𝒮_1 have higher precision than those in 𝒮_2. To model the precision, we assume that the elements of the noise vector η corresponding to the set 𝒮_1 have standard deviation σ_l, and those from the set 𝒮_2 have standard deviation σ_h, where σ_h >σ_l. 
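For concreteness, the heterogeneous measurement model above can be simulated directly. The sketch below is a real-valued toy instance: the Gaussian 𝐀, the sizes, and the noise levels are illustrative choices, not the settings used in the experiments reported later.

```python
import numpy as np

rng = np.random.default_rng(0)

def heterogeneous_measurements(A, x, idx_s1, idx_s2, sigma_l, sigma_h):
    """y = A x + eta, with noise std sigma_l on the high-precision set S1
    and sigma_h (> sigma_l) on the low-precision set S2."""
    sigma = np.empty(A.shape[0])
    sigma[idx_s1] = sigma_l
    sigma[idx_s2] = sigma_h
    return A @ x + sigma * rng.standard_normal(A.shape[0]), sigma

# Toy instance; sizes and the Gaussian measurement matrix are purely illustrative
N, K = 20, 5
A = rng.standard_normal((N, K))
x = rng.standard_normal(K)
idx_s1, idx_s2 = np.arange(5), np.arange(5, N)
y, sigma = heterogeneous_measurements(A, x, idx_s1, idx_s2, sigma_l=0.01, sigma_h=0.1)
```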
We further assume that elements of η for two different sensors are independent. Hence, the covariance matrix of the noise vector is given as Σ = diag( σ_1^2, σ_2^2, …, σ_N^2 ), with σ_i^2 = σ_l^2 if i ∈𝒮_1 and σ_i^2 = σ_h^2 if i ∈𝒮_2. Though the sensors in 𝒮_1 are assumed to be of high quality, their dedicated infrastructure may necessitate huge operational costs, for example, power consumption. On the other hand, with the availability of state-of-the-art low-power wireless transmitters and receivers, it is cost-effective to deploy a large number of cheap sensors in the set 𝒮_2. However, the communication costs may demand that only a small fraction of them can simultaneously be active. To balance between the cost and estimation accuracy, one can choose fewer sensors from the set 𝒮_1 and enhance reconstruction accuracy by choosing sensors from 𝒮_2 without significantly increasing the cost. We mathematically formulate the problem as follows. Our objective is to select M_1 high-precision sensors from 𝒮_1, and M_2 low-precision sensors from 𝒮_2 such that a given accuracy metric 𝒞 is maximized. This is captured in the following optimization problem. 𝒯_1 ⊆𝒮_1, 𝒯_2 ⊆𝒮_2max 𝒞( 𝒯_1 ∪𝒯_2 ), subject to | 𝒯_1 | = M_1, | 𝒯_2 | = M_2. P1 Notice that (<ref>) is a combinatorial optimization problem for which no polynomial time algorithms are known to exist. Particularly, finding the optimal solution requires a computationally expensive search over all possible subsets of 𝒮_1 and 𝒮_2, respecting the cardinality constraints. Instead, we propose an algorithmic solution to (<ref>) using a greedy approach and characterize its worst-case performance theoretically. In order to lay the foundations for our algorithm, we first present the well-known greedy selection (GS) algorithm for homogeneous sensor networks  <cit.>. § GREEDY SELECTION FOR HOMOGENEOUS SENSORS The problem of sensor selection has been well explored in the case of homogeneous sensor networks composed of sensors with similar noise levels <cit.>. Consider the problem of selecting M out of N sensors to maximize a metric 𝒞. The corresponding optimization problem is formulated as 𝒯⊆𝒮max 𝒞( 𝒯), subject to | 𝒯| ≤ M. P2 Here 𝒮 denotes the set from which the required subset 𝒯 of size at most M needs to be selected. This is an NP-hard combinatorial optimization problem. An iterative greedy selection algorithm to solve (<ref>) was proposed by Nemhauser et al. <cit.>. This method starts with an empty set 𝒯. At each iteration, the algorithm chooses a new measurement/sensor from the remaining ones, which maximizes the performance metric 𝒞 (or alternatively minimizes estimation error). This step is repeated M-times to select those many measurements (See Algorithm <ref>). Nemhauser et al. <cit.> showed that when 𝒞 is a normalized monotone non-decreasing submodular function, then the worst case error in the performance of the solution given by the GS algorithm is bounded as follows, 𝒞(𝒯^∗)/𝒞( 𝒯_OPT) ≥( 1 - ( M-1/M)^^M) ≥( 1 - 1/e), where 𝒯^∗ is the solution found by the greedy algorithm and 𝒯_OPT is the optimal solution of (<ref>). Let 𝒫( 𝒩) denote the power set of 𝒩. Then a set function 𝒞 : 𝒫( 𝒩) →ℝ is said to be monotone increasing if 𝒞(𝒮) ≥𝒞(𝒯), ∀𝒯⊆𝒮⊆𝒩, and is said to be normalized if 𝒞(∅) = 0, where ∅ denotes the empty set. A submodular function is a class of set functions that follows the property of diminishing returns, mathematically stated as 𝒞( 𝒮∪{j}) - 𝒞( 𝒮) ≤𝒞( 𝒯∪{j}) - 𝒞( 𝒯), ∀𝒯⊆𝒮. 
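For reference, the GS procedure described above amounts to the following short sketch; the set function cost stands in for 𝒞, and the implementation details are ours rather than the paper's.

```python
def greedy_select(candidates, M, cost):
    """Vanilla greedy selection: repeatedly add the element with the largest marginal gain."""
    selected, remaining = set(), set(candidates)
    for _ in range(M):
        best = max(remaining, key=lambda j: cost(selected | {j}))
        selected.add(best)
        remaining.remove(best)
    return selected
```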
The performance guarantee stated in (<ref>) theoretically bounds the GS algorithm's worst-case performance to ( 1 - 1/e ) ≈ 63% of the optimal performance. In the next section, we propose a modified version of the greedy algorithm to make it suitable for selecting sensors in HSNs, as described in (<ref>), and analyze its performance. § A JOINT GREEDY SELECTION ALGORITHM FOR HETEROGENEOUS SENSOR NETWORKS The problem of sensor selection in HSNs, as stated in (<ref>), has not been explored much in the literature, to the best of our knowledge. A straightforward method is to choose M_i sensors from 𝒮_i for i=1,2 independently using the homogeneous greedy selection algorithm. We call this the independent greedy selection (IGS) approach. Another technique respecting the constraints is to independently and randomly select (IRS) the required number of sensors from each set. Last but not least, an exhaustive search can find the optimal answer but has prohibitive computational complexity. For comparison, let 𝒯_OPT be an optimal set identified by exhaustive search. While it is clear that the first two approaches quickly identify feasible solutions, it will be unraveled later that they fail to yield good performance. Therefore, we turn to a joint greedy selection (JGS) approach. §.§ Proposed Joint Greedy Selection Algorithm The JGS algorithm simultaneously considers both 𝒮_1 and 𝒮_2 to circumvent the shortcomings of the independent selection approaches discussed. The algorithm adds one sample at a time, starting from two empty sets. Specifically, we start with 𝒯_i = ∅, i = 1, 2. Let 𝒯 = 𝒯_1 ∪𝒯_2 denote the set of selected sensors. Then for |𝒯_1|<M_1 and |𝒯_2|<M_2, at every iteration, we select a new sample i^∗∉𝒯 such that the cost 𝒞(𝒯∪{ i^* }) is maximized. After the selection, the sets 𝒯_1 and 𝒯_2 are updated depending on whether the sample i^* belongs to 𝒮_1 or 𝒮_2. The iterations continue until one of the sets is exhausted, that is, until either | 𝒯_1 | = M_1 or | 𝒯_2 | = M_2 condition is satisfied. After this point, the search is restricted to the set where | T_i | < M_i. The iterations continue until we select the desired number of sensors. These steps are summarized in Algorithm <ref>. It is to be noted that in JGS, there is a change in the search space when the algorithm exhausts one of the subsets, that is, when either | 𝒯_1 | = M_1 or | 𝒯_2 | = M_2 is satisfied. In the first few iterations, JGS searches for the next candidate sensor from the unselected ones. However, once one of the sets is exhausted, the algorithm can only search over the remaining subset. This is the step where the algorithm essentially differs from the vanilla GS algorithm. It is unknown beforehand at which iteration this switch occurs, and as we discuss later, the iteration at which the switch takes place plays a key role in the performance analysis of the JGS algorithm. The proposed algorithm can be readily extended to sensor networks with more than two subsets having different noise characteristics. In this case, multiple switches will occur in the algorithm's search space as each subset gets exhausted. In this work, we consider only two subsets of sensors 𝒮_1 and 𝒮_2 in Algorithm <ref> as it simplifies assessing performance guarantees of the algorithm as discussed next. §.§ Performance Guarantee The proposed JGS algorithm may generate a suboptimal solution for (<ref>). Hence, to evaluate the algorithm, it is essential to evaluate its worst-case performance. 
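Before turning to this analysis, the JGS selection rule described above can be summarized in a short sketch of our own; cost again stands in for 𝒞, S1 and S2 are the index sets of high- and low-precision sensors, and it is implicitly assumed that M_1 ≤ |𝒮_1| and M_2 ≤ |𝒮_2|.

```python
def joint_greedy_select(S1, S2, M1, M2, cost):
    """JGS sketch: pick greedily from both sets until one quota is met,
    then restrict the search to the remaining set."""
    S1, S2 = set(S1), set(S2)
    T1, T2 = set(), set()
    while len(T1) < M1 or len(T2) < M2:
        pool = (S1 - T1 if len(T1) < M1 else set()) | (S2 - T2 if len(T2) < M2 else set())
        best = max(pool, key=lambda j: cost(T1 | T2 | {j}))
        (T1 if best in S1 else T2).add(best)
    return T1 | T2
```

Once one quota is met, the candidate pool collapses to the other set, which is exactly the switch at iteration m_s analyzed below.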
In this section, we prove a theoretical bound for the maximum deviation of the achieved performance from the optimal performance. For the rest of this section, we shall assume that the cost function 𝒞 being maximized by the algorithm is a normalized, monotone, non-decreasing submodular function. Let the algorithm reach the maximum number of elements M_1 of set 𝒮_1 or M_2 of set 𝒮_2 at the m_s-th iteration. This means that at the m_s-th iteration, the algorithm exhausts one of the sets, and from the ( m_s+1 )-th iteration, it can choose its next measurement only from the other set. The value of m_s is critical in evaluating the worst-case performance. This aspect is captured in our main result, summarized in the following theorem, which gives a bound on the worst-case performance of the proposed JGS algorithm in terms of the cost achieved by an optimal set 𝒯_OPT for (<ref>). Consider the optimization problem in (<ref>) where 𝒞 is assumed to be a normalized monotone, non-decreasing, submodular function. Let the JGS algorithm at the m_s-th iteration select M_1 elements from the set 𝒮_1. Then the cost of the final set 𝒯^∗ selected by the algorithm is bounded as 𝒞( 𝒯^∗) ≥ [ 1 - (M_1/M_2) ∑_j=0^(M_1 + M_2 - m_s - 1) ( 1 - (M_1 + 1)/M_2 )^j - ( 1 - (M_1 + 1)/M_2 )^(M_1 + M_2 - m_s) ( 1 - 1/(M_1 + M_2) )^m_s ] 𝒞( 𝒯_OPT). If, however, the algorithm reaches the maximum number of elements M_2 to choose from the set 𝒮_2 at the m_s-th iteration, then the final cost obtained is bounded below as 𝒞( 𝒯^∗) ≥ [ 1 - (M_2/M_1) ∑_j=0^(M_1 + M_2 - m_s - 1) ( 1 - (M_2 + 1)/M_1 )^j - ( 1 - (M_2 + 1)/M_1 )^(M_1 + M_2 - m_s) ( 1 - 1/(M_1 + M_2) )^m_s ] 𝒞( 𝒯_OPT). Proof of the theorem is deferred to Appendix <ref>. The bounds in Theorem <ref> depend on M_1, M_2, and m_s. Thus, unlike the GS algorithm for homogeneous sensor networks, we cannot give a single number that specifies the worst-case error. Next, we discuss how the worst-case error varies with M_1, M_2, and m_s. §.§ Analysis of Performance Guarantee We assess the theoretical bounds in (<ref>) and (<ref>) by fixing m_s and varying M_1 and M_2. For the sake of discussion, we assume that M_1 is reached first by the algorithm. Otherwise, M_1 and M_2 get interchanged in the discussion and in the following results. With M_1 samples selected first from set 𝒮_1, we have that M_1 ≤ m_s ≤ M_1 + M_2 - 1. We analyze the algorithm for the extreme cases, m_s = M_1 and m_s = M_1 + M_2 - 1, to get a better insight into the results. When m_s = M_1, we have from (<ref>), 𝒞( 𝒯^∗) ≥ [ 1 - (M_1/M_2) ∑_j=0^(M_2-1) ( 1 - (M_1 + 1)/M_2 )^j - ( 1 - (M_1 + 1)/M_2 )^M_2 ( 1 - 1/(M_1 + M_2) )^M_1 ] 𝒞( 𝒯_OPT). For m_s = M_1, the algorithm exhausts all possible M_1 sensors from 𝒮_1 before choosing any sensor from 𝒮_2. This implies that the sensors in 𝒮_2 do not have any effect on the selection process from 𝒮_1, and vice-versa, which is highly unlikely. In this case, the algorithm reduces to the independent greedy selection (IGS) method. Numerical experiments show that the worst-case performance guarantees for this case are low, with the guaranteed fraction 𝒞( 𝒯^∗)/𝒞( 𝒯_OPT) being only about 12.5%, 20%, and 32% for M_1 = 30, M_2 = 300; M_1 = 20, M_2 = 200; and M_1 = 10, M_2 = 100, respectively. At the other extreme, when m_s = M_1 + M_2 - 1, we get from (<ref>), 𝒞( 𝒯^∗) ≥ [ 1 - M_1/M_2 - ( 1 - (M_1 + 1)/M_2 )( 1 - 1/(M_1 + M_2) )^(M_1+M_2-1) ] 𝒞( 𝒯_OPT). This occurs when the algorithm exhausts 𝒮_1 only at the last but one iteration, implying that the measurements in 𝒮_1 and 𝒮_2 are highly correlated.
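The first bound can be evaluated numerically with a small helper of the following form (our own sketch; the symmetric case is obtained by swapping M_1 and M_2), which is convenient for reproducing sweeps over M_1, M_2, and m_s such as those discussed next.

```python
def jgs_worst_case_bound(M1, M2, m_s):
    """Lower bound of Theorem 1 (case where S1 is exhausted first), as a fraction of C(T_OPT)."""
    beta = 1.0 - (M1 + 1) / M2          # post-switch contraction factor
    k = M1 + M2 - m_s                    # iterations remaining after the switch
    geom = sum(beta ** j for j in range(k))
    return 1.0 - (M1 / M2) * geom - (beta ** k) * (1.0 - 1.0 / (M1 + M2)) ** m_s
```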
Figure <ref> shows the performance guarantees when M_1 and M_2 vary, with m_s = M_1 + M_2 - 1. We observe that the worst case performance asymptotically approaches the value ( 1 - 1/e ) ≈ 63% as M_1 gets progressively smaller than M_2, with M_1/M_2→ 0. Specifically, we observe that for M_1/M_2 = O( 10^-2) the worst case performance reaches ≈ 63% of the optimal performance. For relatively larger values of M_1 compared to M_2, such as M_1/M_2≈ 0.1, the selected set has a performance which is at worst 55% of the optimal performance. Figure <ref> shows the variation of the performance guarantee when M_1 and M_2 are kept constant and m_s is varied. As the figure shows, the worst-case performance error for lower values of m_s is not very encouraging. This is due to the approximations we make in deriving the bound, and may be improved if these approximations can be improved. However, lower values of m_s are less likely since this scenario implies very less correlation among the measurements in 𝒮_1 and 𝒮_2. It is observed from Fig. <ref> that the performance remains flat for lower values of m_s (for m_s = M_1 till about 50-60% of M_1 + M_2), and then the performance increases as m_s increases beyond this value until it reaches (1 - 1/e) of the optimal performance. These results imply that the algorithm performs better when the switch in the search space occurs later in the algorithm. From Fig. <ref> and Fig. <ref>, we conclude that the theoretical performance guarantee obtained by the JGS algorithm is similar to the standard greedy method when m_s takes higher values. Higher values of m_s mean that JGS utilizes the correlation information between the two subsets for more number of steps. The above observations prompt two questions. Firstly, theoretical guarantees suggest that higher values of m_s are preferable. However, m_s is dependent on the measurements and the path taken by the algorithm and can not be controlled. Hence it is necessary to understand which values of m_s are more probable for the algorithm in practice. Secondly, our analysis till now is based on the theoretical performance bound obtained in Theorem <ref>. We need to explore whether the performance of the algorithm in practice is indeed as low as suggested by Fig. <ref> for lower values of m_s. The answer to the second question again depends on the achievable values of m_s. These two questions are explored and answered in section <ref>. The simulation experiments performed show that lower values of m_s (m_s = M_1) have a very low probability of occurrence (less than 1%). Moreover, the average performance of JGS is about 99% of the optimal value, suggesting that even for lower values of m_s, practically, the algorithm gives a near-optimal performance. These results are presented in the next section. § EXPERIMENTS AND RESULTS This section presents a detailed analysis of the proposed algorithm through simulations for linear and nonlinear measurement models. The results are compared with different methods such as greedy selection, random selection, independent greedy selection, and independent random selection (cf. Table <ref> for the list of methods and see Section <ref> for details). We apply an exhaustive search to find an optimal solution for a small-scale problem. The methods are compared in terms of normalized mean-squared error measured as MSE = 10 log_10(𝐱 - 𝐱̂^2 /𝐱^2), where 𝐱̂ is an estimate of 𝐱. Before presenting the results, we first introduce the performance metric used for these experiments. 
§.§ Weighted Frame Potential (WFP) In estimation tasks, MSE is a commonly employed performance metric. In a subset estimation problem, MSE is not used as it is not a submodular function, and instead, its proxies are considered. Since the Cramér-Rao lower bound (CRLB) gives a lower bound for MSE, different submodular functions of the CRLB matrix (defined as the inverse of the Fisher information matrix (FIM) <cit.>), such as trace(CRLB), - log (CRLB), and the largest eigenvalue λ_max(CRLB) are commonly used as surrogates for MSE <cit.>. Cost functions based on the CRLB matrix work well as long as the number of measurements or cardinality of the subsets is greater than the number of unknowns K. Otherwise, the FIM matrix may become singular, and CRLB is undefined. To elaborate, consider the linear measurement model in (<ref>) where η consists of independent and identically distributed Gaussian random variables with zero mean and unit variance. Let 𝐲_𝒮 denote a subset of measurements of 𝐲, indexed by the set 𝒮. Then FIM for estimation of 𝐱 from 𝐲_𝒮 is given as 𝐀_𝒮^T𝐀_𝒮 where 𝐀_𝒮∈ℝ^|𝒮|× K is a matrix consisting of rows of 𝐀 that are indexed by the set 𝒮. The matrix is singular if |𝒮| < K, and hence, CRLB can not be defined. The singularity of the FIM results in an arbitrary selection of measurements in the first K-1 iterations of any greedy algorithm. For non-linear measurement models, when the number of measurements is less than K, the non-singularity of the FIM depends on the measurement model 𝒜 and cannot be determined in the general case. To consistently use a single performance metric for both linear and non-linear measurement models, a non-CRLB-based metric is required. Frame potential (FP) is another performance metric used for subset selection which is based on the correlation between the measurements <cit.>, and it does not suffer from the singularity issue. Since correlated measurements add less information about the unknown vector to be estimated, FP-based cost can be minimized for subset selection. Submodular cost functions based on FP are defined for linear and non-linear homogeneous measurements in <cit.> and <cit.>, respectively, where in the latter case, it is assumed that 𝒜 is a differentiable function of 𝐱. We extend these definitions for heterogeneous measurements and introduce weighted frame potential (WFP) as a performance metric. In a greedy selection step, while selecting a new sensor (or measurement) from the remaining ones, any measurement which is highly correlated with the selected sensors or has low SNR should be given less consideration. Following this logic, we define WFP as WFP(𝒮) ∑_i,j ∈𝒮 w_i w_j | < ∇_𝐱 y_i, ∇_𝐱 y_j > |^2/∇_𝐱 y_i _2^2 ∇_𝐱 y_i _2^2, where ∇_𝐱 y_i denotes the gradient of the i-th observation with respect to the parameter vector 𝐱, and w_i ϕ( σ_i ) are non-negative weights dependent on the noise variance σ_i corrupting the i-th measurement. The definition is an extension of the modified frame potential introduced in <cit.> and applies to non-linear and linear heterogeneous measurements. In the case of a linear measurement system 𝐀, the WFP metric reduces to the simpler form, WFP(𝒮) ∑_i,j ∈𝒮 w_i w_j | < 𝐚_i, 𝐚_j > |^2/𝐚_i _2^2 𝐚_j _2^2, where 𝐚_i denotes the i-th row of 𝐀. The weighting function ϕ( σ_i ) needs to be chosen such that the measurements corrupted by low noise are preferred for selection. A natural choice is ϕ( σ_i ) = 1 / σ_i. This is also the weighting function used in maximum likelihood estimation techniques <cit.>. 
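For the linear model, the WFP of a candidate subset can be computed directly from the rows of 𝐀. The sketch below is ours and takes the weights w_i = ϕ(σ_i) as given; the choice of ϕ is discussed next.

```python
import numpy as np

def weighted_frame_potential(A, weights, subset):
    """WFP of a subset of rows of A:
    sum_{i,j in subset} w_i w_j |<a_i, a_j>|^2 / (||a_i||^2 ||a_j||^2)."""
    idx = list(subset)
    rows = A[idx]
    sq_norms = np.sum(np.abs(rows) ** 2, axis=1)
    G = np.abs(rows @ rows.conj().T) ** 2 / np.outer(sq_norms, sq_norms)
    w = np.asarray(weights)[idx]
    return float(w @ G @ w)
```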
However, when σ_i ≈ 0, that is when the noise affecting a measurement is negligible, this makes the corresponding terms in WFP extremely large. Since our aim is to minimize the correlation among samples, having ϕ( σ_i ) = 1 / σ_i essentially ensures that the measurements with very low noise are never selected. To address this issue, we choose ϕ( σ_i ): ℝ^+ → [0,1] to be a function that maps the noise variance to a value within [0,1], such that more weight is given to measurements with low noise. Several such functions have been used, such as 1/1 + σ_i, ( 1/1 + exp( - ( σ_i - σ) )), and (tanh( σ_i - σ)/2), with σ representing the mean of the noise variances of all the measurements. It is observed empirically that the sigmoid function, ( 1/1 + exp( - ( σ_i - σ) )), gives the best performance among all the ϕ( σ_i ) used. Hence, we select this as the weighting function for our experiments. In literature <cit.>, an FP-based submodular performance metric has been defined that, when maximized, finds an approximate solution for the problem (<ref>). Similarly, we define a submodular performance metric called weighted frame cost (WFC) based on the WFP, WFC(𝒮) WFP (𝒩) - WFP( 𝒩\𝒮). Here 𝒩\𝒮 denotes the set of measurements which are not in 𝒮. In Appendix <ref>, we show that WFC is a normalized, monotone non-decreasing submodular function. The cost, WFC, is used as the performance metric 𝒞 in problem (<ref>) for the simulations in the following sections. Since the problem is to maximize the cost WFC(𝒮), this results is minimization of WFP(𝒩\𝒮). Since our objective is to minimize the frame potential of the selected set, the output of the algorithm is the set 𝒩\𝒯^∗, if 𝒯^∗ is the solution of the maximization problem (<ref>). Next, we present simulation results for linear and non-linear measurement models. §.§ Linear Measurement Model We first present simulation results when the measurement model is linear in nature as in (<ref>). In the experiments, 𝐀 is a subsampled discrete cosine transform (DCT) matrix obtained by randomly selecting K columns from a N × N DCT matrix. The entries of the parameter vector 𝐱∈ℝ^K are independent and identically distributed (i.i.d.) zero-mean Gaussian random variables with variance 25. For the additive noise model in (<ref>), the two noise variances are specified in terms of the SNRs of their corresponding measurements, SNR_H and SNR_L, where SNR_H>SNR_L. The SNRs are defined as follows. SNR_H1/N_1 σ_l^2∑_i ∈𝒮_1( [𝒜(𝐱)]_i ) ^2, and SNR_L1/N_2 σ_h^2∑_i ∈𝒮_2( [𝒜(𝐱)]_i )^2, where [𝒜(𝐱)]_i is the noise-free measurement at the i-th sensor. For each set of SNRs, MSEs are averaged over 1000 Monte-Carlo iterations. For each Monte Carlo iteration, 𝐱, 𝐀, SNR_H, and SNR_L are kept fixed, while the noise covariance matrix Σ is changed by randomly choosing the locations of the N_1 high-precision and N_2 low-precision sensors out of the total N possible locations. The first experiment is designed to answer the questions raised at the end of Section IV, specifically, to assess the values of m_s attained by JGS in practice. To this end, we consider a small-scale sensor network with parameters given as M_1 = 1, M_2 = 10, N_1 = 5, N_2 = 15, and K = 5. The SNR of the high-precision samples, SNR_H, is set to 40 dB, while the SNR of the low-precision samples, SNR_L, is varied from 0 to 30 dB. The results are averaged over 1000 Monte Carlo iterations. As can be observed from Fig. 
<ref>(a), m_s = M_1 is very unlikely, with probability less than 1% for all specified values of SNR_L from 0 dB to 30 dB. When the difference between SNR_H and SNR_L is high, it is more probable that m_s achieves higher values, even M_1+M_2-1, than when this difference is low. When SNR_H≈SNR_L, the two subsets of sensors approach homogeneity, and hence the selection of sensors from either set is equally likely. This results in smaller values of m_s. In all the cases, m_s is most probable after selecting 4-9 sensors. In order to compare the performance of JGS in terms of the optimal solution, we perform an exhaustive search over all possible sensor subsets to obtain the optimal performance. We observe that, on an average, the algorithm reaches more than 98.8% of the optimal performance (See Fig. <ref>(b)). This is much higher than the theoretical performance lower bound of 63%. It should be kept in mind that the worst-case performance of 63% as given by Theorem <ref> is only a lower bound. It guarantees that the performance cannot go below this value. However, the performance, as with the standard GS algorithm <cit.>, can be much better in practice, as is seen in Fig. <ref>(b). The average performance of 98.8% also indicates that even for lower values of m_s, the performance obtained by JGS is much higher than the lower bounds obtained in Theorem <ref>. Next, for a small-scale problem, the performances of the different algorithms mentioned in Table <ref> are compared with the optimal solution 𝒯_OPT obtained through an exhaustive search. In this problem, the goal is to select M_1 = 3 sensors from N_1 = 5 high-precision sensors, and M_2 = 7 sensors from N_2 = 10 low-precision sensors for K = 5. Since the performance of any method depends on the ratio SNR_H/SNR_L, we fix SNR_H = 40 dB and vary SNR_L. The MSEs of different methods are shown in Fig. <ref>. We observe that compared to IGS and the random selection methods (RS and IRS), GS and the proposed JGS method have 5-8 dB lower error. Among GS and JGS methods, GS results in 0.2-0.5 dB lower error. This is intuitively reasonable since the heterogeneous sensor selection problem is more constrained than the homogeneous sensor selection problem. Specifically, by applying the GS algorithm, we can select M_1+M_2 measurements from N_1+N_2 measurements but can not ensure that out of the selected sensors, M_1 are selected from one set and the rest from the other. As the difference between SNR_H and SNR_L decreases, the difference between the performances of GS and JGS reduces as the two sets become more homogeneous in nature. The optimal solution gives an MSE about one dB less than that JGS achieves. Thus, the proposed algorithm achieves near-optimal performance. While simulating the small-scale problem, we also take the opportunity to verify the weighted frame cost achieved by JGS, IGS, and IRS as compared to the optimal WFC. Figure <ref> shows the WFCs for SNR_H = 40 dB and SNR_L = -5 dB, 0 dB, 30 dB, and 35 dB. We observe that the solution sets obtained by the JGS algorithm achieve performances equivalent to the optimal. In comparison, IGS and IRS produce lower WFCs. This experiment shows that, in practice, the proposed JGS algorithm achieves near-optimal performance. Figure <ref> and Fig. <ref> show that MSE and WFC are inversely correlated with each other, in the sense that they follow inverse trends. A lower value of WFC indicates a higher MSE. 
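For an instance of this size, the exhaustive-search reference is easy to reproduce; a brute-force sketch (ours, not the paper's code) is given below.

```python
from itertools import combinations

def exhaustive_search(S1, S2, M1, M2, cost):
    """Optimal subset by brute force; only feasible for small-scale instances such as the one above."""
    best_set, best_cost = None, float("-inf")
    for T1 in combinations(S1, M1):
        for T2 in combinations(S2, M2):
            T = set(T1) | set(T2)
            c = cost(T)
            if c > best_cost:
                best_set, best_cost = T, c
    return best_set, best_cost
```

With C(5,3)·C(10,7) = 1200 candidate subsets this search is trivial here, but it is clearly out of reach for the large-scale settings considered next.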
In the next experiment, we focused on a large-scale problem with M_1 = 10, N_1 = 50, M_2 = 100, N_2 = 150, and K = 30. Due to the size of the problem, an exhaustive search can not be applied. The MSEs for the rest of the methods are compared in Fig. <ref>. We observe that the JGS approach is giving 4-8 dB lower error than the IGS, RS, and IRS methods for different noise levels. Next, we consider the large-scale problem as earlier, but instead of adding two levels of noise, we consider quantizing the measurements with a coarse and a fine quantizer. The simulation settings consider the practical scenario where high-precision and low-precision sensors may also be characterized by the quantization levels of their analog-to-digital converters (ADCs). Although quantization error is often modeled as random additive noise, the operation is non-linear. In this experiment, the high-precision sensors in the set 𝒮_1 are quantized with Q_h = 16 bits. For the measurements in 𝒮_2, a quantizer with Q_l bits is used where Q_l varies between 2 to 12. Figure <ref> shows the results of this experiment of estimating the parameter vector 𝐱 from its quantized linear measurements. As in the previous cases, JGS gives lower MSE than random and independent greedy methods by 3-8 dB. In the previous experiments, the results are obtained for a fixed 𝐱, while the noise covariance matrix Σ and the measurement noise at each Monte Carlo trial is changed. In the following experiment, we compare the algorithms for different 𝐱 realizations where Σ is fixed. In this case, the measurements are corrupted by additive zero-mean Gaussian noise. The noise variances for 𝒮_1 and 𝒮_2 are chosen as σ_h = 10^-4, and σ_l is varied from 3 × 10^-4 to 3. The average MSE is calculated over 1000 Monte Carlo trials where in each trial, entries of 𝐱 are chosen from an i.i.d. uniform distribution selected from the range [-1,1]. This ensures that the noise-free measurements are bounded, thus making the SNR values of the two sets finite. Comparison of average MSEs for different methods are presented in Fig. <ref> for both small- and large-scale problems with the values of M_1, M_2, N_1, N_2, and K as used in the previous experiments. It is observed that JGS performs better than IGS, RS, and IRS by 4-9 dB, while GS outperforms JGS by about 0.5-1.5 dB. These results show that the set of sensors chosen by the proposed JGS algorithm can be used to estimate the underlying parameter vector 𝐱 coming from a given distribution as long as the measurement matrix 𝐀 and the noise characteristics given by Σ remain the same. §.§ Non-linear Measurement Model To compare the methods for non-linear measurement models, we consider the direction of arrival (DoA) estimation problem. In this model, we assume that there are K far-field sources transmitting from K different directions θ = [θ_1, θ_2, …, θ_K] ∈ℝ^K. The receiving antenna array is composed of two different sets of sensors with different noise characteristics. The problem is then to select the best set of M_1 high-precision sensors and M_2 low-precision sensors from the linear sensor array so that the MSE of the estimated DoA is minimized. The measurements are related to the directions as 𝐲 = 𝐀( θ) α + η∈ℂ^N, where 𝐀( θ) is the array sensing matrix whose (nk)-th entry is e^-j2π/λd_n sin( θ_k ), for k = 1,…,K, n = 1, …, N, where d_n denotes the location of the sensors measured from a reference sensor taken at the origin, and λ is the wavelength of the transmitted waves. 
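The array sensing matrix defined above is straightforward to construct. In the sketch below, the sensor positions, source angles, and wavelength are placeholders chosen for illustration (a wavelength of about 4 mm corresponds to a 77-GHz carrier, as used in the experiments described next).

```python
import numpy as np

def steering_matrix(theta, d, lam):
    """A(theta): entry (n, k) is exp(-1j * 2*pi/lam * d_n * sin(theta_k)) for a linear array."""
    d = np.asarray(d, dtype=float).reshape(-1, 1)           # sensor positions, N x 1
    theta = np.asarray(theta, dtype=float).reshape(1, -1)   # source angles, 1 x K
    return np.exp(-1j * 2.0 * np.pi / lam * d * np.sin(theta))

# Illustrative usage: 200 sensors drawn in [0, 1] m, 15 source directions
rng = np.random.default_rng(0)
d = np.sort(rng.uniform(0.0, 1.0, size=200))
theta = rng.uniform(-np.pi, np.pi, size=15)
A = steering_matrix(theta, d, lam=0.004)   # shape (200, 15)
```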
The vector α = [α_1, …, α_K]^T∈ℝ^K consists of the amplitudes of the waveform received from the K sources. The vector η represents the additive noise in the measurements. The goal is to select M_1 high-precision sensors and M_2 low-precision sensors from the given sensor array, such that the error in the estimation of θ and α is minimized. In these experiments, simulations are performed for selecting M_1 = 10 and M_2 = 100 sensors out of N_1 = 50 and N_2 = 150 sensors, respectively, for estimating the directions and amplitudes of K = 15 sources by using MUSIC algorithm <cit.>. The simulations are performed with the sources transmitting at 77-GHz, which corresponds to λ = 4 mm. The sensor locations d_n, n = 1, …, N are chosen uniformly at random within the interval [0,1]. In this experiment, we set SNR_H = 40 dB, and SNR_L being varied. The entries of θ are chosen uniformly at random from the interval [-π, π], and entries of α are i.i.d. Gaussian random variables with zero mean and variance 25. Figure <ref> shows the results of this experiment. As in the linear cases, the algorithm is shown to perform almost as well as GS (0.5-1 dB worse) even when the measurement model is non-linear and vastly outperforms the random selection methods and the IGS method (by 5-15 dB). In conclusion, the above experiments show that the proposed JGS algorithm gives a near-optimal performance; the MSE it achieves is much lower than random solutions and independent greedy solutions for both linear and non-linear measurement models. Being a less-constrained problem, GS gives a slightly lower MSE than JGS. § CONCLUSION In this work, we have considered the problem of sensor selection in heterogeneous sensor networks when the number of sensors to choose from each subset is specified as a cardinality constraint. We proposed a joint greedy algorithm to select the sensors from the different subnetworks. For mathematical simplicity in deriving performance guarantees, only two subsets of sensors are considered in JGS. However, the algorithm can be readily extended to HSNs with more than two sensor subsets. We proved lower bounds for the performance with respect to the optimal performance. Although the theoretical performance guarantee depends on the number of sensors to select from each subset as well as the path taken by the algorithm, asymptotically, it is seen that the worst-case error reaches (1-1/e) ≈ 63% of the optimal performance when M_1 / M_2 ≪ 1. The worst-case error bound obtained in this work is not tight, and tighter bounds may be obtained by considering the errors in the approximations. We also showed using simulations that the algorithm works well in practice and performs better than the predictions of the worst-case error bound. The small-scale experiments show that JGS gives a near-optimal solution. Both the small-scale and large-scale network simulations show that JGS gives lower MSE than random selection methods and IGS by 4-10 dB in linear and non-linear measurement systems. The proposed algorithm thus can select sensors well in heterogeneous sensing environments. § SUBMODULARITY OF WFC The function WFC as given by (<ref>) is normalized since WFC(∅) = WFP(𝒩) - WFP(𝒩) = 0. Let 𝒮_1 ⊂𝒮_2 ⊂𝒩. Then, WFC(𝒮_2) - WFC(𝒮_1) = WFP(𝒩\𝒮_1) - WFP(𝒩\𝒮_2) = ∑_i ∈𝒮_2 \𝒮_1∑_j ∈𝒩\𝒮_1 w_i w_j | < ∇_𝐱 y_i, ∇_𝐱 y_j > |^2/∇_𝐱 y_i _2^2 ∇_𝐱 y_j _2^2≥ 0, since w_i ≥ 0 for all i ∈𝒩. This shows that WFC is a monotone non-decreasing function. 
Finally, with 𝒮_1 ⊂𝒮_2 ⊂𝒩 and ρ_j(𝒮) defining the incremental cost of adding the element j to the set 𝒮 with WFC as the cost function, ρ_j (𝒮_1) - ρ_j (𝒮_2) = ∑_i ∈𝒩\𝒮_1 w_i w_j | < ∇_𝐱 y_i, ∇_𝐱 y_j > |^2/∇_𝐱 y_i _2^2 ∇_𝐱 y_j _2^2 - ∑_i ∈𝒩\𝒮_2 w_i w_j | < ∇_𝐱 y_i, ∇_𝐱 y_j > |^2/∇_𝐱 y_i _2^2 ∇_𝐱 y_j _2^2 = ∑_i ∈𝒮_2 \𝒮_1 w_i w_j | < ∇_𝐱 y_i, ∇_𝐱 y_j > |^2/∇_𝐱 y_i _2^2 ∇_𝐱 y_j _2^2≥ 0. This shows that WFC as defined in (<ref>) is a submodular set function. § PROOF OF THEOREM <REF> For this proof, let 𝒯_m denote the set of samples selected by the algorithm till the m-th iteration, c_m = 𝒞( 𝒯_m ) - 𝒞( 𝒯_m-1) denote the incremental cost of the added sample at the m-th iteration, and ρ_j ( 𝒯_m ) denote the incremental cost of adding the element {j} to the set 𝒯_m. In the proof, the following property of normalized, monotone non-decreasing submodular functions 𝒞 will be extensively used. 𝒞(𝒯) ≤𝒞(𝒮) + ∑_j∈ (𝒯-𝒮)ρ_j(𝒮), ∀𝒮, 𝒯∈𝒩 where ρ_j(𝒮) 𝒞( 𝒮∪{ j }) - 𝒞(𝒮) is the incremental cost of adding the element j to the set 𝒮. In order to prove Theorem <ref>, we first show the performance of the algorithm until the switching of the search space occurs at the m_s-th iteration, and then how that affects the performance of the algorithm after the switch. Lemma <ref> states the performance for m ≤ m_s, and Lemma <ref> states the performance for m > m_s, where m denotes the iteration count of the algorithm. For m ≤ m_s, if 𝒯_m is the set selected by the algorithm till the m-th iteration, and 𝒞 is a normalized non-decreasing submodular function, then 𝒞( 𝒯_m) ≥[ 1 - ( 1 - 1/M_1 + M_2)^m] 𝒞( 𝒯_OPT), for m = 1, 2, …, m_s. The cost function 𝒞 is a normalized function, so that, if 𝒯_0 = ∅, then, 𝒞( 𝒯_0 ) = 0. The cost function 𝒞 is a non-decreasing function, so that, ρ_j ( 𝒯_m ) = 𝒞( 𝒯_m ∪{j}) - 𝒞( 𝒯_m ) ≥ 0. We use the property of submodular functions given by (<ref>), assuming the function is non-decreasing, to get, 𝒞( 𝒯_OPT) ≤𝒞( 𝒯_m ) + ∑_j ∈( 𝒯_OPT - 𝒯_m )ρ_j ( 𝒯_m ). Now for m ≤ m_s, the algorithm chooses the best possible sample out of the entire available set of samples at each step. Thus, we get from inequality (<ref>), 𝒞( 𝒯_OPT) (a)≤𝒞( 𝒯_m ) + ∑_j ∈( 𝒯_OPT - 𝒯_m ) c_m+1 (b)≤𝒞( 𝒯_m ) + ( M_1 + M_2 ) c_m+1 = 𝒞( 𝒯_m ) + ( M_1 + M_2 ) ( 𝒞( 𝒯_m+1) - 𝒞( 𝒯_m ) ), where step (a) follows since c_m+1≥ρ_j ( 𝒯_m ) for any j ∈( 𝒮_1 ∪𝒮_2 - 𝒯_m ), since the greedy algorithm chooses the sample having highest incremental cost at each iteration. (b) follows as | 𝒯_OPT - 𝒯_m | ≤( M_1 + M_2 ), since 𝒯_OPT contains only ( M_1 + M_2 ) elements. Using inequality (<ref>), and the fact that 𝒞( 𝒯_0 ) = 0, we can use inductive analysis to get, 𝒞( 𝒯_m+1) ≥[ 1 - ( 1 - 1/M_1 + M_2)^m+1] 𝒞( 𝒯_OPT). From Lemma <ref>, we get for m = m_s, 𝒞( 𝒯_m_s) ≥[ 1 - ( 1 - 1/M_1 + M_2)^m_s] 𝒞( 𝒯_OPT). This gives the worst-case performance of the set chosen by the algorithm till the m_s-th iteration, that is, till just before the switch in the search space occurs. Using inequality (<ref>), we next find a lower bound for the performance of the algorithm for iterations m > m_s. Assuming that the algorithm has selected all M_1 samples from 𝒮_1 at the m_s-th iteration, for iteration m = m_s + k, 1 ≤ k ≤( M_1 + M_2 - m_s ), if 𝒯_m_s + k is the set selected by the algorithm till the ( m_s + k )-th iteration, and 𝒞 is a normalized non-decreasing submodular function, then 𝒞( 𝒯_m_s+k) ≥[ 1 - M_1/M_2∑_j=0^k-1( 1 - M_1 + 1/M_2)^j - . . ( 1 - M_1 + 1/M_2)^k ( 1 - 1/M_1 + M_2)^m_s] 𝒞( 𝒯_OPT), for k = 1, 2, …, ( M_1 + M_2 - m_s ). 
If, on the other hand, the algorithm exhausts M_2 samples from 𝒮_2 at the m_s-th iteration, then, under similar assumptions on the cost function, 𝒞( 𝒯_m_s+k) ≥[ 1 - M_2/M_1∑_j=0^k-1( 1 - M_2 + 1/M_1)^j - . . ( 1 - M_2 + 1/M_1)^k ( 1 - 1/M_1 + M_2)^m_s] 𝒞( 𝒯_OPT), for k = 1, 2, …, ( M_1 + M_2 - m_s ). For m > m_s, we can consider as if the algorithm is starting with 𝒯_m_s as the initial set. Then, inequality (<ref>) is the initial cost from where the algorithm begins. Let us first assume that the algorithm has selected M_1 samples from 𝒮_1 at the m_s-th iteration. The other case, when M_2 samples are exhausted from 𝒮_2 at the m_s-th iteration follows similarly. For m ≥ m_s, we again use the property of submodular functions given by (<ref>), assuming the function is non-decreasing, to obtain, 𝒞( 𝒯_OPT) ≤𝒞( 𝒯_m ) + ∑_j ∈( 𝒯_OPT - 𝒯_m ) ∩𝒮_1ρ_j ( 𝒯_m ) + ∑_j ∈( 𝒯_OPT - 𝒯_m ) ∩𝒮_2ρ_j ( 𝒯_m ). Now, from the ( m_s+1)-th iteration, i.e., for m > m_s the algorithm selects the samples from the set ( 𝒮_2 - 𝒯_m ) only. It can be seen from Fig. (<ref>) that ( 𝒯_OPT - 𝒯_m ) ∩𝒮_2 = ( 𝒯_OPT - 𝒯_m ) ∩( 𝒮_2 - 𝒯_m ). Thus, we get, 𝒞( 𝒯_OPT) (a)≤𝒞( 𝒯_m ) + ∑_j ∈( 𝒯_OPT - 𝒯_m ) ∩( 𝒮_2 - 𝒯_m ) c_m+1 + ∑_j ∈( 𝒯_OPT - 𝒯_m ) ∩𝒮_1ρ_j ( 𝒯_m ) (b)≤𝒞( 𝒯_m ) + M_2 c_m+1 + ∑_j ∈( 𝒯_OPT - 𝒯_m ) ∩𝒮_1ρ_j ( 𝒯_m ) (c)≤𝒞( 𝒯_m ) + M_2 c_m+1 + ∑_j ∈( 𝒯_OPT - 𝒯_m ) ∩𝒮_1ρ_j ( 𝒯_m_s - 1) (d)≤𝒞( 𝒯_m ) + M_2 c_m+1 + ∑_j ∈( 𝒯_OPT - 𝒯_m ) ∩𝒮_1 c_m_s (e)≤𝒞( 𝒯_m ) + M_2 c_m+1 + M_1 c_m_s (f)≤ M_2 𝒞( 𝒯_m+1) - ( M_2 - M_1 - 1 ) 𝒞( 𝒯_m ). Here, * (a) follows since the greedy algorithm chooses from the set ( 𝒮_2 - 𝒯_m ) at the m-th iteration, making the incremental cost c_m+1≥ρ_j ( 𝒯_m ) for any j ∈( 𝒯_OPT - 𝒯_m ) ∩( 𝒮_2 - 𝒯_m ) ⊆( 𝒮_2 - 𝒯_m ). * (b) follows as ( 𝒯_OPT - 𝒯_m ) ∩𝒮_2 = ( 𝒯_OPT - 𝒯_m ) ∩( 𝒮_2 - 𝒯_m ) ⊆( 𝒯_OPT∩𝒮_2 ), thus making | ( 𝒯_OPT - 𝒯_m ) ∩( 𝒮_2 - 𝒯_m ) | ≤| ( 𝒯_OPT∩𝒮_2 ) | = M_2. * (c) follows from the definition of submodular functions, since ( 𝒯_m_s - 1) ⊆( 𝒯_m ). * (d) is true since at the m_s-th iteration of the greedy algorithm, the incremental cost c_m_s≥ρ_j ( 𝒯_m_s - 1) for any j ∉𝒯_m_s -1. Now, as 𝒯_m_s -1⊆𝒯_m, then ( ( 𝒯_OPT - 𝒯_m ) ∩𝒮_1 ) ⊆( ( 𝒯_OPT - 𝒯_m_s - 1) ∩𝒮_1 ). This implies, if j ∈( 𝒯_OPT - 𝒯_m ) ∩𝒮_1, then j ∉𝒯_m_s - 1. * (e) is true since ( ( 𝒯_OPT - 𝒯_m ) ∩𝒮_1 ) ⊆( 𝒯_OPT∩𝒮_1 ), thus making | ( 𝒯_OPT - 𝒯_m ) ∩𝒮_1 | ≤| 𝒯_OPT∩𝒮_1 | = M_1. * (f) follows from the monotone increasing property of the given cost. Thus, 𝒞( 𝒯_m_s) ≤𝒞( 𝒯_m). Now, rearranging the inequality (<ref>), we get, 𝒞( 𝒯_m+1) ≥1/M_2𝒞( 𝒯_OPT) + ( 1 - M_1 + 1/M_2) 𝒞( 𝒯_m ). Using inequalities (<ref>) and (<ref>), we get, for k = 1, 2, …, ( M_1 + M_2 - m_s ), 𝒞( 𝒯_m_s+k) ≥[ 1 - M_1/M_2∑_j=0^k-1( 1 - M_1 + 1/M_2)^j . . - ( 1 - M_1 + 1/M_2)^k ( 1 - 1/M_1 + M_2)^m_s] 𝒞( 𝒯_OPT). If the algorithm selects M_2 samples from 𝒮_2 first at the m_s-th iteration, the roles of M_1 and M_2 get interchanged in the above steps. Similarly, the roles of 𝒮_1 and 𝒮_2 get interchanged. Using the above analysis, we arrive at inequality (<ref>). Theorem <ref> follows directly from Lemma <ref>, by replacing m_s + k = M_1 + M_2, that is, by evaluating the worst-case performance of the algorithm from (<ref>) for the last iteration. IEEEtran
http://arxiv.org/abs/2307.01919v1
20230704210456
Black Holes as a Collider of High Energy Particles
[ "Bobur Turimov", "Shuhrat Hayitov" ]
gr-qc
[ "gr-qc", "astro-ph.HE" ]
[email protected] Beg Astronomical Institute, Astronomy St. 33, Tashkent 100052, Uzbekistan School of Engineering, Central Asian University, Tashkent 111221, Uzbekistan New Uzbekistan University, Mustaqillik Avenue 54, Tashkent 100007, Uzbekistan Samarkand State University of Architecture and Construction, Lolazor St. 70, Samarkand 140147, Uzbekistan According to the Banados-Silk-West (BSW) process, rotating black holes can act as particle colliders capable of achieving arbitrarily high center-of-mass energy (CME), provided that one of the particles has a specific critical value of the angular momentum. In this discussion, we demonstrate that both Kerr black holes and Schwarzschild black holes could serve as potential sources of high-energy particles in the polar region. Black Holes as a Collider of High Energy Particles Shuhrat Hayitov August 1, 2023 ================================================== The origin of ultra-high-energy cosmic rays (UHECRs) is still unclear, and the observation of particles with energies of order 10^20 eV poses interesting and challenging questions to the scientific community. It is widely believed that they originate from extragalactic sources <cit.>. The observed spectrum of such UHECRs exhibits several distinctive features <cit.>. At approximately 4× 10^18 eV, there is a hardening in the spectrum known as the "ankle." This phenomenon can be attributed to a transition from Galactic to extragalactic cosmic rays (CRs) in models with either mixed composition or iron dominance <cit.>. Alternatively, in proton-dominated models, the hardening could arise from losses due to pair production during propagation <cit.>. From a theoretical perspective, there are several models that explain the mechanisms behind high-energy particles. For example, the Blandford-Znajek mechanism <cit.> elucidates the release of rotational energy from a rotating black hole. Another model, known as the Penrose process <cit.>, describes how energy can be extracted from a rotating black hole, with a maximum energy efficiency of approximately 21%. In the magnetic Penrose process, this efficiency exceeds 100% <cit.>. The annihilation of dark matter particles in the gravitational field of black holes has also been investigated <cit.>, where it was shown that the CME of a pair of colliding identical particles in this scenario is given by E_c.m.=2√(5)m. Additionally, the BSW process <cit.> provides insight into collisions between pairs of particles falling into rotating black holes. Interestingly, for specific values of the angular momentum (i.e., l_1=2 or l_2=2) of one of the particles, this process can generate arbitrarily large CME near the horizon. However, this model showed that a static black hole cannot release high energy. In this Letter, we have conducted an investigation into a model aimed at elucidating the release of high-energy relativistic particles from the polar region of black holes. Our model builds upon the BSW process by incorporating the angular motion of test particles around both Kerr and Schwarzschild black holes. Remarkably, we have found that the CME of colliding particle pairs diverges towards infinity in the polar region of both types of black holes. This intriguing phenomenon offers a potential explanation for the mechanisms behind high-energy sources observed in astrophysics, including quasars, blazars, and other similar sources. For a visual representation of this process, refer to the schematic illustration depicted in Figure <ref>.
Ignoring the effects arising from limitations on the maximal spin of the black hole, back-reaction effects, and sensitivity to the initial conditions of the collisions, as mentioned in <cit.>, we focus on the collision of a pair of massive particles near Kerr black holes. However, it is important to note that these effects may be considered in future studies. The main objective of this Letter is to provide a qualitative analysis of particle collisions near a black hole. According to Ref. <cit.>, the center-of-mass energy (CME) for the colliding pair of particles in curved spacetime is given by E_ c.m.^2 = -g^μν(p_1μ+p_2μ)(p_1ν+p_2ν). Here, p_iμ = m_iẋ_iμ represents the four-momentum of the particles, satisfying p_iμp^iμ = -m_i^2, and ẋ_i^μ denotes the four-velocity of the colliding particles with mass m_i (where i=1,2). Finally, the CME of the colliding pair of particles is given by: E_ c.m.^2/(2m_1m_2)=1+(m_1-m_2)^2/(2m_1m_2)-g_μνẋ_1^μẋ_2^ν . Here we evaluate this equation in the Kerr spacetime. The equations of motion in the Kerr spacetime are given as <cit.> Σṙ=√([(r^2+a^2) E-al]^2-Δ( K+r^2)) , Σθ̇=√( K-(l-a Esin^2θ)^2/sin^2θ-a^2cos^2θ) , Σϕ̇=(a/Δ)[(r^2+a^2) E-al]+(l-a Esin^2θ)/sin^2θ , Σṫ=[(r^2+a^2)/Δ][(r^2+a^2) E-al]+a(l-a Esin^2θ) , and are associated with three constants of motion, the specific energy E, the specific angular momentum l, and the Carter constant K, where Δ=r^2-2r+a^2, Σ=r^2+a^2cos^2θ. Using the equations of motion (<ref>)-(<ref>), the CME in the background of the Kerr spacetime takes the form: E_ c.m.^2/(2m_1m_2)=1 +(m_1-m_2)^2/(2m_1m_2)+[R_1(r)R_2(r)-√(R_1^2(r)-Δ(r^2+ K_1))√(R_2^2(r)-Δ(r^2+ K_2))]/(ΣΔ) -[T_1(θ)T_2(θ)+√( K_1-T_1^2(θ)-a^2cos^2θ)√( K_2-T_2^2(θ)-a^2cos^2θ)]/Σ , where R_i(r)=(r^2+a^2) E_i-al_i and T_i(θ)=l_i cscθ-a E_i sinθ. As evident from the analysis of Eq. (<ref>), a singularity arises at the horizon of the black hole (i.e., r_+=1+√(1-a^2) or Δ=0). However, this issue can be resolved: by multiplying both the numerator and denominator of the third term in equation (<ref>) by the expression R_1(r)R_2(r)+√(R_1^2(r)-Δ(r^2+ K_1))√(R_2^2(r)-Δ(r^2+ K_2)) and subsequently taking the limit Δ→0, the singularity is removed. Nevertheless, there exist two additional singularities at θ=0 and θ=π due to the presence of the cscθ factor in the last term of equation (<ref>), which cannot be avoided. Consequently, it can be concluded that the CME of colliding particles tends to infinity in the polar region of the black hole, not only near the horizon but also at other points within these regions. It is important to note that the equations of motion (<ref>)-(<ref>) depend on three constants, and for simplicity, we can set E_i=1 for particles approaching the black hole from infinity <cit.>. Furthermore, at θ=0, π, equation (<ref>) exhibits divergence, indicating that the Carter constant approaches infinity. Hence, for a specific angle θ_0 at which the particles collide, including 0 and π, the following condition can be obtained: K_i=(l_i-asin^2θ_0)^2 csc^2θ_0+a^2cos^2θ_0. Substituting this condition into (<ref>), we can derive E_ c.m.^2/(m_1m_2) =(m_1-m_2)^2/(m_1m_2)+[R_1(r_+)+R_2(r_+)]^2/[R_1(r_+)R_2(r_+)] +[R_1(r_+)T_2(θ_0)-R_2(r_+)T_1(θ_0)]^2/[Σ(r_+,θ_0)R_1(r_+)R_2(r_+)] , which depends on the black hole's horizon radius, the angle at which the particles collide, and their angular momenta.
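As a quick numerical illustration of the expression above, the following minimal Python sketch evaluates the near-horizon CME exactly as written, with E_1=E_2=1 and units G=c=M=1; the spin, angular momenta, and collision angles are arbitrary illustrative values, chosen only to show the divergence of the CME as the collision angle θ_0 approaches the pole.

import numpy as np

def cme_squared_over_m1m2(a, l1, l2, theta0, m1=1.0, m2=1.0):
    # Near-horizon CME of equation (<ref>) above, with E_1 = E_2 = 1 and G = c = M = 1
    rp = 1.0 + np.sqrt(1.0 - a**2)                 # outer horizon radius
    R1 = (rp**2 + a**2) - a * l1
    R2 = (rp**2 + a**2) - a * l2
    T1 = l1 / np.sin(theta0) - a * np.sin(theta0)
    T2 = l2 / np.sin(theta0) - a * np.sin(theta0)
    Sigma = rp**2 + a**2 * np.cos(theta0)**2
    return ((m1 - m2)**2 / (m1 * m2)
            + (R1 + R2)**2 / (R1 * R2)
            + (R1 * T2 - R2 * T1)**2 / (Sigma * R1 * R2))

for theta0 in [np.pi / 2, np.pi / 4, 0.05, 0.005]:
    E2 = cme_squared_over_m1m2(a=0.9, l1=1.0, l2=-1.0, theta0=theta0)
    print(f"theta0 = {theta0:.3f}:  E_c.m. = {np.sqrt(E2):.1f} (in units of m)")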
Near the horizon of the extremal Kerr black hole with r_+=1 and a=1, equation (<ref>) yields E_ c.m.^2/(m_1m_2)=(m_1+m_2)^2/(m_1m_2)+2(l_2-l_1)^2/[(l_1-2)(l_2-2)sin^2θ_0] , while near the horizon of the Schwarzschild black hole it reduces to E_ c.m.^2(r→ 2)/(m_1m_2)=(m_1+m_2)^2/(m_1m_2)+(l_2-l_1)^2/(2sin^2θ_0) . Notice that by setting θ_0=π/2 and m_1=m_2=m, one can reproduce the CME results reported in <cit.>. One can now see that the CME depends not only on the angular momenta of the particles but also on the angle θ_0 at which they collide. The CME of identical particles with identical angular momentum is E_ c.m.=2m. Interestingly, in the polar region of the black hole, i.e., θ_0=0, π, the CME tends to infinity when the angular momenta of the particles differ from each other. Figure <ref> shows the angular dependence of the CME of a pair of colliding particles, while Figure <ref> shows the region of the (l_1,l_2) plane in which the CME is real at θ_0=π/2. It is observed that the CME remains real when l_i<2 or l_i>2. Additionally, it is worth considering the impact of factors such as the maximum spin limit of the black hole, back-reaction effects, and the sensitivity to initial collision conditions. Nevertheless, the main result for the CME in Eq. (<ref>) contains a singularity at θ_0=0, π. This research is supported by Grant F-FA-2021-510 of the Uzbekistan Ministry for Innovative Development.
http://arxiv.org/abs/2307.02524v1
20230705180000
Large Deviations Theory Beyond the Kibble-Zurek Mechanism
[ "Federico Balducci", "Mathieu Beau", "Jing Yang", "Andrea Gambassi", "Adolfo del Campo" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas", "cond-mat.stat-mech", "hep-th" ]
http://arxiv.org/abs/2307.03287v1
20230706204923
Complexity Heliophysics: A lived and living history of systems and complexity science in Heliophysics
[ "Ryan M. McGranaghan" ]
physics.space-ph
[ "physics.space-ph", "nlin.AO", "physics.hist-ph" ]
Complexity Heliophysics: A lived and living history of systems and complexity science in Heliophysics Ryan M. McGranaghan, NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, 91109, CA, USA In this piece we study complexity science in the context of Heliophysics, describing it not as a discipline, but as a paradigm. In the context of Heliophysics, complexity science is the study of a star, interplanetary environment, magnetosphere, upper and terrestrial atmospheres, and planetary surface as interacting subsystems. Complexity science studies entities in a system (e.g., electrons in an atom, planets in a solar system, individuals in a society) and their interactions, and is the nature of what emerges from these interactions. It is a paradigm that employs systems approaches and is inherently multi- and cross-scale. Heliophysics processes span at least 15 orders of magnitude in space and another 15 in time, and its reaches go well beyond our own solar system and Earth’s space environment to touch planetary, exoplanetary, and astrophysical domains. It is an uncommon domain within which to explore complexity science. After first outlining the dimensions of complexity science, the review proceeds in three epochal parts: 1) A pivotal year in the Complexity Heliophysics paradigm: 1996; 2) The transitional years that established foundations of the paradigm (1996-2010); and 3) The emergent literature largely beyond 2010. This review article excavates the lived and living history of complexity science in Heliophysics. The intention is to provide inspiration, help researchers think more coherently about ideas of complexity science in Heliophysics, and guide future research. It will be instructive to Heliophysics researchers, but also to any reader interested in or hoping to advance the frontier of systems and complexity science.
§ INTRODUCTION The [21st] century will be the century of complexity. -Stephen Hawking Heliophysics processes span at least 15 orders of magnitude in space and another 15 in time. The reaches of this science go well beyond our own solar system and Earth’s space environment to touch planetary, exoplanetary, and astrophysical domains.
The history of Heliophysics has, like many sciences, been one of specialization–categorizing and separating domains and building understanding within those ever more boundaried systems. The approach has produced remarkable achievement, yet in a century in which: * sensing capabilities are revealing cross-scale behavior (e.g., field-aligned currents across scales <cit.>); * data analysis and computational tools are enabling cross-system and multi-scale research <cit.> (e.g., combined particle and magnetohydrodynamic (MHD) simulations <cit.>); and * the practical demands on Heliophysics science are growing exponentially (e.g., societal dependence on space, risk to critical infrastructure due to space weather, expansion of humanity into space and the solar system–each in some respect dissolving of the boundary between Heliophysics and humanity), the Heliophysics community faces the need to shift the paradigm by which it creates new scientific knowledge <cit.>–the advent of Complexity Heliophysics. We will capitalize the term in this review to draw attention to the fact that we posit it as a paradigm, a framework of assumptions, principles and methods from which the members of the community work <cit.>; a kind of generalization that characterizes the next stage of a community's work <cit.>. Note that we are not inventing the paradigm, merely giving it a name and outlining and connecting the varied research and research avenues that compose it with the intention of providing inspiration, helping researchers think more coherently about these ideas, and guiding future research. In order to usher in this new paradigm, we must first understand what complexity science is, specifically in the context of Heliophysics research. Complexity is a difficult thing to define. It is often used synonymously with `something we do not yet know' <cit.>, a placeholder label for a new frontier of scientific knowledge. However, this review will adopt a more principled view. We will refer to complexity science as a paradigm, drawing distinction from attempts to describe it as a discipline, ill-suited to what complexity actually is. Indeed complexity is a paradigm of scientific discovery. It is fundamentally distinct from `complicated.' It is the study of phenomena that emerge from a collection of interacting objects. To understand a complex system requires a plurality of frameworks and the ability to move more seamlessly between scales of the system (e.g., micro and macro). As such, complexity science spans numerous dimensions. We briefly introduce a few of those dimensions that are important to this review and give them context in Heliophysics here. The reader will recognize these as organizing themes in this review. §.§ Systems science and cross-scale Crossing between the scales of the system is the foundation of the complexity science paradigm. Complexity science inherently expands across scale–the interacting objects may be particles in a simulation regime, flux ropes and their individual and connected dynamics on the surface of the Sun, or coupling of entire systems like the magnetosphere and ionosphere. In the context of Heliophysics, complexity science is the study of a star, interplanetary environment, magnetosphere, upper and terrestrial atmospheres, and planetary surface as interacting subsystems. Each of these subsystems can be further broken down into regions (e.g., the auroral region of the upper atmosphere) and all the way down to more elementary components such as electrons and protons. 
At which scale one chooses to examine the Heliophysics system determines the methods one uses and ultimately one's understanding of it. For instance, in the magnetosphere one might choose the macro-scale, dictating a magnetohydrodynamic (MHD) method, or the micro-scale, requiring a kinetic method. Indeed, one of the most vexing questions that has obstinately resisted an answer in Heliophysics has been, “What level do we need to look at the system to understand and predict it?” <cit.>. Complexity science is a paradigm that suggests ways of reconciling the micro and macro scales. It is the collection of methods to understand a system across scales, the smaller scale behavior in connection with the phenomena that emerge from it. §.§ Self-organization, emergence, and scaling theory Inseparable from cross-scale understanding is the concept of emergence. Emergence is the term used to describe phenomena that are `more than the sum of their parts' <cit.>. Emergence is observed in virtually all areas of inquiry, such as how large numbers of individual fish are able to behave dynamically as a school when threatened by a predator <cit.>. In terms of scale, emergence is the occurrence of actions at one scale giving rise to phenomena on another level. The idea that order at some higher order or coarse-grained level of a system is organized by a number of interacting sub-systems is called self-organization <cit.>. Self-organization is a powerful toehold in complexity analyses because it reveals that emergence is observable in statistical characteristics of the system. Examining system properties across scales, self-organization produces power law behavior. Power laws are a departure from the assumptions of normality that have governed much traditional scientific and engineering analysis and instead involve heavy tails in the probability distributions. Like self-organization, power laws are powerful because they imply underlying driving mechanisms that are identical across scales of the system and produce the same statistical signature at all scales. For instance, biological organisms across a remarkable range (e.g., mice to elephants) exhibit power law scaling with a 3/4 slope such that the metabolic rate increases proportionally to the body mass raised to the 3/4 power <cit.>. So as body size doubles, the metabolic rate, or the rate at which the organism consumes energy, increases by only about 70% (2^(3/4) ≈ 1.7) rather than doubling. Such scaling relationships could provide fundamental principles governing systems, in this case the physiology and energy requirements of living organisms. Power laws are found across systems, from cells to cities, and it is important to wonder if they extend to Heliophysics and what principles they might reveal. This review will cover instances where power laws are found in Heliophysics systems. The similarity across scales that a power law reveals is a property called scale-free or scale-invariant. Scale-invariance means that there is some mechanism that acts across scales of a system that could, in principle, be discovered. Additionally, the same mechanism may act in quite different systems that exhibit power law distributions with the same slopes <cit.>. <cit.> proposed the concept of self-organized criticality (SOC) to explain power law behavior and the correlation that extends over many orders of magnitude in complex dynamical systems.
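The statistical signature discussed here, a heavy-tailed power law probability distribution, is straightforward to explore numerically. The following minimal Python sketch (synthetic data only, with an arbitrarily chosen exponent) draws samples from a power law and recovers the exponent both with the maximum-likelihood estimator and with a least-squares fit to the log-log histogram, the latter being the more common but more biased practice.

import numpy as np

rng = np.random.default_rng(0)
alpha_true, xmin = 2.5, 1.0
# Inverse-CDF sampling of p(x) ~ x^(-alpha) for x >= xmin
u = rng.uniform(size=100_000)
x = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood (Hill) estimate of the exponent
alpha_ml = 1.0 + x.size / np.sum(np.log(x / xmin))

# Least-squares slope of the log-log histogram
counts, edges = np.histogram(x, bins=np.logspace(0, 3, 40), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0
slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)

print(f"ML estimate of alpha: {alpha_ml:.2f}; histogram slope: {slope:.2f} (expect about -{alpha_true})")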
Ultimately, SOC implies that the stability of macro-scale systems depends on the self-organization of local events into scale-invariant dynamical patterns characterized by power law probability distribution functions with certain values of the exponents <cit.>. The study of these scale-invariant patterns is generally referred to as scaling theory, offering tools to connect small and large-scale dynamics or micro and macro states. The identification of these power law relationships and application of scaling theory in general have been a focus of Complexity Heliophysics research. Though we will address SOC, we point readers to more comprehensive examinations of the topic and its history in space physics <cit.>. Those excellent developments will free up space in this review for novel development of the topic. Related to scaling theory is the concept of coarse-graining. Coarse-graining is considering a system at a higher or coarser level at which some of the finer scale behavior has been smoothed over. Newton's laws are a coarse-graining for the physics of motion. At these scales, the laws describe the system sufficiently, though they may break down at finer scales. There are coarse-grained theories that are dynamically and statistically sufficient; aggregations that are as good predictors of their future selves as any more microscopic description is. Tools to explore self-organization and emergence provide details about when these aggregations may exist and when they do not. Coarse-graining is thus incredibly useful because it can yield an effective theory, which is a representation that allows one to better predict the system than if the intricacies of a finer scale were considered. For instance, measuring the temperature of a system, a coarse-graining of the aggregate motion of its particles, permits more accurate predictions for a given computational capacity than if each of the individual particles' speeds and directions were measured <cit.>. §.§ Information and acknowledging uncertainty Emergence is a way that order is extracted from many interacting parts and scaling laws describe that order statistically. To analyze order mathematically, the driving principle of the complexity paradigm, one must begin with information and its counterpart entropy. Information quantifies the amount of dependency or connection between a random variable and itself at a different time or with other variables at the same or different times. Entropy quantifies the amount of uncertainty involved in the value of a random variable. <cit.> delineates information from entropy, “In a physical system, information is the opposite of entropy, as it involves uncommon and highly correlated configurations that are difficult to arrive at.” It is in these uncommon configurations that mathematicians, physicists, and scientists have observed various physical systems. The mathematicians of the early to mid 1900s applied the study of order and disorder to communication systems, creating the field of information theory as the `mathematical treatment of the concepts, parameters and rules governing the transmission of messages through communication systems' <cit.>. These pioneers (e.g., Claude Shannon, Florence Violet McKenzie, Warren Weaver, Alan Turing, Norbert Wiener, to name only a few) became the first information theorists or cyberneticists <cit.>. It should be noted that women and minorities are often left out of the history of cybernetics, but played integral roles whose influences are still being discovered. 
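To make the quantities of information and entropy concrete, the sketch below estimates Shannon entropy and mutual information from binned time series. The data are synthetic stand-ins for a solar wind driver and a lagged geomagnetic response (not actual measurements), and the binning and lag choices are illustrative.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=16):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a 2-D histogram
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

rng = np.random.default_rng(1)
driver = rng.normal(size=5000)                                      # stand-in for, e.g., VBs
response = 0.7 * np.roll(driver, 3) + 0.3 * rng.normal(size=5000)   # lagged, noisy response

mi = [mutual_information(driver, np.roll(response, -lag)) for lag in range(10)]
print("lag of maximum mutual information:", int(np.argmax(mi)))

Because mutual information captures any statistical dependence, not only linear correlation, estimates of this kind are one entry point to the nonlinear and causal analyses discussed next.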
Subsequently, the complexity paradigm was built on ideas of information. The transmission of messages in a communication system provides an apt analog to the transfer of energy through the physical solar system. Heliophysicists have taken the information theoretic approach to the solar-terrestrial system (e.g., <cit.>). Information theory provides rigorous mathematical formalisms to study the nonlinear relationships and feedbacks that characterize complex systems <cit.>, especially because they can go beyond linear correlational analyses, capture nonlinear relationships, and establish causalities. Researchers have found that information theory can describe the structures and signatures of order against the random, entropic background on which they act. Information theory thus provided an entry point into the complexity science paradigm. Information theory is inherently probabilistic. The equations deal with random variables and probability distributions. As apparent in the connection to entropy, implicit in information theoretical approaches is a quantification of uncertainty. Indeed the complexity paradigm requires an acknowledgement of uncertainty and uncertainty quantification becomes important within it. Progress toward information theoretic approaches and uncertainty quantification in Heliophysics leads to the advent of risk and resilience frameworks as a bridge between deterministic physics-based methods and empirical data-driven approaches <cit.>. The intersection or reconciliation of these two is a grand challenge for Heliophysics and one for which past work suggests the complexity paradigm may provide new possibilities for progress (see Section <ref>). §.§ Networks, network science, and collectivity Information leads to another central dimension of complexity science: networks. If the complexity science paradigm is about understanding the emergence of patterns from the interactions of their parts, then networks are its specimens and network science its toolkit. A network is simply a collection of entities, or nodes, and their relationships, or edges. For example, in a social network the nodes are people and the edges whether they know or are friends with one another. Networks, also called graphs, permit the representation of a system in a way that captures more of the complexity than, say, a rigid spatial grid representation could. As the network structure is remarkably representative of the natural world <cit.>, thinking of a system in this way can lead to new and useful insights <cit.>. The 21st century has witnessed the advent of new theoretical tools to extract knowledge from networks of many different kinds <cit.>, and a concomitant dawn of network science in Heliophysics. As an objective of this review is to focus on areas of the complexity paradigm that have received less development, special emphasis is placed on networks and network science in Heliophysics. While there is a productive and generative body of network science research in the solar-terrestrial system (e.g., <cit.>, the related topic of collectivity has received relatively little attention. Collectivity, or collective behavior, is a term used to describe approaches to understanding emergent phenomenon particularly through representing the system as a network. From the function of parts (nodes) together with their interactions (edges) collective behavior emerges. 
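Constructing and interrogating such a network computationally is simple; the sketch below uses the networkx library to build an undirected graph from a thresholded correlation matrix of synthetic time series (the threshold, sizes, and data are illustrative choices, not taken from any study cited here), and then reports basic collective properties such as mean degree and the number of connected components.

import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_nodes, n_samples = 30, 500
data = rng.normal(size=(n_nodes, n_samples))
data[1] += 0.8 * data[0]          # impose one correlated pair of "stations"
corr = np.corrcoef(data)

# Build an undirected graph: an edge wherever |correlation| exceeds a threshold
G = nx.Graph()
G.add_nodes_from(range(n_nodes))
threshold = 0.3
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        if abs(corr[i, j]) > threshold:
            G.add_edge(i, j, weight=corr[i, j])

degrees = [d for _, d in G.degree()]
print("edges:", G.number_of_edges(), "mean degree:", float(np.mean(degrees)))
print("connected components:", nx.number_connected_components(G))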
Collective behavior has progressed from a description of phenomena to a framework for understanding the mechanisms by which collective action emerges <cit.>. The research that composes this review suggests that areas ripe for future analyses may be identified in the language and discipline of collective behavior, especially in its treatment of networks. The areas include both the study of the physical solar-terrestrial system and the social networks or communities of Heliophysics that study it. Thus, the implications are not only for fundamental science but also for how we do science. Ultimately, one cannot do modern science without information theory and now networks. These form pillars of complexity science that we will use to explore the history of Complexity Heliophysics. §.§ Risk and resilience framework New frameworks are required to handle uncertainty and embody the complexity paradigm. This is acutely true in Heliophysics, which has an existential counterpart in the societal impacts of the solar-terrestrial connection known as space weather <cit.>. In order to translate the science of Heliophysics into actionable knowledge for space weather, the complexity paradigm dictates a risk and resilience framework <cit.>. This robust design focuses on discovering mechanisms that maintain functionality under changing or uncertain environments <cit.>. In a risk and resiliency framework, a system is treated as complex and can be defined by whether or not it can accommodate changes and reorganize itself while maintaining the crucial attributes that give the system its unique characteristics <cit.>. Risk and resilience offer ways that data-driven information can be incorporated with complex systems understanding and decisions made amidst uncertainty <cit.>. There is a small but growing volume of work that treats the solar-terrestrial connection as a complex system and casts the problem as one of risk quantification and resilience-building <cit.>. From these works and related literature from sister disciplines like Earth Science and terrestrial weather, a framework can be defined. How do we define risk and hazards and how might understanding Heliophysics and space weather in these terms illuminate a path toward a society resilient to their vicissitudes? This review pulls together the existing literature and knowledge to discuss this question. We conclude this review with a look at just a few of the works that have taken risk and resilience as a framework within which actions can be determined and decisions made amidst, sometimes extreme, uncertainty. The attempt to review the existing works and to abstract a framework for risk and resilience offers guidance on a topic central to Heliophysics in the 21st century: bridging research and operations; determining the relationship between basic and applied research, a topic taken up in Section <ref> §.§ Roadmap for this review With the paradigm shift comes new capacities to understand the system. Heliophysics has embraced some of the dimensions, however, a number of them remain relatively unexplored. The purpose of this review is to apprehend the development of the paradigm and to establish the historical trajectory for the tools of complexity science within Heliophysics. From this history, important trends emerge that will guide researchers (early and senior) toward directions that may be more capable of responding to the problemscape of Heliophysics in the 21st century. 
The objectives of this work are: * To define the paradigm of Complexity Heliophysics in the context of the seminal works that compose it, seen in a new converging light; * To detail the network of complexity studies in Heliophysics, setting the stage for the needed research in the 21st century. The corpus of cited text will be uncommon to most Heliophysics research papers (e.g., pulling liberally from areas outside of the traditional scope of Heliophyscis research articles) and a unique resource in and of itself (e.g., serving as a hub for the network of research that all readers can use in their exploration of Complexity Heliophysics); and * To lay a foundation for how complexity science may help address outstanding questions in Heliophysics science, including the intersection of fundamental and applied research and the use of artificial intelligence and machine learning (AI/ML). The purpose of this review is to present the research that was compiled and explored and the artifacts from it in a navigable way that lends itself to being able to adopt the tools and ideas. Therefore, artifacts from this work include: * This review article; * A glossary of terms (the bedrock of information search, integration, and automated analyses <cit.>) that define complexity Heliophysics; and * A new corpus of Complexity Heliophysics compiled using NLP where the included papers have been filtered based on Heliophysics or Heliophysics-adjacent journals and by matching terms in the papers to those in the new glossary. These resources are provided in a Github repository for this publication[ <https://github.com/rmcgranaghan/ComplexityHelio_LivingReviews>]. Something we would like to see come from this work is a more collaborative examination of the corpus of articles (the collection of documents manually compiled in the references of this review along with those automatically generated through NLP methods (see Appendix <ref>)). One way to accomplish this is to create a database of the papers in the corpus (our references list plus the articles compiled from NLP analysis of Heliophysics literature) on a platform that allows the community to add margin notes, annotate, and hold conversations around the papers. This would allow the insights researchers generated when reading published works to build on one another and for the conversations around those works to evolve. Inspiration can be found in the Fermat's Library[<https://fermatslibrary.com/>]. Altogether, the resources provided in this piece constitute a library of Complexity Heliophysics in the ethos of libraries as cultural technologies[<https://www.societylibrary.org/build-multimedia-libraries>] and sacred places <cit.>. Previous works have capably reviewed complexity science up to ∼1996 <cit.>. We do not attempt to restate those works here, and instead take the Klimas review as our starting point. This enables us to give more attention to the voluminous body of work that has been created since. Parts of the discussion will, of course, reach to works prior to 1996. We augment the literature review and synthesis with a more perspective-based portion (Sections <ref> and <ref>) where we attempt to describe trends perceived through creating this review and to identify key issues confronting Complexity Heliophysics. 
We acknowledge that some readers will not incorrectly read parts of this contribution as opinion; however, we have tried to support our views by quotes and references compiled across more-than-Heliophysics publication and knowledge wherever possible. Finally, like any review or synthesis, this is incomplete. It is not the goal of this work to review exhaustively every paper that is related to complexity science in Heliophysics, but rather to present a useful selection, deliberately chosen to reveal the story of the complexity science paradigm in Heliophysics, and to illuminate generative areas for future thinking and research. §.§.§ The use of Natural Language Processing (NLP) in this review Given the volume and breadth of the information available to a Heliophysicist, a problem exacerbated by the pace of scientific research and publication, traditional approaches to search and discovery as well as ingestion of new materials (e.g., largely manual) will need to be augmented by new tools. Perhaps most pressing is the need to create and adopt mature natural language processing (NLP) tools to help one search through, organize, and summarize the vast literature. NLP refers to interactions between computers and human language and is often used to refer to the programming of computers to process, analyze, and respond to large amounts of natural language data. We have employed these techniques in this review. As a supplementary piece that intends to make this a living contribution to the state of knowledge, we provide a corpus that was generated by natural language processing methods of 33 journals (those within the NASA Astrophysics Data System (ADS) deemed most important to Heliophysics) that any reader can use freely to determine other trends not addressed here. Across the 33 journals, we compiled a corpus of nearly 125 thousand articles with their authors and abstracts. After matching words in the title and abstract with terms in our complexity glossary, we arrived at a Complexity Heliophysics corpus of roughly three thousand documents, two orders of magnitude larger than a typical journal article bibliography. The details of how the corpus was generated can be found in Appendix <ref>. It augments the bibliography, a corpus in itself, of works directly cited in this review. The author acknowledges a wealth of knowledge much greater than the papers directly cited in this review contributed to the writing. Resources related to the automated corpus generation and results are provided in an accompanying Github repository for this review[ <https://github.com/rmcgranaghan/ComplexityHelio_LivingReviews>]. We suggest that such automated corpora could perhaps even become standard for future reviews. The one provided herein should be considered a resource that complements the extensive references cited in the body of this review and contains high potential for discovering trends and knowledge about Complexity Heliophysics. It is important to note that the manual and automated corpora are not disjoint nor is the manual corpus strictly a subset of the automated corpus. Many references are shared across them, lending validation to the process of generating the automated set, but there are many references in the manual set that are not included in the automated one. This points to the flexibility of the scientist-driven discovery process, pulling in relevant references and material that might be more distant or irregularly connected to the research at hand than the necessarily more rigid automated process. 
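The glossary-matching step used to build the automated corpus can be sketched as follows. The glossary terms and the two toy documents are placeholders, and the matching rule (whole-term matches against the concatenated title and abstract) is a simplified assumption rather than the exact procedure described in Appendix <ref>.

import re

glossary = ["self-organized criticality", "emergence", "power law", "mutual information", "complex system"]
documents = [
    {"title": "Power law statistics of substorms", "abstract": "We examine self-organized criticality in the magnetotail."},
    {"title": "Ionospheric conductance climatology", "abstract": "A statistical model of conductance is presented."},
]

def matches_glossary(doc, terms, min_hits=1):
    text = (doc["title"] + " " + doc["abstract"]).lower()
    hits = sum(1 for t in terms if re.search(r"\b" + re.escape(t.lower()) + r"\b", text))
    return hits >= min_hits

corpus = [d for d in documents if matches_glossary(d, glossary)]
print(len(corpus), "of", len(documents), "documents retained")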
This review, in particular, read widely in gathering material, many connections of which an automated approach would likely not have captured. The point is there must be an intersection of manual and automated gathering of resources, the manual approach benefiting from flexibility and capacity to range widely and be discerning and the automated approach benefiting from the volume of resources it can examine. The augmented way that we have approached this living review illuminates a trend in Heliophysics research and one point of this manuscript is to demonstrate the hybrid manual-automated approach. We discuss this trend in research below in Section <ref>. Readers interested in the history of complexity science in Heliophysics will enjoy beginning with Section <ref>. Those who might be interested in a more quantitative analysis might begin with the importance of NLP (Section <ref>) and our provided corpus, where they can emerge their own conclusions and trends. Those with an interest in solving the key open questions and challenges will find most value in Sections <ref> and <ref>. § KEY DEFINITIONS Any conversation or published work situated sufficiently in spaces between established fields requires a period of adjusting or coming to a more shared language. We provide a set of definitions that are important to the review that follows as a way to create common language. * artificial intelligence the theory and development of computer programs able to perform tasks normally thought to require human intelligence <cit.>. * coarse-graining considering a system at a higher or coarser level at which some of the finer scale behavior has been smoothed over * collective intelligence the study of collective behavior, that is adaptive, wise, or clever structures and behaviors by groups, in physical, biological, social, and many engineered systems <cit.> * corpus a collection of documents * emergence the term used to describe phenomenon that are `more than the sum of their parts' * feedback a loop of interactions across a system; in the computational fields, feedbacks are defined by outputs of a process being put back into an input of the same process * graph/network a collection of entities (nodes or vertices) and their relationships (edges) * information Information quantifies the amount of dependency or connection between a random variable and itself at a different time or with other variables at the same or different times. Its counterpart, entropy, quantifies the amount of micro-states involved in the value of a random variable * machine learning the study of computer algorithms that allow computer programs to automatically improve through experience <cit.>. ML is a sub-field of artificial intelligence. * natural language processing (NLP) programming of computers to process, analyze, and respond to large amounts of natural language data * named entity recognition (NER) subtask of NLP that seeks to locate and classify named entities mentioned in unstructured text * resilience the property of a system to accommodate changes and reorganize itself while maintaining the crucial attributes that give the system its unique characteristics <cit.> * sandpile cellular automata model a model based on adding sand to a pile and observing and quantifying the results, specifically the avalanches, that occur due to simple rules defining the evolution <cit.>. 
It is a specific example of an automaton that is a regular grid of cells, each with a finite number of possible states, and a fixed rule governing its state at the next time step given its current state and the states of neighboring cells <cit.>. * scale-free/scale invariant property of a system in which there is similarity across scales, the same structure is observed regardless of the scale at which the system is observed * self-organized criticality (SOC) <cit.> proposed the concept of self-organized criticality (SOC) to explain the power law, or scale-invariant, correlation extending over many decades in complex dynamical systems. It is the observation that, as articulated by <cit.>, “...some slowly-driven, dissipative, extended, dynamical systems can naturally exhibit a spontaneous organization towards a sort of dynamical critical point. The critical state does not depend from the initial conditions, does not require a fine tuning, and behaves as an attractor for the dynamics. All these systems are characterized by an intermittent dynamics, 1/f noise, and by a threshold dynamics, i.e. a local stepwise instability that occurs when the local field exceeds some critical value.” SOC dynamics can be viewed as a sort of dynamical transition between metastable configurations near a critical point and is an explanation of the ubiquity of 1/f noise * system A group of interacting or interrelated elements that act according to a set of rules to form a unified whole <cit.>. A systems perspective is at the center of complexity science. § SETTING THE STAGE FOR COMPLEXITY HELIOPHYSICS: FROM 1996 The world of Complexity Heliophysics touches many areas. We will focus this review on areas that are important and have received less development, including the critical linkages between the papers. For instance, we will address the topic of self-organized criticality, but defer a comprehensive examination of the topic and its history in space physics to <cit.> and <cit.>, allowing space here for further development of complexity in Heliophysics. As mentioned in the introduction, we begin with the review article by <cit.>, which could be interpreted as a turning point. They provide the relevant framing, “Earth's magnetosphere responds to the time-varying solar wind in an organized and repeatable fashion. Evidence has accumulated indicating that this organized evolution is a manifestation of low-dimensional magnetospheric dynamics. It appears that over the largest spatial scales and over substorm timescales and beyond, a relatively small number of magnetospheric state variables dominate the evolution. The identities of these variables are not known at present, and very little of the dynamical system that governs their evolution is understood. Determining these variables and understanding the related dynamics are the primary goals of this research. If these goals are reached, then a spatio-temporal framework will result within which reside the complex phenomena that are collectively called geomagnetic activity.” Klimas et al., details the transition over the preceding several decades from an era of linear correlative studies to the beginning of a new era of nonlinear dynamical studies. Why? Because the linear filters of <cit.> across varying solar wind activity demonstrated clear nonlinear behavior through distinct peaks in the time lags of activity. 
“Generally, there is a peak in the filters at lag time 20-30 min showing that there is always a response in electrojet activity to solar wind activity 20-30 min earlier; Bargatze et al. attributed this peak to the directly driven magnetospheric response <cit.>. There is a second peak at lag time one hour that is most evident for the moderate activity filters; this peak was attributed to the unloading magnetospheric response <cit.>”. The Bargatze work made clear the direct and indirect (directly-driven and unloading) modes of the magnetosphere <cit.>. Klimas et al., begin from this background and the implication that nonlinear explication of the solar wind-magnetosphere coupling requires nonlinear treatment. A key contribution of their review is a convergence of work to address the directly-driven and unloading modes of the solar wind-magnetosphere system and the corresponding evolution of the methods from linear correlative studies to nonlinear dynamical studies. It is an excellent starting point to understand the Complexity Heliophysics paradigm. The central question in the review was whether it can it be shown that the magnetospheric dynamics, represented by systems of differential equations, are effectively represented by a low-dimensional dynamical system and, if so, what is the nature of that dynamical system? The response to this question is traced through three approaches, interleaved in time: autonomous time series, analog (input-output) models, and computationally mature input-output models. The self-described summary of the work is a review of approaches to “find a low-dimensional analogue model of the magnetospheric dynamics derived directly from data and interpreted in terms of magnetospheric physics.” It laid out the existing state-of-the-practice toward what was a major goal of the magnetospheric community since early in the formation of Heliophysics as a discipline: to find a low-dimensional analogue model of the magnetospheric dynamics. The first observation from the collection of articles in the Klimas review is the prevalence of using the auroral electroject index (AE and the auroral upper and auroral lower indices, AU and AL, that constitute it) <cit.> to be the measurable variable to reconstruct the magnetosphere. In fact, the standard or benchmark dataset from most of the work addressed is a set of 34 events for which solar wind data are collated with the AL index <cit.>. Appendix <ref> contains a list of important datasets, and their original appearance in the literature, that appear across this review to aid readers who wish to compile datasets and explore data-driven research across the datasets that have factored importantly in the Complexity Heliophysics paradigm. §.§ Autonomous Time Series Autonomous approaches make the assumption that the evolution of the system representation (the state vector) is solely a function of the internal dynamics and independent of external factors. They are concerned with the question of whether or not the evolution of the magnetosphere is organized such that a few variables alone can describe its evolution. Studies attempt to determine whether the dissipative magnetosphere dynamically evolves into a low-dimensional `attractor' state and then attempts to describe the attractor to develop a model of the organized physical state. A measure of the attractor dynamics is the correlation dimension. The correlation dimension considers the set of state variables, or the phase space, of the system. 
To calculate the correlation dimension an arbitrary measurement from those variables is chosen and compared to other measurements in the neighborhood of the chosen one, defined by a sphere of radius r. The number of measured state vectors within the neighborhood are counted (C_r) and the variation of this number as r→0 is determined. The importance of the correlation dimension, D_cr, is that if C(r) ∝ r^D_cr as r→0, then D_cr is an estimate of the attractor dimension near the chosen point. For an entire system, an average correlation dimension can be found by averaging the correlation dimension from each chosen measurement over all of the measured points. For a simple closed loop phase space, purely periodic system, in three dimensions (three state variables), D_cr=1. In a chaotic system, the attractor phase space is made up of many folded and closely packed layers and D_cr falls between 2 and 3. A non-integer correlation dimension is a feature of `strange' attractors <cit.>. The dimension of the attractor is an important estimate of the physical system that gives rise to it. In the periodic case, the dimension is one, indicating that any two of the variables can be written in terms of the third and that the system is one-dimensional. Between two and three, the correlation dimension indicates the minimum phase space dimension that supports the attractor is three, and that is the physical dimension of the system. Across the literature a rather wide range of correlation dimensions between 2.4-4 have been found: from 3.6 in the first application of the technique to magnetospheric dynamics <cit.> to a set of papers that found low correlation dimensions <cit.>. There were numerous critiques of the correlation dimension technique to determine the magnetospheric dimension, including the requirement of large databases of measurements and lack of knowledge of what volume would be sufficient, spurious periodicities in the databases as have been identified in the <cit.> data, potential presence of background or superimposed trends in the data across activity levels and unrelated to the magnetospheric dynamics, and sensitivity to delay times chosen for the analysis. Among the limitations that exist across attempts to understand the complex magnetosphere and geospace environment is the inability to measure all of the state variables or even to define them a priori. The solution has been to substitute for the unobservable variables functions of measurable ones to constitute the state vector. Among the most readily available has been the AE/AL/AU indices and the best available datasets have been those that align the solar wind drivers with the AE indices' responses. Limitations in the use of these reconstructed variables permeate the history of complexity Heliophysics studies of the magnetosphere. A far-reaching critique of the approaches reviewed was the fact that the AL index time series is a colored noise output of a high-dimensional stochastic process, which would result in a misleading low correlation dimension when those data are used to proxy the complex magnetosphere <cit.>. Colored noise is stochastic noise that, like a chaotic time signal, exhibits a power law spectrum f^α <cit.>. In the case of colored noise, the correlation dimension is actually a measure of the fractal dimension of the system and is unrelated to the existence of an attractor. Numerous studies examined the likelihood that the AE/AL index time series are generated by a stochastic signal, with varying conclusions. 
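The correlation-dimension procedure described above lends itself to a compact numerical sketch. The following Python illustration applies a Grassberger-Procaccia-style estimate to a time-delay embedding of a scalar series; the series is a nearly periodic toy signal (for which the estimate should return a value near one), and the embedding dimension, delay, and radii are arbitrary illustrative choices rather than values used in the studies reviewed here.

import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim=3, tau=10):
    X = delay_embed(x, dim, tau)
    # Pairwise distances between reconstructed state vectors
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    dists = d[np.triu_indices(len(X), k=1)]
    radii = np.logspace(np.log10(np.percentile(dists, 1)),
                        np.log10(np.percentile(dists, 50)), 10)
    C = np.array([(dists < r).mean() for r in radii])   # correlation integral C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)  # D_cr ~ d ln C / d ln r
    return slope

t = np.linspace(0, 100, 1200)
x = np.sin(t) + 0.01 * np.random.default_rng(3).normal(size=t.size)
print("estimated correlation dimension:", round(correlation_dimension(x), 2))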
Though useful methods arose from these investigations (e.g., self-affinity <cit.>, autocorrelation analyses <cit.>, singular spectrum analyses <cit.>, the use of surrogate data <cit.>), no consensus was established. The conclusion from the autonomous method studies was rather that no conclusive statement could be made about the low dimensionality of the magnetosphere–contradictory results pervade the literature. The lack of resolution of the central question of the variables that govern the magnetosphere and its evolution led to the consideration of magnetosphere by nonlinear input-output methods. From <cit.>: [The] magnetosphere is not an autonomous system but is continuously driven by the stochastic solar wind. It is therefore possible that, even if the magnetosphere were a low-dimensional chaotic system, we might not find it by studying just one time series. This is because the magnetosphere may not have time enough to converge to the possible attractor for times long enough to produce data with a detectable number of close returns of the trajectory. For this reason it has been suggested that the magnetosphere should be described as a nonlinear input-output system [Prichard and Price, 1992; Price and Prichard, 1993] <cit.>. §.§ Input-Output Models: Analogue Models Autonomous methods assume that the system evolves solely due to internal dynamics. Loading rates or external forcing may be present, but it is taken to be a constant and a parameter of the system rather than a dynamical variable. In the autonomous methods there is a common approach to use a series expansion of a function to represent dynamics at a certain point in space and time and to discard the higher-order terms. The shortcomings of the autonomous methods raised the question of the dynamics that reside in these higher-order terms. Subsequent to the controversies and conflicting conclusions of many autonomous system approaches and studies, input-output approaches began to play a central role in the study of the magnetospheric dynamics. Input-output (I-O) approaches use both the input to the system and its output response to determine the coupling characteristics. The Klimas et al. review divides models into two categories: 1) analogue and 2) more recent nonlinear and computationally intensive approaches. Analogue modeling makes assumptions about the magnetospheric coupling characteristics (determining them from `preconceived notions of the processes that constitute geomagnetic activity') whereas the latter I-O methods reviewed determine them directly from data. The output of the latter I-O methods is a so-called phenomenological dynamical system, one that has been deduced from time series analyses of the input and output data rather than through analogue modeling. The two approaches reveal a debate central to science and that foreshadows the key challenge discussion that is introduced at the conclusion of this review (see Section <ref>) about the relationship between theory- and data-driven theories of discovery. Analogue approaches to magnetospheric dynamics assume that the magnetosphere is a coherently driven system that evolves in an manner that can be modeled by a low dimensional dynamical system. The analogue models reviewed by Klimas et al., share the assumption that the magnetospheric dynamics are low dimensional. 
To constitute the input and output time series, the analogue models shared the approach of using one of the best available observables thought to be related to the magnetospheric dynamics: the electrojet indices <cit.>. The AU index is a measure of the eastward electrojet driven by reconnection dynamics in the dayside magnetosphere. The AL index, alternatively, is a measure of the westward electrojet that is connected to high-latitude magnetopause reconnection in the magnetotail. In using the electrojets as measures of the response to solar wind energy input, the assumption is that the currents flow as a result of appropriate conductances and electric fields mapped from the magnetosphere to the ionosphere through Alfvèn waves along intermediary field lines. <cit.> developed a model of only the directly-driven magnetosphere (responses of the magnetosphere as a result of direct energy input from the solar wind rather than unloading responses that occur at longer time lags due to magnetotail activity <cit.>) using a system of three nonlinear ordinary first-order differential equations with six independent constant parameters: τ_ADd AU_d(t)/dt + AU_d(t) = A_1 E_mpt_max(t), where τ_AD and A_1 are constant parameters, AU_d(t) is the eastward electrojet for the directly-driven response (d), and E_mpt_max(t) is the maximum reconnection electric field at the magnetopause. A similar form is assumed for the westward electrojet and relationship to the electric field in the central plasma sheet, with appropriate adjustments in the parameters. ISEE 3 solar wind data were used to compute the electric field at the magnetopause, the input to the model, and measured AE (composed of the AU and AL indices) was compared with predicted AE over May 18-19, 1979. A cross correlation of 0.9 was achieved (see Figure 7 from the original publication, the central result of the paper, reproduced here in Figure <ref> with permissions). Their model is strictly of the directly-driven response to the solar wind, and the agreement indicates little dependence on the unloading response <cit.>. There were periods of marked disagreement, attributed in large part to differences in the AL index, though these were isolated in time as a result of non-adiabatic phenomena so that the overall correlation remained high. Absent a term for these non-adiabatic dynamics, the model has been shown to perform poorly on different data intervals as pointed out by <cit.>. <cit.> produced a model that included analogues to both the directly-driven and unloading magnetospheric responses to the solar wind. Their model is a Faraday Loop for the time-dependent magnetotail convection with a superposed loading-unloading cycle. The model is built on the concept that if the loading rate on the dayside magnetosphere is above some level, then the unloading of the accreted energy through Earthward convection is insufficient to balance it and a substorm begins to grow. If loading continues, a critical point is surpassed whereafter explosive unloading follows. The resultant three-dimensional Faraday Loop Model (FLM) consists of dynamic variables for the cross-tail electric field, the flux content of the lobe, and a quantity dependent on the size and orientation of the tail. Their observable for comparison was again the electrojet indices, and they mapped the cross-tail electric field from their model to the westward electrojet in a simple way. 
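The directly-driven analogue above is a first-order relaxation equation that can be integrated in a few lines. The sketch below uses a synthetic step enhancement of the driving electric field and arbitrary illustrative values for the time constant and coupling coefficient (not the fitted parameters of the original study).

import numpy as np

def integrate_driven_electrojet(E_mp, dt=60.0, tau=20 * 60.0, A1=50.0):
    # Forward-Euler integration of tau * dAU/dt + AU = A1 * E_mp(t)
    AU = np.zeros_like(E_mp)
    for k in range(1, len(E_mp)):
        AU[k] = AU[k - 1] + dt * (A1 * E_mp[k - 1] - AU[k - 1]) / tau
    return AU

# Synthetic driver: a two-hour enhancement of the reconnection electric field
t = np.arange(0, 8 * 3600, 60.0)                              # seconds, 1-minute cadence
E_mp = np.where((t > 2 * 3600) & (t < 4 * 3600), 2.0, 0.2)    # mV/m, illustrative
AU = integrate_driven_electrojet(E_mp)
print("peak modeled AU response:", round(float(AU.max()), 1), "nT (illustrative units)")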
After this study, it was implicit in the magnetosphere community that any dynamical model must account for both modes: directly-driven and unloading. Klimas et al. cite the importance of the FLM as being a combination of numerous important statistical properties of geomagnetic activity: isolated substorms, steady loading, and time-dependent loading. A number of studies have used the FLM model. <cit.> showed that under steady loading the FLM model evolves to a periodic attractor, with a period of substorm recurrence of roughly one hour and that the period was independent of driver strength. <cit.> examined time scales of substorm occurrence for the passage of slowly varying magnetic clouds to understand substorm recurrence under various steady loading rates. They found that substorm recurrence ranged between 25 and 150 minutes, but with an average rate of approximately one substorm every 55 minutes (unloading recurrence rate). For this range of steady driving strengths the FLM model varies little from the 55 minute average unloading recurrence rate. The use of the FLM model by <cit.> seems to explain the <cit.> observations: 1) the nearly invariant unloading recurrence rate under steady driving and 2) the distribution about one hour of recurrence rates during extended periods of loading. The results have been extended to time-dependent driving, where a Poisson distribution of recurrence rates around the most probable value of one hour was found. Note that the Poisson distribution expresses the probability of a given number of events occurring in a fixed interval of time given a known constant mean rate and independently of the time since the last event. The latter assumption has received focus in substorm research in the intervening decades under various names, among them substorm recurrence rates <cit.>, waiting times <cit.>, and intersubstorm times <cit.>. Indeed waiting time statistics is used to identify Poisson random processes, self-organized criticality, intermittent turbulence, finite system size effects, or clusterization <cit.>. As a side note, <cit.> showed both the epsilon parameter <cit.> and VB_s <cit.> to be the most suitable measures of the rate at which energy is loaded into the magnetosphere by the solar wind. Developing some proxy of this loading rate is required to study the magnetospheric responses and has been the focus of much subsequent study (e.g., <cit.>). <cit.> performed an analysis of coupling functions and established best practices in their derivation and guidance on their limitations. They find that analysis of the persistence of solar wind parameters defines how best to compile a coupling function. Further, they comment on the best metrics for testing the capability of a coupling function, revealing shortcomings in correlation as a useful metric for some applications. Finally, they provide two criteria that coupling functions must describe to quantify integrated effects: 1) the large-event tail; and 2) the core of activity distributions. Improved representation of solar wind energy input to the magnetosphere, both in the form of solar wind coupling functions and multiple signatures of energy input, has led to improved understanding of substorm evolution <cit.> and even to improved prediction of their occurrence and intensity <cit.>. <cit.> calculated linear prediction filters (LPFs) to relate driving (VB_S) to response (AL index) for a number of 1-2 day intervals.
Their dataset is an important one for Complexity Heliophysics and is described in <ref>. They found a peak in these filters at ∼20 minutes and ∼one hour, which were ascribed to the directly-driven and unloading magnetospheric modes. Power spectra were created comparing FLM-modeled and measured AL index values to VB_S for an interval from the Bargatze data set. They are reproduced from <cit.> in Figure <ref> given their importance to subsequent work on power law relationships across the Heliophysics system. Figure <ref>a shows a break in the measured and modeled power law spectra at high frequencies, due to the fractal nature of the AL index at high frequencies. In Figure <ref>b the agreement has been recovered when the measured AL index has been smoothed over a 17.5 minute running average. Power law scaling is abundant in, and indeed a hallmark of, complex systems, indicating the presence of some underlying mechanism that creates self-similar, scale-invariant behavior. <cit.> marks an important recognition of power laws and their significance in Complexity Heliophysics. §.§ Input-Output Models: Computationally Mature Input-Output Models The progress and shortcomings of the linear prediction filters pointed to the next step in I-O modeling: capturing nonlinearities. The Klimas review provided an early look at two methods to specify nonlinearity: neural networks and local-linear prediction. Their advantage is that they are data-driven, meaning they determine relationships directly from the data rather than, as in the analogue approach to I-O modeling, from `preconceived notions of the processes that constitute geomagnetic activity.' In other words, they do not make a priori assumptions and instead allow the data to determine the conclusions. With the data-driven approach comes the challenge of physical interpretation of the derived relationships. Interpretability (making physical meaning) of magnetospheric models was, of course, already important even in relatively explainable simple models, but the concern grew as data-driven approaches used models with more parameters and more complexity, and became more difficult to explain. As representativeness increases, interpretability often suffers. The tradeoff of representativeness and interpretability between data- and physics-driven approaches has only intensified in Heliophysics in the 21st century. A few comments related to the approaches that proved most successful at characterizing the magnetospheric behavior, local linear prediction methods and a subset from the field of ML known as neural networks, from the Klimas review articulate the tension: * “To understand the physical content of this “model,” it is necessary to understand the global structure of the nonlinear coupling surface. In this case, to extract the physical content of the local-linear predictor, it will be necessary to reconstruct the coupling surface from the many local approximations to it that are already available.” * “...it does appear that further research into extracting the physical content of the network is warranted.” * <cit.> “It is often said that neural networks yield no usable information on the physics of the system that they model. However, it appears that this prejudice may not be correct.” Compiling these comments has proven prescient of a deeper discussion that has emerged and is being shaped in our more modern era of computation and artificial intelligence (AI).
Klimas et al. summarize this trend with the statement, “It is anticipated that in the future, a combined approach involving both analogue [physics-based] modeling and input-output data analysis will prove most effective for understanding the magnetospheric dynamics that couples solar wind input to geomagnetic activity output.” The early work on neural networks and local-linear predictions focused on geomagnetic activity prediction. <cit.> created a neural network prediction of the electrojet index, using the <cit.> VB_S-AL database. They created a state-space reconstruction (SSR) network, using input solar wind information and past output of the model (e.g., previously predicted AL index values) to predict future AL index values. As a point of comparison, they also created a nonlinear prediction filter (NPF) in which the predicted AL index values are produced based solely on solar wind input information. They determined that the SSR outperformed the NPF and that the neural network's performance was dependent on hyperparameters such as the activation function used. A severe limitation of the nets was their inability to predict large values of AL at all. Several works advanced the concept of a local-linear predictor (LLP) <cit.>. LLP is an extension of LPF to capture nonlinear coupling between the input and output. Figure <ref> shows the relationship. The LPF addresses the situation where the input and output vary over a small range of values. As that variation increases, or as the curve is more nonlinear, the LPF approximation fails. The LLP fixes the point on the nonlinear coupling curve around which input-output data samples will be used to reconstruct the relationship. The process is one of defining the viable neighborhood. The approximation is valid only within the neighborhood and thus the process must be repeated at each step in time to predict the next step. Applications to the magnetosphere revealed an important extension: both the past and present inputs to the magnetosphere and its geomagnetic response need to be used as inputs to predict future outputs. The size of the neighborhood in the method reflects the degree of nonlinearity in the system, with the relevant neighborhood decreasing as the nonlinearity increases. <cit.> used the approach to examine the nonlinearity of the magnetosphere, finding only weak evidence. <cit.> is perhaps the seminal early work using LLP. Using the <cit.> dataset, they found that the best fit for an LLP is low-dimensional and a local fit to a nonlinear predictor. To apply the method, they build a large database of instances of the I-O data and use a pattern-matching approach to find the best values in each neighborhood of the case in question, varying the size of the neighborhood to find a minimum in the prediction error. As the nonlinearity of the I-O relationship increases, a smaller neighborhood for the LLP improves the prediction. Several important assumptions accompany the LLP method (a schematic sketch follows this list): * It is assumed that there is a set of variables, small in number, that adequately specifies the global state of the magnetosphere; * Although not all of the variables that specify the global state of the magnetosphere may be measurable, it is assumed that an equivalent state can be reconstructed from those that can be measured; * The map to the next time step, X_n+1 = F(X_n, U_n), is differentiable, where F represents the magnetospheric dynamics that relate previous input and output to future output.
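A schematic nearest-neighbour local-linear predictor consistent with these assumptions is sketched below: a state vector is built from past inputs and outputs, and a linear map is fit only within a small neighbourhood of the current state. The embedding length, the neighbourhood size, and the synthetic driver/response series are illustrative assumptions, not the configuration used in the studies discussed here.

```python
import numpy as np

def embed(u, y, m):
    """Build state vectors from the last m inputs and m outputs; targets are the next output."""
    states, targets = [], []
    for i in range(m, len(y) - 1):
        states.append(np.concatenate([u[i - m + 1:i + 1], y[i - m + 1:i + 1]]))
        targets.append(y[i + 1])
    return np.array(states), np.array(targets)

def local_linear_predict(states, targets, x_now, k=25):
    """Fit an affine map only to the k nearest neighbours of the current state."""
    idx = np.argsort(np.linalg.norm(states - x_now, axis=1))[:k]
    A = np.hstack([states[idx], np.ones((k, 1))])        # local affine model
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return np.append(x_now, 1.0) @ coef

# Illustrative input (u) and output (y); in practice these would be a solar wind
# coupling proxy and a geomagnetic index.
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros(2000)
for i in range(1, 2000):                                  # toy nonlinear response
    y[i] = 0.9 * y[i - 1] + 0.5 * np.tanh(u[i - 1]) + 0.05 * rng.standard_normal()

states, targets = embed(u, y, m=5)
y_next = local_linear_predict(states[:-1], targets[:-1], states[-1], k=25)
```

Shrinking the neighbourhood size in such a scheme is the computational analogue of the statement above that stronger nonlinearity demands a smaller viable neighbourhood.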
They suggest that the low dimensionality and nonlinearity of their LLP are characteristics of the physical magnetosphere. They provide strong evidence for nonlinearity and low dimensionality, and the results indicate that the evidence is persistent over many different local phase space neighborhoods (i.e., over many different solar wind and magnetospheric conditions). Their work is a foundation for geomagnetic activity prediction, using past to present geomagnetic activity indicators like the AL index to predict the indicator's evolution into the future (e.g., <cit.>). It is now also apparent that the Vassiliadis et al. work was a predecessor of modern data-mining approaches using larger data sets (e.g., <cit.>). Finally, LLPs suggest that, ultimately, a model of the magnetospheric dynamics can be low-dimensional, nonlinear, and phenomenological. The Klimas review summarized and may have helped spark deeper and wider exploration of characterizing the magnetosphere as a low dimensional system. One interesting trajectory through the subsequent literature has been toward ML methods to characterize the magnetosphere: 1) specification of a magnetospheric state vector: “the state of the magnetosphere, resulting from continuous but variable forcing of the solar wind and the interplanetary magnetic field (IMF), can be empirically specified by a magnetospheric state vector, consisting of a set of hourly-averaged magnetospheric driver and response parameters.” <cit.>; 2) including vector correlations between the solar wind and magnetospheric state vector <cit.>; and 3) toward the advent of high-dimensional machine learning (ML) models of the magnetosphere and its coupling to the solar wind upstream and the ionosphere downstream <cit.>. We will return to ML in Section <ref>. §.§ Important Themes through 1996 that set the stage for Complexity Heliophysics The studies reviewed represent launching off points for the Complexity Heliophysics paradigm that will be useful to readers (and in some cases supported by quotes from the Klimas article or others cited therein for convenience and context): * Dimensionality of the magnetosphere and self-organized criticality; * Relative success of input-output methods whereas autonomous systems methods were inconclusive; * Local linear predictions <cit.> as a foundation for geomagnetic activity prediction (uses past to present geomagnetic activity indicators like AL to predict the indicator's evolution into the future); * The wide-reaching utility of neural networks. “Little seems to have been done using neural networks to predict substorm indicators such as the electrojet indices. Nevertheless, Hernandez et al. [1993] <cit.> indicate that this is a promising direction for prediction. Further, in view of [the Klimas review] concerning the equivalence of neural network internal parameters to Volterra kernels, it does appear that further research into extracting the physical content of a network is warranted.”; * Explainable AI (e.g., this review already raised the question of interpretability of the input-output models that proved most successful at characterizing the magnetospheric behavior, primarily neural networks and local linear prediction methods). These questions reach into modernity; * Converging the autonomous and the local linear prediction filter methods (gaining the benefits from each: interpretability and the success of the data-driven approach).
“It is anticipated that in the future these local-linear predictor models will be studied carefully with the goal of organizing these bits and pieces into a global nonlinear predictor model. It may be advantageous to cast these predictor models as analogue models in order to maximize their physical interpretation.”; and * The assessments of the computationally mature input-output models point to a component of a risk formulation for Heliophysics models: By perturbing the input and witnessing the change one can create a measure of the sensitivity and thus a component of the resilience of the prediction technique. Additionally, the Klimas review has numerous connections to the dimensions of complexity science discussed in the introduction and reveals Heliophysics-specific items that will be thematic, including: self-organized criticality of the magnetosphere, system dimensionality, the variability of the solar wind, bi- or multi-modal behavior of the magnetosphere/geospace, implications of various observables or metrics for analyzing the system (e.g., the sufficiency or insufficiency of the auroral electrojet indices), nonlinear I-O modeling such as neural networks, and representativeness vs. interpretability of physics-based and data-driven modeling approaches. § EMERGENCE OF THE CONNECTION BETWEEN SELF-ORGANIZED CRITICALITY AND THE MAGNETOSPHERE The concept of self-organized criticality (SOC) provided Klimas et al. with an entry point for assessing the complexity science paradigm in Heliophysics. Its purpose is similar here, yielding a foundational concept from which we branch to other dimensions of complexity that are important to the history of Heliophysics, including power laws and scaling theory <cit.>, fractality <cit.>, network science <cit.>, emergence <cit.>, coupling between domains of the solar-terrestrial system, observational considerations, and machine learning. SOC has been well chronicled <cit.> and we will not attempt to recapitulate those excellent reviews, but will provide the necessary background for this review to be self-sufficient and point readers to the most valuable resources to discover SOC research in Heliophysics in more depth. <cit.> provided the world with the concept of SOC, an explanation for the ubiquitous 1/f power spectra, characterized by a power law function P(ν) ∝ν^-1, and more generally for power spectra that are not purely random white noise (P(ν) ∝ν^0). The significance is that white noise represents traditional random processes with uncorrelated fluctuations, and the 1/f spectrum is something else: an indication of non-random structures with long-range correlations in a time series. Bak used a sandpile as the example. Imagine that grains of sand are dropped onto a table. A cone-shaped pile will build up until a grain causes an avalanche. The longer a pile avoids an avalanche the larger the avalanche will be when it occurs. Bak attempted to determine the conditions under which these avalanches occur. He found that avalanches were unpredictable, dependent on the interactions between the individual grains of sand. Eventually the pile reaches a critical point at which the pile transforms into something more complex and properties emerge that are not part of the individual grains themselves. The tendency of the pile toward this critical state (self-organized criticality) was a new way of viewing nature <cit.>: out of balance, but in a poised state. Bak's avalanches are the non-random time structures represented by the 1/f (P(ν) ∝ν^-1) spectrum.
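A minimal sketch of the sandpile idea described above is the Bak–Tang–Wiesenfeld cellular automaton: grains are added at random sites, and any site reaching a critical height topples onto its neighbours, possibly triggering further topplings. The grid size, number of grains, and threshold below are arbitrary choices for illustration only.

```python
import numpy as np

def sandpile_avalanches(n=30, grains=5000, zc=4, seed=0):
    """Drive a 2-D Bak-Tang-Wiesenfeld sandpile and record avalanche sizes."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(n, size=2)
        z[i, j] += 1                                  # drop one grain at a random site
        size = 0
        while True:                                   # relax until no site exceeds the threshold
            unstable = np.argwhere(z >= zc)
            if unstable.size == 0:
                break
            for iu, ju in unstable:                   # topple each unstable site
                z[iu, ju] -= zc
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = iu + di, ju + dj
                    if 0 <= ni < n and 0 <= nj < n:   # open boundaries: grains leaving are lost
                        z[ni, nj] += 1
        if size > 0:
            sizes.append(size)
    return np.array(sizes)

# Once the pile has self-organized to its critical state, a log-log histogram of
# `sizes` approaches the power law distributions discussed in this section.
avalanche_sizes = sandpile_avalanches()
```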
That seminal work sparked a new avenue of research, beginning first with numerical simulations of avalanches and cellular automata, iterative applications of a simple mathematical redistribution rule that yields complex spatio-temporal patterns <cit.>, and subsequently to widespread application in the physical sciences. <cit.> provides a review of cellular automata in the field of Solar Physics. Nearly ten years after and a continent away from where Klimas provided a backdrop for Complexity Heliophysics in large part beginning with the concept of the magnetosphere as a self-organized system, <cit.> offered a synthesis of the field's thinking that self-organization is an explanation for magnetospheric behavior. They addressed this history of the magnetosphere as a self-organized critical system, beginning with the premise that a complex system is one that is characterized by multiscale spatio-temporal (S-T) behavior. Their central question was how could the magnetosphere exhibit complexity at small-scales, but coherent and repetitive behavior at global scales? The canonical example is how there could exist self-similar turbulent behavior in the plasma sheet simultaneously with repeatable and coherent substorm phenomena. A key question in their history is what the threshold state between the predictable and unpredictable might be. Self-organized criticality provided the answer. Valdivia et al., brought the SOC equations into the magnetospheric context. Taking the AL index as an indicator of global magnetospheric energy dissipation, they describe three characteristics used to indicate self-organization: Δ E (energy dissipation; area under the AL curve), dt (time duration), and Δθ (time separation between events). Figure <ref> details each characteristic in AL index data. Their distributions cover two years of AL time series data between 1986 and 1988, taking the threshold of -100 nT to define an event (for a total of N=10365 events). The statistics show clear power laws in all three characteristics. Finding evidence for a self-organized state, they posit that it may provide a key for understanding substorm onset and derive a 1-D general dynamical model for the magnetic field in the diffusion region of the magnetotail to attempt to explain magnetospheric observations. Using their simple model they compile event statistics for the collective effects of many interacting instability sites (assuming a simple parameterization of the dissipation, derived from observations and data analysis), in a manner not dissimilar from microphysics and particle kinetics simulations. They identified ranges, a phase diagram of sorts, for the (𝐔×𝐁)_y term and a certain range in which robust critical behavior occurs. Their model cannot describe the details of the microphysics of the magnetotail, yet it serves to indicate that `the statistical behavior of many complex distributed systems is more a property of their self-organized state, if it is achieved, than the details of the physical processes that allow such state. This is a general characteristic of systems that are close to criticality where many systems belong to the same universality class, suggesting that it is probable that the statistics of substorms, pseudobreakups, and even the evolutions of the growth and expansion phases, are unrelated to the details of the dissipation process (Shay et al., 1998) <cit.> other than that dissipation allows for the establishment of a self-organized state.' 
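The event statistics described above can be sketched computationally: contiguous excursions of an AL-like series below a threshold (here -100 nT, following the study above) define events, for which an energy proxy (area below the threshold), duration, and inter-event waiting time are compiled. The synthetic AR(1) series standing in for the AL index and the sampling cadence are illustrative assumptions, not the data of the original analysis.

```python
import numpy as np

def event_statistics(series, dt=1.0, threshold=-100.0):
    """Per-event energy proxy (area below threshold), duration, and waiting time."""
    active = series < threshold
    padded = np.concatenate(([False], active, [False]))
    starts = np.flatnonzero(~padded[:-1] & padded[1:])   # first sample of each event
    ends = np.flatnonzero(padded[:-1] & ~padded[1:])     # one past the last sample
    energy = np.array([np.sum(threshold - series[s:e]) * dt for s, e in zip(starts, ends)])
    duration = (ends - starts) * dt
    waiting = (starts[1:] - ends[:-1]) * dt              # time between successive events
    return energy, duration, waiting

# Illustrative stand-in for an AL record: an AR(1) (red-noise) series about a quiet level.
rng = np.random.default_rng(1)
n = 200_000
al = np.empty(n)
al[0] = -50.0
for k in range(1, n):
    al[k] = -50.0 + 0.99 * (al[k - 1] + 50.0) + 8.0 * rng.standard_normal()

energy, duration, waiting = event_statistics(al, dt=1.0)  # dt in minutes, say
# Log-log histograms of these three arrays are the analogue of the distributions
# compiled from the AL index in the study discussed above.
```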
Subsequently, a quite comprehensive review of SOC as applied to solar physics and astrophysics was created during two week-long meetings at the International Space Science Institute (ISSI) <cit.>. They reviewed self-organized criticality across these fields from 1989-2014, highlighting trends, open questions, and future challenges. The importance of SOC systems is punctuated by their application across domains, including: ecology <cit.>, evolutionary biology <cit.>, geology <cit.>, cognitive science <cit.>, computer science <cit.>, the social sciences <cit.>, economics and finance <cit.>, political science <cit.>. These diverse systems share common features that are linked through SOC: they are driven, dissipative, and far from equilibrium, releasing energy in a bursty, intermittent manner on multiple scales, with numerous routes to instability that lead to the energy release and reconfiguration <cit.>. The importance to this review is that SOC systems are inextricably connected to the statistics of nonlinear processes, which is signaled by power law-like size distributions. The exposition of SOC in Heliophysics was a hallmark of the complexity paradigm in the field in the years after 1996. § BEYOND 1996: COMPLEXITY HELIOPHYSICS §.§ Power laws in Heliophysics To set the stage for Heliophysicists' adoption of the concept of SOC, we begin with the origin of the idea itself. <cit.> documented peculiar features of the sandpile cellular automata in the discovery of the concept of self-organized criticality. The peculiarity is that the system responds to external perturbations by dissipating the stored energy in an avalanche, where the size of the avalanche is described by a power law distribution with 1/f noise. Power law behavior in investigations of the solar wind-magnetosphere system has been taken to be a strong indicator of complexity and SOC in the magnetosphere’s dynamic evolution. <cit.> examined the power spectra of the AE index and the interplanetary magnetic field (IMF) north-south component (B_Z) over periods between 17 minutes and 28 hours using five-minute averages from 1978-1980 (Figure <ref>). Notably, the power spectrum revealed a peak at 24 hours and a break in the spectrum at ∼five hours. The spectra on either side of the break are fit using power laws with f^-1.00 at lower frequencies and f^-2.2 at higher frequencies. The spectral break was found to be independent of the choice of data interval and averaging length. Overlapping the AE dataset that was used to create Figure <ref>(left), they computed the power spectrum for the IMF B_Z component, finding an unbroken power law that roughly follows a f^-1.42 slope (see Figure <ref>(right)). Thus, the break in the AE index behavior at around 4.7-5.2 hours was not explained by the solar wind IMF B_Z. They showed the ratio of the power of AE to the power of the IMF B_Z as a function of frequency and found a clear break at ∼4.6 hours, below which the ratio is independent of frequency and above which the ratio decreases at a rate of f^-0.5. The ∼5-hour break in the AE power spectrum is longer than substorm time scales of around 30 minutes for expansion phases and a couple of hours for total length. However, their results indicated no preferred period of B_Z (no break in the power spectrum) and no preferred period of substorms (no break in the spectrum below roughly five hours).
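The broken power-law characterization used in analyses such as Tsurutani et al.'s can be sketched as follows: estimate the power spectral density of a time series and fit separate power-law slopes below and above a candidate spectral break. The synthetic red-noise series, the sampling cadence, and the break frequency here are placeholders, not the AE or IMF data of the original study.

```python
import numpy as np
from scipy.signal import welch

def broken_power_law_slopes(x, fs, f_break, nperseg=4096):
    """Estimate power-law exponents below and above a candidate spectral break."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    keep = f > 0                                      # drop the zero-frequency bin
    f, pxx = f[keep], pxx[keep]
    lo, hi = f < f_break, f >= f_break
    slope_lo = np.polyfit(np.log10(f[lo]), np.log10(pxx[lo]), 1)[0]
    slope_hi = np.polyfit(np.log10(f[hi]), np.log10(pxx[hi]), 1)[0]
    return slope_lo, slope_hi

# Illustrative red-noise series sampled at 1/300 Hz (a 5-minute cadence).
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(2**17))
x -= np.linspace(x[0], x[-1], x.size)                 # crude detrend
slopes = broken_power_law_slopes(x, fs=1.0 / 300.0, f_break=5.5e-5)
```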
The authors suggest a number of potential explanations as to why the AE index power diminishes with increasing B_Z, citing saturation mechanisms, but do not draw firm conclusions. Research to resolve the question of whether the magnetosphere is an SOC system was reinvigorated, perhaps as a result of the Klimas review, employing power law techniques as the prominent mechanism of investigation. <cit.> (and later extending the statistics in <cit.>) used the AE index and power law distributions to attempt to explain the intermittent nature of the magnetospheric dynamics (e.g., the rapid fluctuations in the AE index even in periods when the solar wind parameters were relatively smooth). They evaluated the distribution function for the intermittent burst behavior of the index. Defining the quiet time AE index level as L_AE = [45±15] nT, they derived the strength of a burst as s=∫_Ω(AE(t)-L_AE)dt, where Ω is the time interval over which the AE index is greater than the quiet time level. The distribution (D(s)) for a period of AE data covering around 3000 burst events was found to follow the power law form s^-τ with τ∼1 over more than four decades (10^1 - 10^5 nT·minute). They suggested this be interpreted as an absence of characteristic length or time scales for the magnetospheric system. Their result related the magnetosphere to the clear demonstration that SOC systems exhibit a spontaneous organization towards something like a dynamical equilibrium. They extended the analysis to study the second important component of these systems: that there are often various scaling regimes when the system is not quasistatically driven. To determine the relevance of the magnetosphere as an SOC system they explored the presence of a 1/f regime. They computed the full power spectral density (PSD) of the AE index, corroborating Tsurutani et al.'s results that the PSD divides into two power law regimes (1/f and 1/f^1.89) with a spectral break around 5.5×10^-5 Hz. The low frequency regime represents the random superposition of single burst events while the high frequency regime is the result of interaction among the bursts and is the regime of the SOC state of the magnetosphere. The high frequency regime is associated with minutes to hours time scales of magnetosphere dynamics, which are those associated with magnetic storms and substorms. <cit.> reifies many of the points made by <cit.>, recentering the questions of whether the magnetosphere is an SOC system, what evidence there is, and what can be measured to resolve the hypothesis. They note that magnetohydrodynamic (MHD) modeling is incapable of describing the highly intermittent and multifractal character of the magnetospheric dynamics during magnetic substorms and storms. As a result of the lack of a physical model through which to study these dynamics, there has been perhaps an over-reliance on the most readily available information: the AE index. They capture the background of this reliance well. However, the index remained the best available diagnostic, and they used it to extend the statistics of <cit.> examining the burst size power and burst lifetime distributions of AE, with the goal of discriminating the impulsive dissipative events in the AE index from the enhancements that result from convection (the directly-driven and unloading modes of the magnetosphere <cit.>).
They found power law scaling in the power distribution functions of the form D(x) ≈ x^-τ with exponents 1.0 ≤τ≤ 1.5 (τ = [1.35 ± 0.06] and τ = [1.5 ± 0.1] for the burst size and lifetime distributions, respectively). These relationships were shown to hold over four and two decades, respectively, falling off only once the magnitude of the burst size or lifetime exceeds points where the method is likely able to distinguish between the impulsive unloading and the convective dynamical modes of the AE index. The main result of the analyses is that the AE index time behavior points to the possible occurrence of criticality in the Earth’s magnetospheric dynamics, again meaning that no characteristic scale or time represents the magnetotail dynamics and that scale-free behavior is instead exhibited. These findings support previous ones (e.g., <cit.>) that the magnetotail may be an open, dissipative dynamical system at a critical state. Further, their analysis of the power spectral density (PSD) function of the AE index data revealed two distinct regions characterized by scaling exponents of -2 at high frequencies and -1 at low frequencies with a spectral break at f∼70 μHz. To clarify the origin of the 1/f-noise region at lower frequencies, the authors explore the relationship between the AE index and simultaneous solar wind parameters, attempting to disentangle how the solar wind behavior may be driving the magnetospheric response from the behavior of the magnetosphere itself as an SOC system. The work to understand the relationship between the solar wind parameters and the auroral indices is more conclusively taken up in <cit.>, reviewed below. A final word of significance from this study: they issued a key warning that is prescient of future directions in Complexity Heliophysics: “to find scale-free distribution functions does not mean that the system is in a self-organized critical state. As a matter of fact, while SOC systems display scale-free distribution functions, many other physical mechanisms may produce scale-invariant distributions. In order to address this issue, we must investigate in great detail the physical mechanisms of this scale-free avalanche process in the magnetotail dynamics.” Tsurutani and Consolini et al. provided strong evidence through broken power law forms of the power spectrum of AE and then its burst lifetime and size distribution that the magnetosphere behaves as a system near its critical point. If the magnetosphere is a system near its critical point, it challenges the ability to predict its evolution as those dynamics are random. However, a system near its critical point may be confined to a sub-space characterized by a few dimensions and therefore could be well represented by a few parameters. <cit.> review the concepts and mathematical techniques for examining the deterministic chaos of low-dimensional nonlinear systems with fractal characteristics for the magnetosphere. Chang begins from the idea that systems near critical configurations may exhibit low dimensionality: a dynamical system connected to a reduced number of relevant parameters. In the Heliophysics context this is a possible framework for the explanation of bursty bulk flows, low-dimensionality, and power law magnetic field spectra in the magnetotail, postulating the magnetotail to be an open, dissipative dynamical system near “forced- and/or self-organized criticality” (FSOC) <cit.>.
Readers will recognize their postulate–it is a foundation of the arguments of <cit.> (reviewed at the end of Section <ref> above) in describing the magnetosphere using SOC. The magnetotail plasma being near the point of criticality and causing a substorm onset is like a fluid being at the critical point for equilibrium liquid/gas phase transitions–both are FSOC systems. They first emphasize that the magnetosphere is inherently multiscale and place focus on the mathematical tools to address the interplay of the kinetic, intermediate, and magnetohydrodynamic (MHD) scale fluctuations. In describing the merging of coherent magnetic structures in the magnetotail, they were perhaps the first to suggest that this process is the explanation of “bursty bulk flows,” a topic that remains an active area of research and more recent work seems to corroborate <cit.>. Of further note is that they open the possibility that the magnetic reconnection events leading to bursty bulk flows likely come from many if not all of the suggested microscopic instability mechanisms such as the collisionless tearing instability, or the cross-field two-stream instability. In such nonlinear systems there are likely overlapping and interacting mechanisms driving observed behavior and suggest multiple explanations rather than attribution to a single cause. Complexity thinking is open to multiple explanations over the traditional central and singular explanation. The Chang paper is a good introduction to the nontraditional mathematical techniques of dynamic merging of coherent structures, nonclassical nonlinear instability, path integrals, the theory of the renormalization-group, low-dimensional chaos, self-similarity and scaling, fractals, coarse-grained helicity and symmetry breaking and an excellent complement to this review. An important contribution is the introduction of powerful techniques for quantitatively and computationally studying dynamical systems far from equilibrium, such as the renormalization-group transformation procedure. Each of the ideas in the paper play an important role in Complexity Heliophysics. Coarse-graining (the representation of a physical system in which some of the fine-grained structure has been smoothed over without introducing external details, or in other words remaining true to the microscopic details[<https://www.edge.org/response-detail/27162>]), especially, emerges from this review as a central concept in Complexity Heliophysics. A rigorous understanding of coarse-graining is important to build compact representations of a system and thus to bridge the gap from complexity research in Heliophysics to decision-making based on the knowledge, which we further develop in Section <ref>. Subsequent to the knowledge of broken power laws in the auroral indices and the enumeration of the techniques to study them, particularly in defining measures of energy bursts <cit.>, were numerous studies that identified additional characteristics or anomalies in the distributions. Power laws still governed the distributions of burst magnitude and duration, but Consolini (in submitted work in the year 1999 that was not published entitled “Avalanches, scaling and 1/f noise in magnetospheric dynamics”) identified small `bumps' with characteristic values of magnitude and duration and suggested that a better fit to these distributions was a power law with an exponential cut-off plus a lognormal distribution. 
<cit.>, studying AU and |AL| data 1978-1988 paired with additional observations from the WIND satellite's <cit.> particle and magnetic field instruments between January 1995 and December 1998, showed a power law component of the burst lifetime distribution (P(T)) in two measures of solar wind-magnetosphere coupling, Akasofu’s epsilon and velocity*southward IMF (vB_s), but absent a bump. This close correspondence is illustrated in Figure <ref>, comparing curves for AU, |AL|, vB_s, and ϵ. They found that AU and |AL| distributions were fit well by power law with exponential cut-off plus lognormal distributions, and that there was no evidence of the log normal component in the solar wind variables (vB_s and ϵ). They examined each component of the AU and |AL| distributions and described the physical implication. First for the power law component: The power law exponent of the solar wind variables did not significantly differ from those of the AU and |AL| indices. This similarity of the input component (solar wind) to the output component (AU and |AL|) points to the system being `directly driven' (quasilinear relationship) by the solar wind at short (∼20 minutes) time lags. The similarity between AU and |AL| points to the fact that this component of the magnetospheric output acts throughout the auroral oval because the AU and AL indices rely on contributing magnetometers from different local times. Finally, because of similarity to the solar wind and the global distribution of these relationships, this power law burst lifetime component in the AE indices may be attributable to the Disturbance Polar type 2 (DP2) convection electrojets <cit.>. Next, the lognormal component: It was most prominent in the |AL| index for which the contributing magnetometers are concentrated in the post-midnight sector and acted over the characteristic magnetospheric timescale of 2-5 hours. Both were considered evidence that the lognormal lifetime component is the substorm unloading component associated with the DP1 electrojet, the `unloading' current system <cit.>. This component is not scale-free, but rather does have a characteristic timescale (2-5 hours). The work of Freeman et al. extended the exploration of the connection between the solar wind driver and magnetospheric response distributions (i.e., relating the spectral density of the output of the magnetosphere (e.g., AU and AE) to the input/driver (e.g., vB_s)) as a means to understand the magnetosphere's dynamical nature. A key conclusion was that the scale-free burst lifetime of AE is not conclusive evidence that the magnetosphere is an SOC system, and that additional observations are needed. This motivated work that utilized network analysis and imagery data to attempt to unravel what could not be unraveled from the time series alone. Their key comment that would lead to seminal work in the following years is, “...whilst scale-free behaviour in the system output is a feature of SOC systems, recent SOC models have been developed in which the scale-free behaviour is in the local or internal system output and not in the global or system wide output measured by the AE indices [Chapman et al., 1998] <cit.>. 
Thus in order to assess whether these models are an appropriate description of the magnetosphere, attention should turn to other observables that include spatially localised as well as global phenomena.” <cit.> took up the uncertainty in many of the previous authors' minds about the inability of the AE index to unambiguously determine the type of dynamical system that characterizes the magnetospheric behavior. They identify three main classes of `SOC' system relevant to the magnetosphere: 1) the original definition of Bak et al.; 2) forced SOC (F-SOC; <cit.>); and 3) a phenomenological definition based on observation of some or all of a set of possible SOC diagnostics (e.g., bursty time series, 1/f power spectra, avalanche distributions). Responding to previous observational evidence and inquiry, they ask how one can reconcile the low dimensionality observed in the magnetosphere with the observed robust bursty evolution. Key is the knowledge that systems at criticality can also be low dimensional, and that therefore SOC as well as competing explanations for these systems must be distinguished by other means than dimensionality. For instance, avalanche models (strictly SOC models) must have fixed points around which the low dimensionality is observed. They argue that the complication stems from distinguishing SOC from SOC-like: “it is critical to understand to what extent measures of the system dynamics such as auroral indices also measure the solar wind driver directly and hence to quantify their appropriateness for such studies.” Observations of systems that appear to exhibit SOC through some parameters used to proxy the state (e.g., the AE indices) may not be able to distinguish a system with an internal attractor (SOC) from one that is driven to some state (F-SOC or SOC-like). The authors attempt to establish whether the idealized SOC state is in fact needed to account for the observed burstiness, self-similarity and low dimensionality associated with magnetospheric dynamics. In exploring this distinction they elucidate the descriptions available for turbulent and other high-dimensional systems related to the different classes of SOC description. These are numerical models available to study avalanching and intermittency, including avalanche models and Coupled Map Lattice (CML) <cit.>, which is an approach that consists of decomposing the processes underlying the phenomena of interest into potentially nonlinear independent components (e.g., convection, diffusion), and then reducing each of these to simple parallel dynamics on a lattice. They present one result in an attempt to explain how systems at criticality can also be low dimensional, an attempt to understand the fact that observational evidence exists for low dimensionality in the dynamic magnetosphere while bursty evolution is also robust. Importantly, avalanche (sandpile) models, “...have robust emergent phenomenology that produces bursty time evolution with power law burst statistics as required but these systems are by construction high dimensional, in the same sense as CML. If in addition these systems exhibit fixed points, then close to the fixed points, that is, close to criticality, the behavior is low dimensional.” So for avalanche models to explain the magnetospheric observations they must have fixed points.
They then demonstrate that avalanche models can be altered to exhibit low-dimensional behavior by introducing a `fluidisation parameter,' L_f, which is a fixed distance behind the leading edge of an avalanche that is flattened back, effectively moving the system away from a repulsive fixed point. Figure <ref> is a reproduction of their Figure 1 that reveals behavior of an avalanche model with varying L_f. Their model is one in which sand is redistributed when a critical gradient is exceeded locally. Redistribution in their model occurs across all sites within an ongoing avalanche by construction. The fluidisation parameter has the effect of flattening back the sand behind the leading edge of an ongoing avalanche for a fixed distance, L_f. They illustrate the central point that, under certain conditions, an originally high-dimensional sandpile model can exhibit low dimensional dynamics. When L_f is on the order of the system size, the behavior is that of the sandpile model: evolution is bursty and burst statistics are power law (Figure <ref>a). Reducing L_f significantly, the evolution becomes quasiregular, exhibiting a distinct loading-unloading cycle (Figure <ref>b). Statistics in this case are power law only over a restricted range. Thus, by changing certain conditions of the avalanche model, the high-dimensional sandpile model can exhibit low-dimensional dynamics. The implication for systems, the magnetosphere for instance, is that low dimensionality can signify a system close to criticality or certain classes of avalanching systems whose specific parameters produce either intermittent or quasiregular time evolution. The complication of distinguishing SOC from SOC-like reveals the importance of understanding to what extent measures of the system dynamics such as auroral indices also measure the solar wind driver directly. Observations that appear to exhibit SOC through, e.g., the AE indices as proxies for the state may not be able to distinguish a system with an internal attractor (SOC) from one that is driven to some state (Forced-SOC or SOC-like). Chapman et al. ultimately conclude that auroral indices are not effective at distinguishing the internal dynamics of the magnetosphere from that of the intermittent solar wind driver. The statement from the piece that resounds across Complexity Heliophysics is, “Of principal concern in the magnetosphere is the variability of the driver and the extent to which any given observable yields the output of the system, the system's internal dynamics, or a mix of these with the driver superimposed.” Raising the implications of this paper to a broader level, the authors write that dealing with real observations exhibits complications that one must be aware of in studying the phenomenology of SOC for a given dynamical system (e.g., the magnetosphere). Encompassing works surrounding the beginning of the new millennium, <cit.> called SOC a new paradigm for magnetospheric understanding, implicitly naming power law analyses a vital diagnostic. Thus, numerous studies identify power law behavior as evidence of self-organized criticality in the internal magnetospheric dynamics (especially in connection with phenomena like substorms based on the AE index spectrum), though `the observed characteristics of the spectrum are also amenable to alternative interpretations' <cit.>, as will be seen in <cit.> and <cit.>.
The openness of interpretation, the advent of computational capability, and the availability of observations from new missions and sensors spurred investigation of potentially richer and less ambiguous data, such as imagery. §.§ From time series to imagery Some studies have looked beyond the AE indices and their intrinsic limitations for observables/data to understand the magnetospheric, and ionospheric, complexity. A predominant source of observation for magnetospheric output is auroral optical activity or imagery. Imagery is a more direct measure of energy output from the magnetosphere than ground-based indices <cit.>. The intensity, color, and location of the aurora contain information about the magnetospheric particles that cause them. The size, shape, and extent of the auroral region enable inference about the size and shape of the magnetosphere and the fluctuations in the solar wind driver. These data provide capabilities that indices or in-situ observations do not, e.g., observing over large spatial areas of the high latitudes, but also require additional care in preparing and interpreting the data as we will see. There is a rich and wide literature on the use of auroral imagery to study the magnetosphere and geospace. We will focus on those studies that have used these data to inquire about the nature of magnetospheric dynamics specifically. <cit.> is one of the early examples of examining auroral imagery data to infer the complex adaptive behavior of the magnetosphere. Motivating their study was the seemingly distinct behavior of internal high-dimensional plasmasheet dynamics (e.g., burstiness) that drive small-scale auroral structures and of global dynamics (storms and substorms), and the fact that the causal dynamics of the two modes are not understood. A number of studies demonstrated that these modes are inherently connected <cit.>. Principal among them, and already discussed above, is <cit.>, who discussed forced self-organized criticality whereby global dynamics can be a consequence of high-dimensional SOC behavior and the high-dimensional plasmasheet behavior (driving small-scale aurora) can be an artifact of the global loading-unloading magnetospheric dynamics. Lui et al. attempted to use auroral imagery to monitor the total energy output of the magnetosphere across scales (small, auroral arc-like, and global). They produced the first probability distributions of the power and spatial size of the magnetospheric energy output across scales from auroral emission regions. To determine the distribution of power and sizes of these auroral activity regions they identified individual auroral blobs in the ultraviolet (UV) imager on the Polar spacecraft <cit.>, calculated the power in the intensity of the blob in the image <cit.> and the area of each blob, and compiled these values into statistics. Each image was manually inspected and classified as substorm or non-substorm to separately examine the statistics for these global and non-global cases. The distributions are shown in Figure <ref> (Figure 3 reproduced from <cit.>). Quiet time distributions (left column in Figure <ref>) display power law behavior across roughly four decades of dissipation size and power. Similar power laws (with slopes matching those of the quiet times within uncertainties) are found for the substorm intervals, but a peak above ∼10^5 km^2 and ∼5×10^8 W exists in the size and power, respectively.
They interpret these power law regions to mean that there is an ever-present component of auroral activity which exhibits the scale-free behavior of an avalanche system and that this behavior exists regardless of the presence or absence of substorms. They interpret the behavior that is independent of the level of activity as `bursty, internal (localized) relaxations of the system.' On the other hand, the peaks noticed in the substorm intervals, but not in quiet times, are interpreted as global reconfigurations. Put another way, their interpretation was that there was a scale-free (power law) component of auroral activity that was always present (i.e., regardless of whether or not there is substorm activity), but that global events during substorm intervals superimpose on the scale-free behavior well-defined peaks in emitted power and size of emission regions. Their ultimate conclusion was that the magnetosphere acts as an avalanche system. There remained questions about the peaks found by Lui et al. Prominent among them was whether the avalanching magnetospheric system could exhibit power law (scale-free) behavior in the energy due to internal relaxations/burstiness while having a characteristic mean in the energy released with global reconfiguration that scales with the system size (e.g., the global extent of magnetospheric activity) <cit.>. <cit.> took up those questions and the idea that, “understanding the complexity in the magnetospheric behavior associated with critical phenomena appears to be necessary for a correct description of geomagnetic activity as a response to the solar wind driver,” suspecting that the absence of the temporal dimension might have contributed to the strange peaks. They extended Lui et al.'s method from a static spatial analysis to a spatiotemporal analysis using the same UV Imager data from Polar. This spatiotemporal representation was found to be vital to describe SOC dynamics in a strongly driven system <cit.>. It begs the question of why the spatiotemporal perspective is necessary? In avalanche models the main characteristics of an avalanche are its size and energy. These characteristics are found by integrating over both spatial and temporal coordinates. While in theoretical or laboratory avalanche experiments the driving can be closely controlled, this is not true of forcing in the real world. With a natural and uncontrollable source, the task to verify power law statistics requires more elaborate techniques to identify individual events. They cite two limiting cases permitting the use of laboratory SOC inference techniques (i.e., separating temporal and spatial domains) to physical systems: 1) the avalanching event lifetime is much shorter than the sampling time of the dataset and spatial propagation characteristics of the event are well known, then a purely spatial analysis is warranted; and 2) in the absence of spatial information but one can be certain that there are not multiple avalanches evolving simultaneously, then avalanche distributions can be calculated from time series of the output characteristics of the system. Neither case is true for the magnetosphere and auroral emissions. For instance, the low driving rate condition in effect requires that only one reconnection site in the plasmasheet be active at any one time given that the frequency of reconnection in the plasmasheet is high relative to the driving of the magnetosphere. 
However, it is well-established that there are often multiple reconnection sites over an extended spatial region <cit.>. This leads to a key statement of the text, “Therefore, the low driving rate assumption is not satisfied in the magnetosphere and so the results of previously reported time series analyses related to the hypothesis of SOC in the magnetosphere <cit.> are insufficient for obtaining correct avalanche distributions in terms of a rigorous SOC approach. Moreover, since the lifetime of many auroral activations is longer than the sampling time of the Polar UVI image series, the static spatial analysis reported by Lui et al. [2000] <cit.> is also inappropriate.” The key to their spatiotemporal approach is to identify and distinguish multiple simultaneous avalanches (auroral emisions) and calculate their individual properties <cit.>. They analyzed more than 30,000 Polar UVI images from January–February 1997 and January–February 1998 and largely underwent a similar preprocessing as the Lui et al. data. The sampling time of the images was 184 seconds, also including a period where the sampling occurred at a higher rate (37 seconds). They compiled statistics for active auroral regions that persisted longer than the sampling time and thus across two or more consecutive images, tracking that region through the images up to five hours. They treated events that split but had a unique source as a single event, and merged events with spatially distinct sources as separate events, following <cit.>. For each event, they calculated lifetime T, integrated size S, and integrated energy E as well as maximum active surface area A and maximum energy deposition rate W. Figure <ref> reproduces their principal result. The central finding is that, across UV images and a variety of solar wind conditions, they found no characteristic time, size, or energy scales within the entire available range of studied parameters. The auroral events exhibited well-defined power law statistics over a broad range of scales. They did not find the distribution peaks that Lui et al. reported, concluding that the peaks are an artifact of an incomplete avalanche detection methodology that missed the temporal component. Thus, auroral emissions exhibit statistical properties of avalanches in SOC models. In relating the magnetosphere dynamics through auroral observation and the behavior they observed, they were able to draw analogies to the avalanche model itself: auroral activations are the avalanches while reconnection events in the plasmasheet are the internal dissipation of SOC models. In addition to addressing methodological issues that may have produced the peaks in Lui et al., Uritsky et al. contributed several other key findings, including: * The power law distribution, and thus SOC-like behavior, was exhibited over many orders of magnitude for the duration, power, and size of auroral activations and therefore the magnetosphere behaved as an SOC system across wide ranges of geomagnetic activity; * The dynamics comprising all levels of magnetospheric activity remains scale invariant; and * The magnetosphere as represented by the spatiotemporal evolution of auroral emissions operates in a self-organized state. The dynamics of auroral perturbations corresponds well to avalanche dynamics at criticality. 
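The spatiotemporal identification procedure described above can be sketched with a three-dimensional connected-component labeling of an image stack, in which activations that touch in space and in consecutive frames are treated as a single space-time event, and a lifetime, integrated size, and integrated emission are accumulated per event. The thresholds, cadence, and synthetic image stack below are illustrative assumptions, and the actual tracking rules of the study (e.g., the handling of splitting and merging activations) are more elaborate than this sketch.

```python
import numpy as np
from scipy import ndimage

def spatiotemporal_events(frames, threshold, dt=184.0, pixel_area=1.0):
    """Label activations connected in space and time; return lifetime, size, emission."""
    active = frames > threshold
    # A full 3x3x3 structuring element links neighbouring pixels within a frame
    # and across adjacent frames, so each label is one space-time avalanche.
    structure = np.ones((3, 3, 3), dtype=bool)
    labels, n_events = ndimage.label(active, structure=structure)
    lifetimes, sizes, emissions = [], [], []
    for k in range(1, n_events + 1):
        mask = labels == k
        t_idx = np.nonzero(mask.any(axis=(1, 2)))[0]
        lifetimes.append((t_idx.max() - t_idx.min() + 1) * dt)
        sizes.append(mask.sum() * pixel_area * dt)            # time-integrated active area
        emissions.append(frames[mask].sum() * pixel_area * dt)
    return np.array(lifetimes), np.array(sizes), np.array(emissions)

# Illustrative stack of 100 frames of 64x64 'luminosity' values.
rng = np.random.default_rng(3)
frames = rng.gamma(shape=2.0, scale=1.0, size=(100, 64, 64))
T, S, E = spatiotemporal_events(frames, threshold=8.0)
```

Compiling log-log histograms of the returned lifetimes, sizes, and emissions is the analogue of the avalanche distributions discussed above, with the space-time labeling step being what distinguishes this approach from a purely spatial, per-image analysis.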
The lasting discovery is that one can expect cross-scale coupling effects to play a significant, if not crucial, role in the development of geomagnetic disturbances and that large-scale properties of the magnetotail plasma sheet depend critically on the statistical hierarchy of small- and intermediate-scale perturbations associated with sporadic localized magnetic reconnections, current sheet disruptions, and other localized plasma instabilities <cit.> A question raised by this work for the community is whether statistics of mesoscale magnetosphere simulations match observed statistics from the SOC paradigm. A general comment can be made from the Lui-to-Uritsky development: the spatiotemporal domain is required for identifying SOC dynamics in a strongly driven system. To make appropriate comparison, identification of the auroral activations had to be conducted in spatiotemporal space. Uritsky et al. found that the spatial-only analysis of Lui et al. produced `bumps' in the power law distribution that were entirely methodological and were incorrectly interpreted to be unique behavior during periods of high perturbation. In order to truly understand the system, data across the full spectrum of system activity were critical. Following the demonstration of the importance of the spatiotemporal perspective in concert with the advent of imaging platforms, Complexity Heliophysics began to use imagery data more regularly, a trend that persists into the 2020s and is expected to continue <cit.>. Indeed the NASA mission intended to resolve longstanding fundamental questions about the nature of substorms, the Time History of Events and Macroscale Interactions during Substorms (THEMIS), included All Sky Imagers (ASIs) in the ground observatories that accompanied the magnetospheric spacecraft <cit.>. Imagery, as well as multi-modal observational systems, will continue to be a vital component of unraveling the complexity of the solar wind-magnetosphere-ionosphere system, and the capabilities of these systems will grow (e.g., see the University of Calgary's Transition Region Explorer (TREx) sensor web <cit.>). Of course, imagery data have relatively shorter histories and pose their own processing and analysis challenges, so time series analyses remain important sources of new complexity studies <cit.>. Prominent among the continuing body of research are results related to the complexity of the geospace system made possible by the Swarm mission, which we do not provide a detailed review of in this manuscript but point readers to several important works <cit.>. As far as processing challenges, more recently groups have been bringing tools from artificial intelligence and machine learning (AI/ML) to bear (on auroral imagery <cit.> and solar imagery <cit.>). Section <ref> discusses the intersection between complexity science and AI/ML, positioning it as a key challenge for 21st century Heliophysics and indeed all of science. § EMERGING LITERATURE: TOPICS AND TRENDS The following section examines emerging literature (largely drawn from publications after the year 2010) and extracts topics and trends. These perceived topics and trends are organized by section. This section is a departure from the previous ones in that along with the literature review, subjective assessment of areas that might be important to Heliophysics in the coming years are provided. These could be interpreted as predictions and should thus be treated with a degree of uncertainty or openness to interpretation. 
We also draw extensively from complexity science literature outside of the field of Heliophysics to establish trends.

§.§ Metrics and Diagnostics of Complexity

The first observation is that the complexity paradigm in Heliophysics is shared across disciplines (physics in general, biology, the social sciences, etc.), and understanding the topics common to each of these versions of the complexity paradigm informs future research avenues for each of them. The first such topic has to do with how we quantify and make complexity legible. The means of measuring complexity is a fitting place to begin this section, as it calls back to themes of the literature review above and points toward the trends identified below. There is no single metric of complexity, just as there is no single metric suitable to understand the capability of a model. <cit.> gives three dimensions along which to measure complexity:
* How hard is it to describe?
* How hard is it to create?
* What is its degree of organization?
Many metrics have been proposed for assessing and quantifying a complex system (see <cit.> for an informative, but non-exhaustive, enumeration), and the list continues to grow. A complex system is by definition multi-faceted; no single measure could describe it adequately. Complexity pioneer and Nobel laureate physicist Murray Gell-Mann noted as much: “A variety of different measures would be required to capture all our intuitive ideas about what is meant by complexity” <cit.>. Complexity is a collection of features, not a single phenomenon <cit.>. Therefore, a science of complexity must understand the various metrics available to quantify aspects of complexity, along with their capabilities and shortcomings. We have already encountered numerous measures of complexity in the literature review above, namely self-organization and power laws. Here we will review several other measures, filtered for inclusion based on their relevance to Heliophysics. The literature suggests that the degree-of-organization dimension perhaps dominates in this domain. We will not address the more computational/computer science metrics such as logical depth and algorithmic complexity. <cit.> (Chapter 7) provides a more general development of the topic of complexity metrics. The most basic complexity metric is numerosity, or counting entities and the interactions between them. Numerosity is common to all science. Next are measures of order/disorder in a system. Disorder is mathematically represented through probability distributions and their measures of dispersion, such as variance. Related to these measures of disorder, and one of the important core measures of complexity science, is Shannon entropy, measuring the amount of uncertainty in a probability distribution <cit.>: H(X) ≡ - ∑_x ∈𝒳 P(x) log P(x), where X is a random variable with probability distribution P over events x. Shannon entropy quantifies the difficulty of predicting an actual outcome given possible outcomes (x) or, in a temporal context, of predicting future outcomes given past events (x). Shannon entropy is inextricable from uncertainty quantification. In some domains, `diversity' is used as opposed to disorder. Shannon entropy is a part of most measures of diversity in these domains (e.g., ecology) <cit.>. Numerosity and entropy allude to statistical physics, more generally, as an approach to quantifying complexity <cit.>.
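As a concrete illustration of the entropy measure defined above, the short sketch below computes the Shannon entropy of an empirical distribution estimated from a discretized time series (for example, a binned geomagnetic index). It is a minimal, assumption-laden example: the binning choices and the variable names are illustrative, not a prescription.

```python
import numpy as np

def shannon_entropy(samples, bins=16, value_range=(-4.0, 4.0)):
    """Shannon entropy (in bits) of the empirical distribution of `samples`.

    The series is discretized into equal-width bins over a fixed range so that
    different series are compared on the same support; the entropy is
    H = -sum_i p_i * log2(p_i) over occupied bins.
    """
    counts, _ = np.histogram(samples, bins=bins, range=value_range)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# A narrowly peaked signal is more predictable (lower entropy) than a broadly
# distributed one, even with the same number of samples.
rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 0.1, 10_000)       # low-variability, "quiet" interval
disturbed = rng.normal(0.0, 1.0, 10_000)   # high-variability, "disturbed" interval
print(shannon_entropy(quiet), shannon_entropy(disturbed))
```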
The next feature of complex systems that requires measurement is feedback. Feedbacks are loops of interactions across a system; they are created when a change in a component of a system affects the rate of change of that same component <cit.>. There is no single, direct measure of feedback. However, feedbacks support the persistence or disappearance of a behavior over time, such that their signatures are present in the system. In the computational fields, feedbacks are defined by the outputs of a process being fed back into an input of the same process. Feedbacks reveal themselves in physical systems through patterns, structures, and nonlinearities, such that the measures used to quantify the effects of feedback are those of structure formation and nonlinearity. Fractals themselves, well studied in Heliophysics as indicated by the volume of publications on their statistical signature–the power law–in the solar-terrestrial system, are the result of repeating a simple process over and over in a feedback loop. A widely used computational tool for studying feedback is the agent-based model, a simulation of collections of `agents', or entities of the system, in which the agents follow certain rules for interaction and their evolution is studied. The closest examples in Heliophysics are test particle simulations (e.g., <cit.>). Attempts to understand the societal impact of Heliophysics research, i.e., space weather, might be an arena for agent-based models (ABMs) of human behavior that could help assess preparedness to respond to space weather storms, especially in the context of potential compounding effects such as terrestrial weather or related system failures <cit.>. This would be one way to integrate Heliophysics systems models with human behavior models. ABMs, capaciously defined, are an intriguing future direction for Heliophysics and the space weather sciences. The results of ABMs are multidimensional data characterized by interactions between agents. Their structure is inherently a graph or a network. Therefore, making sense of those data depends on graph theory, which enjoys well-defined and mathematically rigorous formalisms. Section <ref> introduces graphs and discusses important quantitative measures used to understand them. In short, graph theory provides a way to study the geometry and evolution of a network through centrality measures, community structure, and modularity <cit.>. These measures offer deeper insight into a system and should be considered core metrics of complexity. Finally, among the most essential ideas in complexity science is self-organization: the idea that order can spontaneously arise from many uncoordinated interactions <cit.>. Self-organization is measured through the order of the system. This is perhaps the measure best explored in Heliophysics to this point. Correlation measures, from linear Pearson correlation to covariance to information theoretic calculations like mutual information and transfer entropy, reveal order in a system. Nonlinear order is often studied with power law relationships. A property of complex systems that has not been as widely measured is robustness and resilience. Robustness refers to the ability of a system to maintain its structure or function in the presence of perturbation. Without the requirement that structure be maintained, resilience refers to the property of a system to accommodate changes and reorganize itself while maintaining the crucial attributes that give the system its unique characteristics <cit.>.
Tools to study robustness and resilience include dynamical systems theory and theories of phase transitions <cit.>, such as stability analysis <cit.>, critical slowing down <cit.>, and tipping points <cit.>. Resilience offers a way for decisions to be made based on complex systems understanding, perhaps permitting a new framework for bridging research and operations in Heliophysics and Space Weather. The measures identified above have in some manner been used in Heliophysics. However, the metrology of complexity is a young field, and therefore rapidly changing <cit.>. Paying attention to the development of new complexity metrics might lead our research to tools for better representing the phenomena we observe. The lesson from this brief enumeration of measures of complexity is that the multi-feature nature of Heliophysics now requires multi-faceted approaches to measurement and evaluation. The geospace community has more recently recognized this, advocating <cit.> and providing frameworks <cit.> for more robust evaluation of our models across numerous metrics and levels. This review suggests that a similar approach must be taken for future work to quantitatively evaluate complexity in the system.

§.§ Coarse-Graining

<cit.> describe `coherent structures' as a characteristic of dynamical complexity: phenomena that result from the nonlinear interactions of their constituent parts and are dramatically different from the behavior of those parts. The truism alluded to is that the whole is more than the sum of the parts. What the authors describe explicitly is present across the Complexity Heliophysics literature, albeit often implicitly and unnamed: there is a process of `coarse-graining' to identify the relevant macrostates of a system. <cit.> defines a coarse-grained description of a system as one in which some of the fine microscopic behavior of a physical system has been smoothed over. She emphasizes that this is a principled smoothing, not an arbitrary reduction of granularity, but one based on whether information remains important to the descriptive or predictive task at hand and that does not introduce outside information. She writes that this is a `lossy, but true' process. Coarse-graining is a process of integrating over parts; it results in a compact representation of a system and provides the basis for an effective theory. The preeminent example is temperature: a macroscopic description of a fluid that is an integration of the microscopic behavior of particles. Averages are but one method of coarse-graining; there are many more, some much more complicated. The concept of coarse-graining is relevant across domains. In physics, renormalization theory is one remarkable example <cit.>. In biology, coarse-graining has been used in molecular modeling of biomolecules since as early as the 1970s <cit.>. Even in art it has a long history of drawing attention to the scale at which one witnesses the world. The artist Piet Mondrian painted series of representations of trees, displaying the trees as increasingly geometric and abstract until the tree itself could scarcely be recognized <cit.>. His work, like representations in science, explores the inherent patterning and ordering of nature. The prevalence of coarse-graining in science and society suggests that this process plays an essential role and that a review of complexity in any domain of science should address it.
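A trivially simple instance of coarse-graining, offered here only to make the idea concrete, is block-averaging a finely sampled signal into coarser cells; the "macrostate" retained in each cell is the mean, and the fluctuations within the cell are deliberately smoothed away. The example below is a sketch with illustrative variable names, not a claim about any particular dataset.

```python
import numpy as np

def block_average(signal, block):
    """Coarse-grain a 1-D signal by averaging over non-overlapping blocks.

    Samples that do not fill a final block are discarded, so the output
    length is len(signal) // block. The mean is the retained macro-variable;
    within-block fluctuations are smoothed over (a lossy but principled
    reduction of the description).
    """
    n = (len(signal) // block) * block
    return np.asarray(signal[:n], dtype=float).reshape(-1, block).mean(axis=1)

# Example: a 1-second cadence series coarse-grained to 1-minute resolution
fine = np.random.default_rng(2).normal(size=3600)   # stand-in for a measured series
coarse = block_average(fine, block=60)
print(len(fine), "->", len(coarse), "samples")
```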
Despite the utility of macroscopic effective theories like thermodynamics and statistical physics, there exists a long-standing debate in physics and biology about the right level at which to describe a system, or how far `down' we need to go. Associated with this debate are questions related to the mappings between microscopic and macroscopic states. Though much of this debate has been staged in fundamental physics and biological domains, it speaks to unresolved issues in Heliophysics <cit.>, conversations punctuated by a perceived need to reconcile particle-level behavior with system-level phenomena and multiscale quandaries. The debate has shown up in contrasting modeling approaches, e.g., particle-in-cell and magnetohydrodynamic ways of describing the solar and magnetospheric systems. On the multiscale understanding side, questions abound about the `right' level at which to look at the system and how to study the relationships between scales. The literature suggests that what is needed are hybrid models (e.g., note the progress made in unifying particle and magnetohydrodynamic modeling <cit.>) and new methods for conducting multiscale analyses (e.g., the advances made in comparing scales and studying relationships between scales <cit.>). It is quite possible that multiscale understanding and ideas about the right level at which to represent the Heliophysics system will bear on the outstanding desire to bridge the gap between research and operations <cit.>. Research and operations function at different scales. Where research may need to look at the finest scales technologically possible to advance the boundaries of knowledge, operations requires an efficient representation of knowledge adequate to make a decision. In some sense, the research-to-operations (and operations-to-research) gap is the problem of developing an effective theory, a compact representation that permits moving between scales. Presaging a discussion that concludes this review, this is the same tension that exists between scientific understanding and prediction. These separate regimes of knowledge discovery dictate concomitant separate regimes of representation. In Section <ref> we center this discussion in terms of fundamental science vs. prediction-oriented science (e.g., basic science vs. applied science; physics-based modeling vs. artificial intelligence/machine learning) and suggest it is inextricable from the future of Heliophysics research. In the sense that coarse-graining captures underlying structure and pattern in complex systems, two forms of coarse-graining are particularly important to Heliophysics: information theory and network science. We discuss them in sequence next.

§.§ Disentangling drivers and parameters amidst nonlinearities: Information theory

Like coarse-graining, information theory is often used to simplify a complex system, to arrive at a more parsimonious description of its functioning. Indeed, in the context of entropy, the two concepts are quite similar. There is a growing body of research within Heliophysics suggesting that information theory is a useful form of coarse-graining for our field. When a system's drivers are nonlinearly correlated, and the parameters of the system that they affect are numerous, it is a challenge to untangle their relative effects.
There are an immense number of frameworks with which to find and investigate causal relations among different time series; information theory has proven powerful for the detection of causal information flows in complex systems. Information theory provides a mathematical framework for quantifying the nonlinear flow of information from drivers to system parameters and between system parameters. It assumes that a given domain of interest can be described by coupled subsystems that interact (exchange information) with one another. Information theoretic approaches then use a number of possible measures to extract the direction of the information flow and to infer causality. Recent literature has revealed promise for information theory to provide observational constraints that can help guide the development of theories and physics-based models, and for feature selection to create more accurate data-driven models <cit.>. <cit.> discusses a method for quantifying the strength and direction of the coupling between the solar wind and the magnetosphere-ionosphere system. The authors introduce to the solar wind-magnetosphere-ionosphere domain a measure of information transfer called the transfer entropy (TE), which is useful for the nonlinear analysis of the relationship between two time series. TE is based on transition probabilities between two random processes, X and Y, obtained by inserting the Markov condition into the conditioned Kullback-Leibler distance such that the information flow from X to Y is accounted for. TE is a quantitative way of determining whether the past history of X is predictive of the future of Y beyond what the past of Y alone provides. TE also provides a means to distinguish bidirectional information flow, thus providing evidence about feedback processes, mentioned above as a difficult thing to measure in complex systems. They show that TE is a useful measure for capturing relationships between the solar wind and magnetosphere. They showed a strong information transfer from the north-south component of the interplanetary magnetic field, B_Z, into the geomagnetic indices, with time delays of about 30 to 60 minutes. Further, they inferred that substorms drive geomagnetic storms from a strong observed information flow from the AE index into the SYM-H index (analogous to the Dst index). Similarly, <cit.> were interested in measures from information theory that could illuminate the relationship between the solar wind and the magnetosphere-ionosphere system. Their measure was Granger causality. They found a tighter temporal relationship between B_Z and the AE index, a delay of only 10 minutes, and a 30-minute delay with SYM-H, commensurate with <cit.>. They, however, did not find any relationship between AE and SYM-H, casting doubt on the claim that substorms might drive geomagnetic storms. In addition to solar wind-magnetosphere coupling applications, two areas of Heliophysics have demonstrated the physical discovery potential of information theory: radiation belt dynamics <cit.> and solar cycle dynamics <cit.>. Researchers in the decade following 2010 have applied the theory to understand the influence and the timing of solar activity on the near-Earth environment, extending its utility to the space weather domain <cit.>.
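To make the TE construction concrete, the sketch below implements a bare-bones plug-in (histogram) estimator of the transfer entropy from a driver series x (e.g., B_Z) to a response series y (e.g., a geomagnetic index), TE_{X→Y} = ∑ p(y_{t+1}, y_t, x_t) log_2 [ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ], with one-step histories. It illustrates the definition only; a research-grade analysis would use longer embedding histories, bias-corrected estimators, and significance testing against surrogate data, and all variable names here are placeholders.

```python
import numpy as np

def transfer_entropy(x, y, lag=1, bins=8):
    """Histogram (plug-in) estimate of TE_{X->Y} in bits, with one-step histories."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))   # symbols in 1..bins+1
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    nstates = bins + 2                                     # safe bound on symbol count
    yf, yp, xp = yd[lag:], yd[:-lag], xd[:-lag]            # y_{t+lag}, y_t, x_t

    def prob(*cols):
        """Joint probability table over the given discretized columns."""
        idx = np.ravel_multi_index(cols, dims=(nstates,) * len(cols))
        counts = np.bincount(idx, minlength=nstates ** len(cols)).astype(float)
        return (counts / counts.sum()).reshape((nstates,) * len(cols))

    p_fpx = prob(yf, yp, xp)   # p(y_{t+lag}, y_t, x_t)
    p_px = prob(yp, xp)        # p(y_t, x_t)
    p_fp = prob(yf, yp)        # p(y_{t+lag}, y_t)
    p_p = prob(yp)             # p(y_t)

    te = 0.0
    for i, j, k in zip(*np.nonzero(p_fpx)):
        num = p_fpx[i, j, k] * p_p[j]
        den = p_px[j, k] * p_fp[i, j]
        if den > 0:
            te += p_fpx[i, j, k] * np.log2(num / den)
    return te

# Illustrative check: y is a noisy, delayed copy of x, so TE(x->y) >> TE(y->x).
rng = np.random.default_rng(3)
x = rng.normal(size=20_000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=x.size)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```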
A caution and a difficulty of information theoretic analyses is that they require robust support in the available data. Care must be exercised in assessing the data in numerous ways, including: 1) data sufficiency: the available data must be statistically representative of the system, sampling the full space; 2) relevant variables: the data must include all relevant variables that affect the transmission and reception of information in the system; and 3) data diversity: the data must be diverse enough to capture the full range of possible transmission scenarios and conditions, which often requires integrating data from multiple platforms and sources. Information theory is promising, but realizing its potential will require Heliophysicists to develop or strengthen certain literacies, such as probability, statistics, noise quantification, and systems science.

§.§ Network Science: A future for Heliophysics and Space Weather

Networks surround us. Network structure, or a high number of dynamic interacting units, characterizes much of our society, from the molecules that constitute biological organisms to communication infrastructure like the internet to the power grid. Networks even describe the structure of how we interact with one another–social networks. We have been grappling with the prevalence of networks in our society for many years <cit.> and have more recently been recognizing their capacity to represent the complex world around us <cit.>. The reason for this is that networks are how complex systems are represented <cit.>. Networks are the lingua franca of complexity, so to speak. The profound importance of this discovery is that network analysis rests on a well-defined domain of discrete mathematics known as graph theory (graph, in this context, is synonymous with network), dating back to Leonhard Euler's solution to the Königsberg bridge problem <cit.>. Thus, if one can figure out how to encode a system as a network, then a robust domain of mathematics can be applied to study it and to discover important properties such as community structures, key nodes in the flow of information in the network, and the type of network, which reveals properties of its functioning (e.g., random networks <cit.>, small-world networks <cit.>, and scale-free networks <cit.>). First, a very brief primer on graphs and networks. We will hereafter use the term network to represent both–indeed they are synonymous, with the only difference being that some communities prefer graph (e.g., mathematics) while others prefer network (e.g., social sciences). Networks show connections between things. The `things' are called nodes or vertices and can represent people, as in a social network, or whatever entity is appropriate for a given domain. The connections are called edges, and they represent a relationship between nodes. In a social network the edges might indicate whether two people know one another. The quantitative means by which one chooses to define nodes and edges is central to the construction of the network. Another key concept is the adjacency matrix. The adjacency matrix is a square matrix with rows and columns corresponding to every node in the network. The corresponding matrix element will be either 1 or 0 according to whether those nodes are connected or not. This is one form of visualizing or understanding the network. Other means of deriving important sub-networks (collections of a few of the nodes of the full network and their connections) and aggregations of the network are available and important given the complexity that most real-world networks display.
The purpose of these aggregations is to permit an understanding of the network topology or geometry. <cit.> provide a review of methods to encode a system as a network. Graph theory applied to complex systems, which possess irregular structure and dynamically evolve over time, gave rise to the term `complex network analysis.' In this review, we use the parsimonious `network analysis' to encompass network studies of any sort. The methods for encoding and subsequently analyzing systems as networks were determined empirically, if nonlinearly and cobbled together from across disciplines, and their development accelerated in the 20th century. The 1920s witnessed a growing sophistication of social network analysis–networks of relationships among social entities (e.g., trade between nations, communication between members of a group) <cit.>. It has found more recent application in biological, engineering, and geophysical systems <cit.>. The power of the network representation is that approaches and understanding from the mathematical field of graph theory can be used to explore and understand the networks. Indeed, network measures exist to define the geometry and evolution of the network. Responding to Section <ref>, network measures are being used as new metrics for complexity, in effect means to aggregate the system in less lossy ways (e.g., better coarse-graining). One could consider three levels of measuring a network: 1) micro: the level of nodes and edges; 2) macro: distributions aggregating quantities; and 3) mesoscale: a large class in between <cit.>. At the micro level, one considers the nodes and edges individually or in small groups. A node's degree is the number of edges it has. At the macro level, one aggregates the entire network and studies statistical distributions that attempt to describe it. Common macro measures include the diameter (the length of the longest geodesic path between any pair of nodes in the network for which a path actually exists), average path length (the average shortest path between all pairs of nodes in the network), degree distribution (the frequency distribution of the degree of the network nodes), and clustering coefficient (the average probability that two neighbors of a vertex are themselves neighbors). Together, these are ways of understanding the geometry of a network: its size, connectivity, efficiency, and homogeneity/heterogeneity. The characteristics of the degree distribution, including its higher order moments, are a fundamental way that networks of different types and behavior are distinguished, and they convey information about the functioning of the system. The mesoscale level covers everything in between. A topic that has received much attention in the network science literature is centrality, the attempt to quantify the most important or central nodes in a network. There are numerous ways to think about and thus to calculate centrality, some quite simple, like degree centrality (looking at the degree of a node with respect to others in the network), and others more involved, considering neighborhoods around a node, such as eigenvector centrality, betweenness centrality, and closeness centrality <cit.>.
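The macro measures just defined are straightforward to compute once a system has been encoded as a network. The sketch below does so for a small random graph using the widely used networkx library; the graph itself is a stand-in, and in a Heliophysics application the nodes and edges would come from observations (e.g., the magnetometer networks discussed below).

```python
import networkx as nx

# Stand-in network: an Erdos-Renyi random graph with 100 nodes
G = nx.erdos_renyi_graph(n=100, p=0.05, seed=4)

degrees = [d for _, d in G.degree()]               # degree of every node
clustering = nx.average_clustering(G)              # mean local clustering coefficient
# Path-based measures are only defined on a connected graph, so restrict to
# the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
avg_path = nx.average_shortest_path_length(giant)  # average geodesic path length
diameter = nx.diameter(giant)                      # longest shortest path

print(f"mean degree: {sum(degrees) / len(degrees):.2f}")
print(f"clustering coefficient: {clustering:.3f}")
print(f"average path length: {avg_path:.2f}, diameter: {diameter}")
```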
Another mesoscale structure that receives much attention is the community: a group of nodes relatively densely connected to each other but sparsely connected to other dense groups in the network <cit.>. Identifying communities in networks has significant implications for making discoveries about systems, though there is no consensus technique for detecting them. It is perhaps in the mesoscale where undiscovered insight into Heliophysics systems and processes lies. Indeed, this is reflected in the Heliophysics network science literature reviewed below. The ability to capture multiscale relationships and behavior using a network structure or representation provides an exciting opportunity to improve multiscale understanding of the Heliophysics system <cit.>. Beginning in the mid-2010s, a few pioneering works began to realize the potential of network analysis for space physics, Heliophysics, and space weather applications. Traditional approaches to systems analyses in Heliophysics and space weather have attempted to track energy flow from the Sun to the Earth or another planet through a collection of time series. This approach does not generalize to multi-event studies nor to extracting statistics. Networks liberate the systems approach from these limitations. They allow one to track energy flow and dynamic changes across a system for multiple events in a principled manner and to quantify them using well-established measures from graph theory and network analysis. Not only do network approaches reveal new parameters by which to understand the system, they enable quantifying the likelihood of those measures, which can become the basis of a risk quantification system <cit.> and, ultimately, of understanding how to create technologies and a society resilient to the threats of space weather. Thus, we believe it important to provide a brief review of those works here. We anticipate that this field will evolve rapidly in the coming years and intend to provide a basis for researchers to become oriented and to trace some of the important history. <cit.> applied network analysis to >200 distributed ground-based magnetometers that are indexed in the Super Magnetometer Initiative (SuperMAG) <cit.>. With a debate already underway in the Heliophysics community about whether ground-based indices such as AE and the Disturbance storm-time (Dst) index, themselves aggregates of small numbers of ground-based magnetometers, were capable measures for understanding magnetospheric dynamics, Dods et al. studied whether a network representation of ground-based magnetometer data can quantitatively extend our qualitative understanding of magnetospheric substorms, creating the first application of network analysis to SuperMAG data and one of the early applications to space weather in general. The observations are vector magnetometer time series data at 1 minute cadence from the SuperMAG database (all stations from the northern hemisphere). Canonical correlation <cit.> was used to establish the similarity between pairs of vector time series as a function of time. To construct the network, they defined magnetometers as nodes, and correlation above a threshold between the vector magnetometer time series from pairs of stations within a running time window defined the edges. Figure <ref> reproduces their visual explanation of the network construction. They investigated four substorms. For each event, they formed dynamical networks of connected stations in magnetic local time (MLT)-MLAT space in the Northern Hemisphere. Importantly, the correlation threshold between any two stations might be different, so they created a global threshold that effectively normalizes the likelihood of being connected to the network.
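The essential construction, stripped of the details specific to the Dods et al. analysis (canonical correlation of vector components, running windows, and per-pair threshold normalization), can be sketched very simply: given a station × time data matrix, connect two stations whenever the correlation of their time series within a window exceeds a threshold. The code below is that bare-bones illustration with synthetic data and illustrative names, not a reproduction of the published pipeline.

```python
import numpy as np
import networkx as nx

def correlation_network(data, station_ids, threshold=0.7):
    """Build an undirected network from an (n_stations, n_times) data matrix.

    Nodes are stations; an edge joins two stations whose Pearson correlation
    over the window exceeds `threshold`. Published analyses normalize the
    threshold per station pair; a single global value is used here for
    simplicity.
    """
    corr = np.corrcoef(data)                  # pairwise Pearson correlations
    G = nx.Graph()
    G.add_nodes_from(station_ids)
    n = len(station_ids)
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                G.add_edge(station_ids[i], station_ids[j], weight=corr[i, j])
    return G

# Synthetic example: 20 "stations" observing a common signal plus local noise
rng = np.random.default_rng(5)
common = rng.normal(size=600)                         # shared disturbance signature
data = common + 0.5 * rng.normal(size=(20, 600))      # station-specific noise
G = correlation_network(data, station_ids=[f"st{i:02d}" for i in range(20)])
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```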
Such per-pair threshold normalization is an important step in network analyses for Heliophysics and Space Weather applications, where observations are distributed and baselines are commonly distinct. They are then able to construct networks for any given event. Networks are reconstructed in the paper for four selected substorm events (defined according to <cit.>) and one steady magnetospheric convection (SMC) event. From each network, dimensionless parameters were obtained that quantitatively characterize the network and, by extension, the spatiotemporal dynamics of the substorm under observation. They found several typical signatures of the isolated substorm:
* Before onset, the network has few connections;
* Connectivity rapidly and clearly responds to the onset, characterized by high-latitude connections, but not without low- and cross-latitude connections;
* In the recovery phase, the connection structure switches from high-latitude dominant to low-latitude dominant; and
* The normalized total number of connections and the average geodesic connection distance (physical distance) of the substorm-period networks are greater than those for the SMC event and much greater than during quiet times.
Thus, they discovered that network responses to substorms, SMCs, and quiet times are quantitatively distinct. They conclude also that their technique may have applicability to other magnetospheric phenomena. That supposition was examined by <cit.>, who applied the same methodology to the response of the quiet time (no substorms or storms) large-scale ionospheric transient equivalent currents to north-south and south-north IMF turnings. The calculation of the correlations between station pairs is identical to <cit.>, but they also map the network onto a regular grid and aggregate the network responses over more than 350 events (between 1998 and 2004) to obtain an averaged response as a function of geomagnetic location (MLT-MLAT) and of the time delay since the occurrence of the IMF north-south and south-north turnings. For both north-south and south-north IMF turnings they examined short-range (station-pair connections with geodesic separation <4000 km) and long-range (>4000 km) connections. Their results indicate that magnetometer correlation network responses are distinct for north-south and south-north turnings and for the sign of the IMF B_Y component. Demonstrating the potential of network analysis, they provided new information on two competing concepts for the reconfiguration of ionospheric currents in response to a change in the north-south component of the IMF: a fast initiation of the transient currents associated with a ubiquitous and near-simultaneous response at high latitudes (e.g., <cit.>), and a gradual reconfiguration spreading from an initial response on the dayside followed more gradually by the nightside (e.g., <cit.>). The network shows a near-simultaneous response (∼8-10 minutes between magnetopause impact and network response), consistent with <cit.>. They discuss tentative evidence for a two-step process: fast initiation of change in the ionospheric equivalent currents between day and night, followed by a more gradual reconfiguration that first appears on the dayside and then the nightside. Using spatial maps of the edges of the network, they found that turnings are associated with increases in connectivity (correlation) in the areas known to be associated with the two-cell convection pattern <cit.>.
Ultimately, this was one of the first studies to reveal that dynamic correlation networks can characterize the spatiotemporal ionospheric response observed in large numbers of ground-based magnetometers. <cit.> advanced the analysis of the spatiotemporal evolution of substorm ionospheric current systems in SuperMAG data with networks by introducing lags in the canonical cross correlation. Considering lags, rather than the zero-lag correlation Dods et al. had used, permitted construction of a directed network that captured not only the formation of coherent patterns observed by magnetometers but also the direction of information propagation of those coherent structures. They used the directed networks to test different proposed mechanisms for how the ionospheric current system evolves during a substorm. To assess the direction of propagation, they divided the nightside auroral ionosphere (18 MLT to 6 MLT, passing through midnight; 60-75^∘ MLAT) into three zones of six hours of MLT each, a typical extent of the substorm current wedge (SCW) <cit.>. They conclude that the magnetic perturbations are consistent with SCW formation during substorm onset and westward expansion into a coherent current system in the premidnight (MLT) sector (see their Figure 2). Subsequently, a coherent correlation pattern emerges that spans the entire nightside ionosphere. <cit.> took the analysis a step further, taking advantage of a wider set of network science techniques to understand the properties of a network. Namely, they studied community structure in the SuperMAG networks, in which a community is defined consistently with <cit.>: an area of a network more densely connected internally than to the rest of the network. They detect communities in SuperMAG networks across 41 isolated substorms with 1-min resolution data. Primary results are illustrated in Figure <ref>. In the networks, multiple discrete current systems exist prior to onset (see Figure <ref>a-d) and progressively transition into a coherent SCW (see Figure <ref>f-h), notably a transition to a coherent, large-scale, spatially extended structure rather than a flux accumulation of incoherent small-scale wedgelets. The same pattern is observed across numerous algorithms for community detection. Thus, the SCW is a characteristic part of substorm evolution, a potential resolution of a long and ongoing controversy in substorm science. The spatially extended communities they observed cannot be obtained by having many small, spatially localized wedgelets, which are internally correlated but lack cross correlation with each other. Of immense societal relevance for ground-based magnetometer observations of space weather activity is the corresponding potential hazard to grounded technology like the power grid. The threat to the power grid is quantified by geomagnetically induced currents (GIC). A pair of studies have explored network analysis for GICs, both providing insight into the GIC hazard and revealing network analysis as important for bridging from Heliophysics insight to space weather risk assessment and societal relevance. <cit.> produced networks connecting SuperMAG magnetometers to newly available GIC data collected by power utilities through the Electric Power Research Institute (EPRI) SUNBURST project. They calculate probability multipliers for all pairs of magnetometer-GIC sensors, information that would be useful for using magnetometer observations to determine risk to the power grid.
Overall, they found a factor of 1.83 increase in the probability of a GIC increase given magnetometer changes. On a sensor-to-sensor comparison, however, the magnetometers that provide the most information for a given GIC sensor are often not those in closest geographic proximity, meaning the networks reveal non-intuitive relationships. <cit.> used SuperMAG ground magnetic perturbation measurements as input to a model of the high-voltage (HV) power grid in the United Kingdom (UK), which output GICs at the grid transformers. They quantified the spatiotemporal response of the GICs in a manner similar to <cit.>. A number of conclusions were drawn, including:
* the entire physical power grid is spanned by coherent connections with long-range correlations at intense storm times;
* the GIC networks are not a simple response to the rate of change of the magnetic field;
* during storms, networks have intermittent quiescent periods in which distinct sub-networks form; and
* GIC networks are distinct from the physical networks of the HV grid, exhibiting characteristics unlike the exponential and small-world nature of the physical grid.
Their work offers a direct connection to space weather risk assessment: “The GIC response networks that we have determined here have significantly different properties to that of the physical HV grid. This is important since it implies that previous studies that focus on stability of the physical grid to the failure of individual network connections may not fully inform the assessment of space weather risk.” Concurrently, and drawing inspiration from Dods and Orr et al.'s revelations about the utility of networks in geospace science, <cit.> applied network techniques to another important data set: total electron content (TEC). The global, high-latitude response of TEC is the result of numerous complex geospatial processes, each with unique spatial and temporal scales <cit.>. TEC data are rich with information about the Earth's space environment, yet their characteristics at high latitudes are not well understood, and the complex nature of the processes in this regime requires innovative and sophisticated approaches to (1) understand the information content of these data and (2) gain the most scientific utility from them. These considerations motivated an attempt to understand the spatiotemporal characteristics of TEC in the high-latitude regime. In their application, nodes were defined by the magnetic coordinate system grid points (physical locations) and edges by spatiotemporal correlations between them exceeding a threshold. It was the first application of such techniques to TEC data. Their data are hourly averages of TEC compiled from the worldwide system of ground-based GPS receivers, binned into a geographic 1^∘ latitude × 1^∘ longitude grid (rebinned in this work into equal-area magnetic coordinates). Data from the winter and summer seasons in 2016 were used. Data were studied separately for the northern and southern hemispheres, and all data were separated into interplanetary magnetic field (IMF) clock angle bins, the clock angle being the angle between geocentric solar magnetic (GSM) north and the projection of the IMF vector onto the GSM Y-Z plane, to determine dependence on the solar wind forcing. Figure <ref> visually details the network construction steps.
Using predominantly the network measures of degree centrality, median geodesic separation distance, and local clustering coefficient, their analyses suggested that the Northern Hemisphere exhibits correlations over shorter distances but is more spatially uniform, while Southern Hemisphere correlations typically extend over larger distances but the hemisphere as a whole is more spatially fragmented. Their resultant maps indicate that the scale sizes important to characterize the ionosphere during geomagnetic activity depend on season and hemisphere. The proof-of-concept study was exciting and pinpointed several ideas for follow-on inquiry. These studies provide a framework through which to analyze the complex magnetosphere-ionosphere-thermosphere system free of the limiting assumption that phenomena can be described by interpolating distributed observations onto a grid. As <cit.> write, network analysis enables the characterization of “... the spatiotemporal correlation pattern for [any] event directly from the spatially nonuniform original observations, then aggregate many such patterns onto a single grid to give a complete spatial coverage.” How does network analysis relate to risk and resiliency? Risk is likelihood times consequence. Risk is a distribution. In the terms of the insurance industry, the hazard is the actual cost incurred from a risk. To effectively quantify risk for space weather we must use the most informative parameters and calculate their likelihoods. The important parameters will be hazard-specific (e.g., the relevant parameters for power grid risk will be different from those for communications systems risk). Traditionally, and as a result of observational limitations, we have relied on indices like Dst and AE as the important parameters. Yet we know them to contain inadequacies (e.g., see Section <ref> and the discussion of the use of AE to represent the magnetosphere). Network measures are potentially much more directly related to the physical phenomena relevant to a given risk and are thus an exciting pathway to better risk quantification, upon which resilience is built. Finding new applications in Heliophysics is a continuing trend, particularly in connecting fundamental research with applied outcomes, as evidenced by <cit.>.

§.§ The role of Natural Language Processing

Natural Language Processing (NLP) is concerned with programming computers to process, analyze, and respond to large amounts of natural language data. The importance of augmenting human research and activities with automated analysis of text is now undeniable, given the sheer volume of relevant scientific literature. It is beyond the capacity of an individual researcher to understand all of the relevant scientific information about a topic, and the growth rate of the scientific literature is exacerbating and compounding the problem <cit.>. For inter- or trans-disciplinary science, the kind that complexity demands, the amount of relevant information grows exponentially simply through the need to incorporate more than one domain's knowledge. To some extent, the history of disciplinary science has been to put up boundaries around the questions one attempts to answer as a means of reducing the seemingly infinite amount of information that must be considered. Complexity science exposes those boundaries as artificial. Therefore, it requires new methods for handling more voluminous information. This review has already demonstrated some of those methods for data analysis, but NLP is vital for textual analyses.
There are many applications of NLP that have already had, or are poised to have, an impact on scientific research and the process of science. Common tasks, each applied to a given piece of natural language (e.g., a publication), include:
* Named entity recognition (NER): identify and locate entities in unstructured text, such as a person, organization, or location;
* Information retrieval: searching for information contained in a document;
* Keyword generation: extract or identify the most relevant words, phrases, or ideas in a document;
* Summarization;
* Sentiment analysis: identify the affective state and the subjective foundation of a piece of text; and
* Question and answer: return an answer to a natural language question a user poses based on a document or collection of documents.
Many of these `downstream tasks' <cit.> rely on a language model. A language model is a model that assigns a probability to a sequence of words <cit.>. Language models take a collection of words and, conditioned on that information, assign the probability of another word or collection of words. For a simple example, perhaps you want to predict the next word in a sentence given the ones that precede it. A language model can fulfill that task. However, this basic functionality can extend to much more complicated tasks, such as taking an entire document and predicting descriptive keywords or creating a summary. For a number of reasons, modern language models have been changing rapidly. First, the volume of textual data on the internet is growing exponentially, providing vast amounts of data for training these models, which typically have millions of parameters that need to be determined and require heretofore unimaginable volumes of data to constrain. Second, computational power is making it possible to process those data. Finally, AI research produced new modeling approaches such as recurrent neural networks <cit.>, transformer architectures <cit.>, and self-supervised learning (e.g., <cit.>). Together, these influencing factors produced step changes in the performance of language models on many tasks <cit.>. With improvements in capability coupled with wider awareness owing to the much greater accessibility of language models through services like ChatGPT[<https://chat.openai.com/chat>], early 2023 became a cultural moment for NLP and language models (e.g., <cit.>). In 2018 Google developed the Bidirectional Encoder Representations from Transformers (BERT) language model <cit.>. Judging the general language models incapable of deeply contextual scientific support, scientists subsequently recognized the opportunity to tailor baseline models like BERT to their domains or to narrower applications. The first result was SciBERT, a fine-tuning of the BERT model that focused on scientific papers from the Semantic Scholar corpus[<https://www.semanticscholar.org/>] <cit.>. SciBERT is part of a growing list of models that adapt BERT to specific domains and tasks, of which perhaps the most relevant to this review is that created for the astrophysics and astronomy domain, astroBERT <cit.>. One of the most sophisticated examples, likely owing to the availability of large volumes of training data in the biology and biomedical domain and to a more sophisticated baseline language model from the family of models known as generative pre-trained transformers (GPT) <cit.>, is Bio-GPT, which has demonstrated improvement across six biomedical NLP tasks <cit.>.
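The core idea that a language model assigns probabilities to word sequences can be illustrated without any deep learning machinery. The sketch below builds a toy bigram model (next-word probabilities conditioned only on the previous word, with add-one smoothing) from a tiny corpus; it is a pedagogical stand-in, orders of magnitude simpler than the transformer-based models discussed above, and the corpus text is invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Estimate P(next_word | word) from a list of sentences (toy bigram LM)."""
    counts = defaultdict(Counter)
    vocab = set()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
        vocab.update(tokens)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    V = len(vocab)

    # Add-one (Laplace) smoothing so unseen bigrams receive nonzero probability
    def prob(prev, nxt):
        return (counts[prev][nxt] + 1) / (sum(counts[prev].values()) + V)

    return prob

corpus = [
    "the aurora brightened over the pole",
    "the aurora faded after the substorm",
    "the substorm intensified the aurora",
]
p = train_bigram(corpus)
print(p("the", "aurora"))        # relatively likely continuation
print(p("the", "magnetometer"))  # unseen bigram, small smoothed probability
```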
Progress in Heliophysics is nascent [see `SMD Knowledge Graph Discovery` at <https://www.calameo.com/read/0055032805743f9fd8bf6>], and most of the NLP work being done is on downstream tasks using existing language models rather than training our own. Despite not yet realizing the potential that some have stated exists for these domain-specific models, the concept of foundation models <cit.> remains enticing. Capable language models for Heliophysics research are far from settled and available, yet they may become a core component of searching and discovering the vast amount of literature and research artifacts now available. Encompassing the diverse disciplinary integration needed for complex systems analyses will require sophisticated NLP capabilities, including perhaps entirely new approaches to building language models. This review combined traditional literature review processes with NLP to create a hybrid review article. The NLP used was relatively simple, mining the NASA Astrophysics Data System (ADS) for articles with two-fold filtering based first on selected Heliophysics journals and second on a manually created Complexity Heliophysics glossary, but the approach created a much larger corpus and discovered articles that were not included in the hand-selected corpus for this review. Thus, the hybrid approach produced a richer coverage of the literature. Appendix <ref> describes the NLP approach. It is useful to future Complexity Heliophysics work to chronicle briefly the outcomes and open questions from the application of NLP to augment this review. A more detailed taxonomy of uses for NLP in scientific research has emerged. For encoder-like language models, tasks fall generally under five categories:
* Question answering;
* Text classification;
* Semantic equivalence;
* Named entity extraction; and
* Knowledge extraction.
For generative models, language models support tasks related to conversational artificial intelligence, conversion of data to text, and text summarization <cit.> (and personal correspondence: Muthukumaran Ramasubramanian; March 2023). There are also many open questions. A few of importance to Heliophysics are: 1) given that Heliophysics is inherently a systems science, how might we extract relationships from the scientific literature that guide us to incorporate new knowledge into our science?; 2) what role might NLP play in improving our information search and discovery processes?; and 3) to what extent can NLP support efforts to better represent Heliophysics knowledge in human- and machine-readable ways (e.g., support building semantic technologies and capabilities) <cit.>? The advent of large language models (LLMs) and the discussion of their use in scientific research point to a larger conversation that is unfolding in the future of science: what is the intersection of complexity science with AI/ML? We deliberate on this question in Section <ref> with the hope of seeding important conversations Heliophysicists need to have, placing it in the context of conversations all scientists are grappling with.

§.§ Areas of complexity science that have not yet been widely explored in Heliophysics

There are tools viewed by the complexity science community as necessary to understand complexity <cit.> that have not yet been widely employed in Heliophysics. Two such areas conclude this section on emerging literature. First, agent-based modeling (ABM).
An ABM is a model that simulates the actions and interactions of agents: most traditionally, autonomous individual elements that have properties and are capable of actions. ABMs have been used extensively in the social sciences, biology, and ecology <cit.>, and their efficacy owes to their combination of elements of game theory, sociology, evolutionary programming, and emergence. Their operating principle is to give agents relatively simple operating rules, simulate their interaction, and study the emergent collective phenomena. Though ABM is often associated with modeling the behavior of living organisms, perhaps more importantly for Heliophysics it is also a technique for modeling the behavior and interactions of things such as particles in the magnetosphere. Particle simulations, therefore, can be understood as a form of agent-based model, and the connection might allow computational approaches in Heliophysics to learn from a rich domain of research. There may also be applications of agent-based modeling to human responses to events such as space weather storms, though outside of `simulation game' activities <cit.> this is virtually unexplored. However, understanding human behavior in disaster situations is an important component of quantifying risk and establishing resiliency in our societal systems. Second, collective intelligence. If ABMs are the tools, collective intelligence is the study of their output data. It is a nascent field of study to understand collective behavior, that is, adaptive, wise, or clever structures and behaviors by groups, in physical, biological, social, and many engineered systems <cit.>. Discoveries in disciplines as diverse as biology and ecology to psychology and economics point to cross-disciplinary utility. At present there exists essentially no conversation about the use of methods and techniques from collective intelligence. We urge researchers to think capaciously about how collective intelligence might impact Heliophysics, perhaps even providing new solutions to long-standing questions. Two areas, in particular, might be fruitful: 1) interpreting particle simulations and 2) studying responses to natural hazards as a way of more accurately predicting the societal impacts of extreme space weather events. Anticipating the final section of this paper, there is a growing literature at the intersection of collective and artificial intelligence <cit.>, a trend indicative of most sciences.

§ KEY CHALLENGE FOR 21ST CENTURY SCIENCE: THE INTERSECTION OF COMPLEXITY AND ARTIFICIAL INTELLIGENCE AND A FRAMEWORK TO EXPLORE IT

Artificial intelligence (AI) and machine learning (ML) are not new. They are traced as fields of study to the 1950s, when Alan Turing worked on the concept of intelligent machines and how to create them <cit.> and the conference that many credit with coining the term artificial intelligence was held <cit.>; their origins as areas of thought predate even that by many decades (in science fiction writing <cit.> as in philosophy and the mechanical manipulation of symbols <cit.>). In the 1950s, hyperbolic beliefs about the nearness of genuine artificial intelligence led to an `AI winter' when those hopes failed to materialize, lasting from the 1970s to the 2000s. However, in the past several decades the rise of internet-scale data, computing power consistent with a doubling in the number of transistors in an integrated circuit every two years (Moore's Law <cit.>), and improvements in algorithms have brought a renewed zeal and concomitant advances to AI/ML.
A quick note on terminology: ML is a sub-domain of AI, where AI is the ability to accomplish complex goals <cit.> while ML is the leveraging of data to improve computer performance on some task or set of tasks <cit.>. Since existing studies in Heliophysics do not truly address AI, but the broader concept is important in this section, we will adopt the shorthand AI/ML to refer to the full set of methods. Heliophysics and space weather have been a part of this passionate exploration, adopting and applying AI/ML advances coming out of industry. <cit.> review the state of AI/ML in Heliophysics research, including prominent applications in forecasting: geomagnetic indices, relativistic electrons at geosynchronous orbit, solar flare occurrence, coronal mass ejection propagation time, and solar wind speed. Their synopsis of the field led them to conclude that there is a need to shift forecasting in Heliophysics toward probabilistic approaches centered on the reliable assessment of uncertainties, and toward the combination of physics-based and machine learning approaches. Their discussion echoes a long-standing conversation that has been staged across the sciences (e.g., <cit.>) and indeed across culture more broadly, becoming a common subject of popular science writing, science fiction literature, and futurism (e.g., <cit.>). The two sides of this debate have been given different names over time: hypothesis-driven vs. empirical, deductive vs. inductive reasoning, theory-driven vs. data-driven. Our review, too, leads us to the need to find common ground between these poles of approach to growing scientific knowledge, now perhaps with the language of AI/ML vs. complexity science. Words create worlds <cit.>, and it is worth inquiring what new perspectives these new words for the old debate may create. <cit.> and <cit.> taxonomized AI/ML as a part of the broader field of data science, defining the latter as scalable architectural approaches, techniques, software, and algorithms which alter the paradigm by which data are collected, managed, analyzed, and communicated. They point to a similar need to integrate knowledge of the physical domain with data-driven approaches. The trend in the literature around AI/ML and data science in Heliophysics is clear: the future of Heliophysics must explore the intersection between data-driven approaches and theory-driven science <cit.>. <cit.>, where we began this review, points to this key challenge for Complexity Heliophysics: converging the autonomous and the local linear prediction filter methods, merging the benefits of interpretability with the success of data-driven approaches. <cit.> writes, “It is anticipated that in the future these local-linear predictor models will be studied carefully with the goal of organizing these bits and pieces into a global nonlinear predictor model. It may be advantageous to cast these predictor models as analogue models in order to maximize their physical interpretation.” Klimas et al. are pointing to a reconciliation that is at the heart of a debate that spans the sciences in the 21st century: between first-principles, physics-based models and data-driven, artificial intelligence or machine learning (AI/ML) algorithms. They represent different reasoning approaches, inductive and deductive, and correspond to different capabilities to explain the results found. Physics-based models are inherently explainable: the behavior arises from understood laws stitched together in traceable logical reasoning.
AI/ML models discover patterns directly from data but are less clearly interpretable. This review has chronicled the fact that complexity introduces uncertainty, sometimes extreme, into physics-based understanding and precludes predictability <cit.>. However, the advent of AI/ML (and other data-mining approaches) and the requisite computation has produced, in some cases, capabilities to represent complicated systems more accurately than physics-based equations <cit.>. These models may be capturing some as yet unknown physical properties. This quality is similar to the way that power laws in complexity science capture some underlying mechanism that acts across scales <cit.>. Reconciling first-principles and data-driven approaches, physics and complexity with artificial intelligence, is a grand challenge for the 21st century. Indeed, the consilience of physics with data-driven approaches is important to all areas of inquiry <cit.>. Domains tend to embrace this call to integrate the two when the fundamental and applied components of their science come into contact: understanding the pathologies of diseases in medicine vs. predicting if and when a disease will occur; understanding the Earth system vs. predicting natural disasters; understanding the solar-terrestrial system (Heliophysics) vs. predicting its consequences for our technological systems (space weather). In science there is a `tangled relationship between prediction and understanding' <cit.>. Recasting the debate about theory- and data-driven science as a tension between AI/ML and complexity science, the body of literature in this review suggests that the concepts of risk and resilience can provide a new framework. Risk is the likelihood of occurrence multiplied by the consequence of the occurrence; resilience is the capacity of a system to recover from a disturbance <cit.>. Risk requires specification or prediction of events; resilience requires understanding the systemic impacts of those events. Together, they offer a way of formulating grand scientific and societal challenges such that AI/ML and complexity science converge. Other fields have demonstrated this convergence: Dynamical Systems: <cit.>; Climate: <cit.>; Ecology: <cit.>; Socio-Ecological Systems: <cit.>; Disaster Research: <cit.>; Medical Anthropology: <cit.>; Health: <cit.>; among others: see <cit.> for a review of the concept of resilience across disciplines. How might a risk and resiliency framework draw on the traditions of theory- and data-driven progress in science to create fertile ground for new discovery? This is an open question to which we do not attempt an answer. Rather, we conclude by better defining risk and resiliency to help this review article serve future researchers in defining meaningful and productive directions. From the characteristics of the literature across this wide range of disciplines, a risk and resilience framework may proceed in two parts: a clear definition of the dimensions of risk and the methods of quantifying resilience. For the first component, three well-defined priority dimensions of Heliophysics risk are:
* Technical risk: risk to infrastructure due to space weather;
* Capability and capacity risk: societal infrastructure-/individual-/business-specific risk based on different conditions in understanding and responding to space weather;
* Interconnected risk: risk due to externalities acting co-temporally and/or co-spatially with space weather, such as extreme terrestrial weather.
For the second component, one must quantify resilience. This portion of the framework must provide an objective and transparent basis for managing and mitigating each dimension of the risk. Resilience is a systemic phenomenon. At the highest level, resilience is defined as a system's accommodation of changes and reorganization of itself while maintaining the crucial attributes that give the system its unique characteristics <cit.>. Statistical numeracy is the strongest single predictor of risk and resilience literacy <cit.>. Indeed, one of the twin pillars of risk is likelihood, the shadow of which is uncertainty. Statistical assessment is a core competency in risk assessment and communication. In parallel, statistics and probability play essential roles in virtually all approaches to explainable AI, or a machine's ability to explain its decisions and actions to human users <cit.>. Thus, the lingua franca of risk, resiliency, and explainability, those bridges between physics and data and between complexity and AI/ML, is statistics and probability. This is the conclusion drawn by <cit.> in outlining the connection between physics and prediction they call the `gray box' approach. Given the potential link that risk and resilience provide between physical understanding and data-driven specification and prediction, it is important to understand the ways that risk and resilience are studied and used as a framework for research in various scientific disciplines. That is left as a call for Heliophysicists to understand approaches of `sister' disciplines and how they might be impactful to Heliophysics research. In concluding this review, we guide that effort by relating a few of the ways risk and resilience might relate to and might be used in Heliophysics. They fall generally into three categories: 1) critical transitions; 2) components of resilience; and 3) convergence research.
§.§ Critical Transitions
A risk and resilience framework implies a need to study critical transitions, or tipping-points, in the Heliophysics system. Tipping-points that represent critical transitions are points at which a dynamical system abruptly shifts from one state to another. They are notoriously challenging to predict. However, exciting research points to the existence of generic properties (and the way that they can be measured) for systems near critical transitions <cit.>; a minimal computational sketch of these indicators follows the list:
* Critical slowing down <cit.>: As the system approaches such critical points, it becomes increasingly slow in recovering from small perturbations. In these regimes, the dynamical system will show an increase in lag-1 autocorrelation and increased variance in the pattern of fluctuations as the recovery rate from a small perturbation is reduced;
* Skewness and flickering before transitions: Asymmetry in fluctuations increases near a critical transition. Flickering is indicated in the frequency distribution of states as increased variance and skewness as well as bimodality <cit.>; and
* Increased spatial coherence: In systems consisting of numerous coupled units, in which each unit tends to take the state of the units to which it is connected, a gradual increase in an external forcing factor (e.g., the solar wind) may transform the distribution of system states in characteristic ways.
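The first two classes of indicators are straightforward to compute from a single observable. The sketch below is a minimal illustration, not code from any cited study; the choice of window length and the use of a synthetic series standing in for, e.g., an auroral index are assumptions made only for demonstration.

```python
import numpy as np
from scipy.stats import skew

def early_warning_indicators(x, window=500):
    """Rolling lag-1 autocorrelation, variance, and skewness of a 1-D series.

    Rising values of these quantities are the generic signatures of critical
    slowing down and flickering described in the list above.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - window + 1
    ac1, var, skw = np.full(n, np.nan), np.full(n, np.nan), np.full(n, np.nan)
    for i in range(n):
        w = x[i:i + window] - x[i:i + window].mean()   # remove the local mean
        var[i] = w.var()
        skw[i] = skew(w)
        if var[i] > 0:
            ac1[i] = np.corrcoef(w[:-1], w[1:])[0, 1]  # lag-1 autocorrelation
    return ac1, var, skw

# Example with a placeholder series (purely synthetic data):
rng = np.random.default_rng(0)
series = rng.standard_normal(5000).cumsum()
ac1, var, skw = early_warning_indicators(series, window=500)
```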
Given the challenge to predict Heliophysics phenomena that has been observed in various stellar-planetary systems and data (e.g., <cit.>) and that is similar in many respects to systems in which the science of critical transitions has had a great impact (such as medicine <cit.>, finance <cit.>, and climate <cit.>), the science of risk, resiliency, and critical transitions may have important utility to future leaps in our understanding. It should be noted that the general properties preceding critical transitions can be associated with a wide variety of transitions in complex systems <cit.>. Application of these theories in Heliophysics is nascent, but may represent an important frontier.
§.§ Components of Resilience
There are several components of resilience that shed light on Heliophysics research that might bridge complexity and AI/ML.
* Both predictive and responsive: Resilience acknowledges that all responses are dual, including both the pre-emptive actions possible and those that must be responsive to the existing conditions. One could think of these dual responses as the pre-emptive actions accommodating the more or less deterministic signals, while the responsive actions are those taken in the face of uncertainty and are inherently unpredictable.
* Translational: Resilience requires translation between the science and the responses available (hence the requisite understanding of capability and capacity risk);
* Multi-level: Resilience also requires a multi-level understanding of the system and the different responses for each level <cit.>. For instance, a power grid operator monitoring the grid in Washington, DC, and an individual at the Department of Energy tasked with the health of the country's grid as a whole will have distinct responses to a National Oceanic and Atmospheric Administration (NOAA) warning;
* Interdependent: Resilience connects a system to the external systems that may amplify or attenuate a particular effect <cit.>, which reveals the final component;
* Semantic: To understand interconnections, relationships between domains must become first-class citizens to enable agents (whether a human or an intelligent machine) to navigate between them and integrate information from across them <cit.>. This requires understanding information flow, in the physical, socio-cultural, and technological senses <cit.>.
§.§ Convergence Research
Pioneering domains like bioengineering <cit.> are scientific forebears in the creation of resilience frameworks that treat the system as complex and integrate data-driven analyses. Their examples reveal that such a framework facilitates transdisciplinary approaches <cit.>. One such approach is convergence research. In 2016, the National Science Foundation (NSF) named “Growing Convergence Research” as one of its 10 Big Ideas for prioritizing future investments in science and engineering. At its most general, convergence has been defined as “an approach to problem solving that cuts across disciplinary boundaries. It integrates knowledge, tools, and ways of thinking from life and health sciences, physical, mathematical, and computational sciences, engineering disciplines, and beyond to form a comprehensive synthetic framework for tackling scientific and societal challenges that exist at the interfaces of multiple fields” <cit.>.
With convergence comes a new spectrum of challenges involving how we work across disciplinary lines, collaborate meaningfully in large groups, and develop healthy (meaning open, participatory, and resilient) connections among diverse stakeholders. Risk and hazard domains have been proving grounds for convergence research <cit.>. Indeed, tackling the problems we face as a society, whether global pandemics or climate change or complex systems, requires new levels of cooperation, facilitation, and synthesis <cit.>. In Heliophysics, these transdisciplinary convergent approaches offer the potential to integrate space physics, space weather, and society. A risk and resilience framework may be poised intellectually and institutionally to conduct complexity science <cit.>, bridge between predictive methods like AI/ML and fundamental science, and bring convergence research to Heliophysics. These inspire further studies on the nature of resilience from a systems-level perspective of the Solar-Terrestrial connection, taking up the call of the 21st century “to integrate the sciences of complexity with machine learning and artificial intelligence” <cit.>.
§ CONCLUSION
The 21st century will be marked by complexity, according to Stephen Hawking, and this is especially true for the field of Heliophysics. Heliophysics has traditionally been characterized by categorizing and separating domains, but with the advent of new sensing capabilities, data analysis, and computational tools, there is a growing need for a paradigm shift towards Complexity Heliophysics. Complexity science is the study of phenomena that emerge from a collection of interacting objects and requires a plurality of frameworks that move between levels. This lived and living review detailed the network of complexity studies in Heliophysics and provided a definition of the Complexity Heliophysics paradigm. This review first outlined the dimensions of complexity science. Then, the analysis of the existing literature was mapped into three parts: 1) a pivotal year for the paradigm: 1996; 2) transitional years that established dimensions of the paradigm between 1996 and 2010; and 3) emergent literature largely after 2010. For the final of the three, we drew on a much wider base of literature to situate Complexity Heliophysics in the broader context and to establish trends and gaps. Several are proposed: First, the ability to capture underlying structure and patterns in complex systems through coarse-graining is crucial in Heliophysics. Two forms of coarse-graining, namely information theory and network science, are particularly important for the future of the field. Second, reconciling first-principles with data-driven approaches, physics and complexity with artificial intelligence, is a grand challenge for the 21st century. We centered the discussion of Complexity Heliophysics in the tension between fundamental science vs. prediction-oriented science (e.g., basic science vs. applied science; physics-based modeling vs. artificial intelligence/machine learning) and suggest that this history of complexity science within Heliophysics is instrumental in finding pathways between these extremes, therefore becoming inextricable from the future of Heliophysics research. The trend is clear: the future of Heliophysics and its applied counterpart, space weather, must explore the intersection of data-driven approaches with theory-driven science.
Indeed, <cit.>, where the review begins, pointed to a key challenge for Complexity Heliophysics: converging the autonomous and the local linear prediction filter methods, merging the benefits of interpretability with the success of data-driven approaches. We provided a vision for this path in a potential framework that adopts risk and resiliency as organizing concepts and invokes the methods of convergence research to create a methodology. Ultimately, this review provides a foundation for how complexity science can help address outstanding questions in Heliophysics and space weather science. The artifacts from this work include this review article; a glossary of terms that define Complexity Heliophysics and can be useful for search and discovery of related resources, individuals, and groups; and a new corpus of Complexity Heliophysics that is likely full of further discovery and generative of new research questions. With the paradigm shift, we will gain new capacities to understand the Heliophysics system, and this will guide researchers towards directions that are better equipped to respond to the challenges of the 21st century.
Supplementary information
Four appendices are included with this review piece, each intended to facilitate more comprehensive understanding of this review and to curate material to act on it: 1) acronyms; 2) questions identified in the papers reviewed in this work to guide future research; 3) generation and analysis of the automated corpus of Complexity Heliophysics; and 4) key datasets explicitly mentioned in this review or that factor prominently in the results examined. Additionally, software to create the automated corpus along with the corpus itself are provided in a Github repository (<https://github.com/rmcgranaghan/Complexity_Heliophysics>).
Acknowledgments
Wide and deep thanks is due to a community of conversants, whose time, attention, and ideas were a source of intellectual exhilaration and generativity for this piece. An incomplete list of those individuals includes (in no particular order): Joseph Borovsky, Juan Valdivia, Simon Wing; Eric Donovan, Massimo Materassi, John Dorelli; Jeffrey Thayer, Vadim Uritsky, Sandra Chapman, Jay Johnson, Josh Semeter, Seebany Datta-Barua, Olga Verkhoglyadova, Giuseppe Consolini, Elizabeth Butler, Anthony Mannucci, Paul Wong, Jacob Bortnik, Enrico Camporeale, Barbara Thompson, Madhulika Guhathakurta, Jesper Gjerloev, Nick Watkins, Xing Meng, Brian Thomas, and numerous past guests on the Origins Podcast <https://www.originspodcast.co/>. Several events were also formative for this work. To the organizers, conveners, and attendees I am deeply grateful: “Exploring Systems-Science Techniques for the Earth’s Magnetosphere-Ionosphere-Thermosphere” (July 2018 <cit.>), the Santa Fe Institute's Complexity Interactive held January 2022 <https://www.santafe.edu/engage/learn/programs/complexity-interactive>, the Lorentz Center event “Space Weather: A Multidisciplinary Approach” held in September 2017 <https://event.cwi.nl/spaceweather2017/>, the series of NASA Living With a Star Jack Eddy Symposia (especially the 3rd event held in June 2022 <https://cpaess.ucar.edu/meetings/eddy-symposium-2022>), and the National Science Foundation (NSF) Convergence Hub for the Exploration of Space Science (CHESS) event “Simulating Space Weather Extremes: A Workshop to Identify Research Needs to Improve Power Grid Resilience to Geomagnetic Activity” held April 2022 (<https://www.nsf.gov/awardsearch/showAward?AWD_ID=2131047>).
The author gratefully acknowledges the support of the NASA Early Career Investigator Program (ECIP) (NASA Grant Number: 80NSSC21K0622) for the resources to research, pursue, and write articles on the philosophy of Heliophysics science and how to make breakthroughs in our epistemology such as this one. Additionally, the author is deeply appreciative of the NASA Center for HelioAnalytics, funded by NASA ISFM, for supporting this work and creating a community in which conversations like these occur. Data and software supporting this review are available from a Github repository (<https://github.com/rmcgranaghan/Complexity_Heliophysics>) that provides information about generating the corpus for automated or natural language processing in support of this work as well as the glossary used to filter the articles and the automated corpus itself. We acknowledge Omar Shalaby for the development of tools to programmatically query ADS and organize results. Thank you to the NASA Center for HelioAnalytics and Heliophysics Digital Resource Library for creating, maintaining, and making available HelioCloud, a cloud computing service for Heliophysicists. The corpus generation and natural language processing analysis for this publication were carried out on that platform. Inordinate thanks is due to Semantic Scholar <https://www.semanticscholar.org/> built by the Allen Institute for AI <https://allenai.org/> for its aid in finding literature and appropriately and conveniently citing it for inclusion in this review. Many thanks to the Lingo4G <https://carrotsearch.com/lingo4g/> team for support in getting their software up and running and for providing numerous trial licenses to complete the analysis detailed in this manuscript. The research was in part carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). © 2023 California Institute of Technology. Government sponsorship acknowledged.
§ DECLARATIONS
Conflict of interest/Competing interests: Not applicable
§ APPENDIX A: ACRONYMS
Term: Abbreviation
agent-based modeling: ABM
artificial intelligence and machine learning: AI/ML
auroral electrojet: AE
conditional mutual information: CMI
Coupled Map Lattice: CML
disturbance storm-time: Dst
European Space Agency: ESA
interplanetary magnetic field: IMF
Geocentric Solar Ecliptic: GSE
Geocentric Solar Magnetic: GSM
geomagnetically induced currents: GIC
large language model: LLM
local-linear predictor: LLP
mutual information: MI
named entity recognition: NER
National Aeronautics and Space Administration: NASA
natural language processing: NLP
nonlinear prediction filter: NPF
Polar Ultraviolet Imager: Polar UVI
self-organized criticality: SOC
state-space reconstruction: SSR
substorm current wedge: SCW
total electron content: TEC
Time History of Events and Macroscale Interactions during Substorms: THEMIS
transfer entropy: TE
§ APPENDIX B: QUESTIONS IDENTIFIED IN THE PAPERS REVIEWED IN THIS WORK TO GUIDE FUTURE RESEARCH
This appendix lists a few of the most potent questions either explicitly identified or implied by the studies reviewed for this paper. They are meant to be generative of future Complexity Heliophysics work.
* <cit.> What constitutes successful prediction of some observed activity?;
* (From a collection of articles that each speak to this point) What are appropriate measurables for magnetospheric dynamics (e.g., AE)?;
* In distinguishing magnetospheric dynamics from those convolved with solar wind driving, are bursts in AU/AL causally related to those in ϵ and vB_s (e.g., <cit.>)? As we bring the complexity paradigm from the solar wind and magnetosphere into the upper atmosphere, what methods permit us to connect or distinguish causal driving from internal dynamics?;
* (Collection of questions) “Nine Outstanding Questions of Solar Wind Physics” <cit.>;
* (Collection of questions) “Outstanding questions in magnetospheric plasma physics” <cit.>;
* Have observational networks capable of finer resolution (e.g., local time-dependent auroral electrojet indices from the SuperMAG network <cit.>) provided the necessary data to resolve open questions about the dynamic behavior of the magnetosphere that have been ambiguous in existing data (e.g., limitations of the AE index <cit.>)?
* <cit.> Given the observed power law statistics of the dynamic magnetosphere in auroral data and the corresponding inference of stationary critical dynamics in the magnetosphere, what can be learned about the role of cross-scale coupling in the development of geomagnetic disturbances?;
* <cit.> Do statistics of mesoscale magnetosphere simulations match observed statistics from the SOC paradigm?;
* Which of our models of the magnetosphere and geospace exhibit complexity (e.g., emergent behavior)? How can those behaviors be observed?;
* What are the time scales for the phenomena that define Heliophysics in the Sun-Earth system (e.g., solar flares, coronal mass ejections, bursty bulk flows, substorms, auroral precipitation, ionospheric conductivity) and how are these consistent across a wide continuum of stellar-planetary systems?
* <cit.> Are topologies of Heliophysics systems (e.g., solar corona, ionosphere, magnetic environment) qualitatively different between low, moderate, and extreme periods of activity?
* <cit.> To what extent can the study of space weather be transformed by adopting frameworks for natural hazards that enable convergence research and objectify resilience?
* <cit.> What are the new literacies required of Heliophysicists and space scientists to embrace the Complexity paradigm?
§ APPENDIX C: GENERATION AND ANALYSIS OF THE COMPLEXITY HELIOPHYSICS CORPUS
The references that constitute this review article were largely manually curated. The reference list includes well over 300 articles. However, current best estimates <cit.> place the annual growth rate of scientific literature at 4%, with a doubling time of roughly 17 years. The impact is a significantly larger number of papers a researcher must read to be up to date on a field. More revealing is the growth in the number of papers a researcher must sift through to decide what to read. This `to read' pile can and does easily become thousands of papers long. The growth in publication and number of research artifacts is a trend that will only continue. The complexity paradigm exacerbates the problem: systems understanding demands that information from more numerous and diverse sources be integrated. It is likely that the effective growth rate when many disciplines must be considered is even greater.
The outcome is that researchers must augment their traditional manual approach to curating and synthesizing literature with automated methods using databases of literature and research artifacts (e.g., Clarivate’s Web of Science[<https://www.webofknowledge.com>] and those tailored to more specific contexts like NASA’s Astrophysics Data System (ADS)[<https://ui.adsabs.harvard.edu/>]). Artificial Intelligence and Machine Learning (AI/ML) will play a role in augmentation and synthesis. Indeed, researchers are already squarely in the middle of this challenge to keep pace with the academic literature. To make this review piece an example of how to augment manual methods with automated approaches, we have used natural language processing (NLP) to examine the NASA ADS to construct a Complexity Heliophysics corpus (database of documents). We refer to this as the automated corpus, which is in addition to the manual corpus that is the reference list of this review. We describe our approach to creating the automated corpus and provide the end product as an artifact with which future researchers can examine this new paradigm and experiment with AI/ML methods. The process is as follows:
* Construct a ‘Complexity Heliophysics’ terms list. These terms attempt to encompass relevant topics, phenomena, and concepts of the paradigm. The terms list contains 153 terms and is provided in the Github repository that accompanies this manuscript (<https://github.com/rmcgranaghan/Complexity_Heliophysics>);
* Select a database of literature and research artifacts to explore. We chose the NASA ADS for its close context to Heliophysics and space physics research as opposed to something like Web of Science, which is much broader. We consider the years 1996 (when we picked up the thread of Complexity Heliophysics with <cit.>) to the present in this review;
* Apply a `Heliophysics filter.’ From ADS we identify a subset of artifacts by selecting Heliophysics journals from the collection of journals that ADS indexes [<https://adsabs.harvard.edu/abs_doc/journals1.html>]. We manually and in cooperation with the Heliophysics community selected 33 journals. We intentionally broadly conceived of Heliophysics in selecting these journals, effectively casting a wide net for articles we would identify. The journals are listed in the Github repository.
* Next we applied a ‘complexity science filter.’ For each article, we looked at the abstract and title and performed a simple match: if the abstract and title contained >N terms in the Complexity Heliophysics terms list, then it was considered a match and added to the corpus. N can be tuned based on desired inclusivity/exclusivity in the final corpus. Figure <ref> shows the number of documents in the corpus for each threshold between three and ten, falling off roughly exponentially with the number of terms required to match. The graph indicates a slight knee between four and six matching terms. In an attempt to balance number of documents with relevance to Heliophysics, we chose five terms (i.e., only documents whose abstracts and titles include five or more distinct terms in the Heliophysics glossary were included; a minimal code sketch of this matching step is given below). This gives an automated database that is roughly one order of magnitude greater than the number considered manually (∼3000 vs. 300).
In our analysis, ∼120k articles were obtained from ADS between 1996 and the present. Among those articles, Geophysical Research Letters (36k), Advances in Space Research (15k), Journal of Geophysical Research (14k), and Journal of Geophysical Research: Atmospheres (14k) each contributed more than 13k articles. After matching based on the glossary, the automated corpus consisted of ∼2600 documents.
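To make the matching step concrete, the following is a minimal sketch of the `complexity science filter' described above. It is our own illustrative code, not the repository implementation; the file names and record format are assumptions consistent with the description in the text.

```python
import json
import re

def glossary_terms(path="complexity_glossary.txt"):
    """Load the Complexity Heliophysics terms list, one term per line (assumed format)."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def matches(title, abstract, terms, threshold=5):
    """True if the title+abstract contain at least `threshold` distinct glossary terms."""
    text = f"{title} {abstract}".lower()
    found = {t for t in terms if re.search(r"\b" + re.escape(t) + r"\b", text)}
    return len(found) >= threshold

# Filter a list of article records (assumed to be an ADS query exported as JSON).
terms = glossary_terms()
with open("ads_articles.json") as f:
    articles = json.load(f)
corpus = [a for a in articles
          if matches(a.get("title", ""), a.get("abstract", ""), terms)]
```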
The corpus can then be examined manually or programmatically. We provide a few examples that give a glimpse into the corpus. Readers can freely explore the corpus via the Github repository <https://github.com/rmcgranaghan/ComplexityHelio_LivingReviews/tree/main/data>. First, Table <ref> provides all words from the glossary that were found more than 100 times in the corpus (remember that this corpus only keeps papers with ≥5 words from the glossary, so this table represents the number of times that glossary word appears among such papers).
Glossary word: number of appearances
system: 1424
dynamics: 1015
evolution: 944
distribution: 773
nonlinear: 772
systems: 694
dynamical: 674
instability: 565
complex: 531
turbulence: 511
interactions: 497
phenomena: 449
instabilities: 406
turbulent: 400
critical: 379
scaling: 375
equilibrium: 292
environment: 276
component: 255
bifurcation: 187
threshold: 186
heterogeneity: 171
complexity: 161
phenomenon: 151
network: 149
stationary: 128
chaos: 127
fluctuation: 121
fractal: 119
feedback: 113
nonlinearity: 111
community: 109
networks: 103
directed: 102
multifractal: 101
boundaries: 100
exponential: 96
As an example, we applied a text clustering tool, Lingo4G[<https://carrotsearch.com/lingo4g/>], to the automated corpus to determine document clusters and topics. The mode used was Lingo4G's phrase extractor, which extracts frequent words and sequences of words from the corpus (in this case the titles and abstracts from the corpus, not their full texts). Those frequent words and sequences are assigned as labels. Figure <ref> shows a cluster map created from the automated corpus. Groups of thematically related labels are provided as high-level themes (overview plot on the left) and sub-themes (zoomed-in plot on the right). These clusters are helpful in myriad ways, including identifying content-wise similar documents (each document can be located within the cluster map), identifying outliers as possible themes in need of more exploration, and identifying `bridge' themes that link two or more prominent themes in the corpus. To further explore the cluster map, we created a network map of the clusters. Each cluster was identified by an exemplar label, which serves as the description for the entire cluster. Then, related clusters and labels are connected to one another. The exemplar label of one cluster can be a member of another cluster. Figure <ref> illustrates the network map created for the `magnetic' label. Here, `flux,' `flow,' `particle,' etc. are all members of the magnetic cluster, but also are exemplars of their own clusters. One use of this network map would be to discover resources related to time series fluctuations that are also part of the broader class of magnetic phenomena. This permits rapid identification of documents in specific areas and gap analysis for what has not been extensively explored. Topic modeling methods like latent dirichlet allocation (LDA) <cit.> could easily be applied to the corpus as an engine for recommendation and insight and for different semantic mappings of the information in the corpus (e.g., <cit.>).
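As one hedged illustration of how such topic modeling could be applied (using scikit-learn here rather than any tool actually used in this review, with placeholder documents standing in for the titles and abstracts of the ∼2600 corpus papers):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder documents; in practice these would be the title+abstract strings
# of the papers in the automated corpus.
docs = [
    "self-organized criticality and scaling in magnetospheric dynamics",
    "solar wind turbulence and intermittency observed in situ",
    "network analysis of ionospheric total electron content variability",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)              # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)               # document-topic distributions

# Top words per topic, useful for labeling themes analogous to the cluster map.
vocab = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [vocab[i] for i in comp.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```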
Additionally, more complex analyses of the automated corpus are possible. We next analyzed the automated corpus using network analysis. To construct the network, we considered each paper in the automated corpus using the threshold of five (i.e., only papers whose titles and abstracts contained at least five distinct terms from the Complexity Heliophysics glossary). For each paper, we looked at all other papers and a connection was defined if two papers shared more than four terms from the glossary. For instance, `paper1' and `paper2' would be connected if their respective sets of terms from the glossary are: 1) [`complexity', `systems', `fractal', `multiscale', `network', `nonlinearity', `boundaries'] and 2) [`exponential', `bifurcation', `complexity', `systems', `dynamical', `equilibrium', `multiscale', `network', `nonlinearity'], since the two share five terms. We constructed a directed network (meaning the edges have a preferred direction) with each edge pointing from the earlier paper to the later paper, perhaps more capably capturing a flow of ideas. To analyze and interpret the network, we constructed a random network containing the same number of nodes and connections as the resultant network. Perhaps unsurprisingly, we find a much higher average clustering coefficient <cit.> for the automated corpus than for the random network, indicating the presence of much more densely connected clusters or communities and stronger local structure; a minimal sketch of this construction and comparison is given below. This network contains rich potential for discovery. To enable the community to explore the possibilities, we have visualized the network and made it fully interactive using <kumu.io>, which can be accessed at <https://embed.kumu.io/82d58b5453d1c4ba4f05d7240d142102>. Figure <ref> shows a few screenshots of the kumu visualization. Any node can be selected (Figure <ref>b), revealing the title, abstract, and words from the glossary found in that paper. Any word can be selected and all nodes that contain it in the network will be highlighted (Figure <ref>c). Zooming in, one can explore local structures and clusters (Figure <ref>d). The automated corpus and topic analysis augmented the manual identification and review of articles and are additional artifacts of this review piece; perhaps even ones that can become standard for future reviews. The automated corpus should be considered a resource that complements the extensive references cited in the body of this review and contains high potential for discovering trends and knowledge about Complexity Heliophysics. It is important to note that the manual and automated corpora are not disjoint, nor is the manual corpus strictly a subset of the automated corpus. Many references are shared across them, lending validation to the process of generating the automated set, but there are many references in the manual set that are not included in the automated one. This points to the flexibility of the scientist-driven discovery process, pulling in relevant references and material that might be more distant or irregularly connected to the research at hand than the necessarily more rigid automated process. This review, in particular, read widely in gathering material, many connections of which an automated approach would likely not have captured. The point is that there must be an intersection of manual and automated gathering of resources, the manual approach benefiting from flexibility and the capacity to range widely and be discerning, and the automated approach benefiting from the volume of resources it can examine.
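Returning to the co-occurrence network introduced above, the following is a minimal sketch of its construction and of the comparison against a size-matched random graph. It is our own illustration under the assumption that each paper record carries an identifier, a year, and its set of matched glossary terms; the toy records are placeholders only.

```python
import networkx as nx

def build_paper_network(papers, min_shared=5):
    """Directed network with an edge from the earlier to the later paper whenever
    the two papers share at least `min_shared` glossary terms (i.e., more than four)."""
    g = nx.DiGraph()
    g.add_nodes_from(p["id"] for p in papers)
    for i, a in enumerate(papers):
        for b in papers[i + 1:]:
            if len(a["terms"] & b["terms"]) >= min_shared:
                earlier, later = sorted((a, b), key=lambda p: p["year"])
                g.add_edge(earlier["id"], later["id"])
    return g

# Toy records; in practice `terms` comes from the glossary-matching step sketched earlier.
papers = [
    {"id": "p1", "year": 1998, "terms": {"complexity", "systems", "fractal", "multiscale",
                                         "network", "nonlinearity", "boundaries"}},
    {"id": "p2", "year": 2005, "terms": {"exponential", "bifurcation", "complexity", "systems",
                                         "dynamical", "equilibrium", "multiscale", "network",
                                         "nonlinearity"}},
    {"id": "p3", "year": 2012, "terms": {"turbulence", "intermittency", "scaling"}},
]
g = build_paper_network(papers)

# Compare local structure with a random graph of the same size and density.
c_obs = nx.average_clustering(g.to_undirected())
rand = nx.gnm_random_graph(g.number_of_nodes(), g.number_of_edges(), seed=0)
c_rand = nx.average_clustering(rand)
print(c_obs, c_rand)
```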
Finally, given the corpus of articles from both manual and automated compilation, `mind maps' <cit.> were constructed from the main ideas in the articles. We do not present those mind maps, but a direct result from that mapping activity is the structure of this review, such that the very sections and progression of this document reflect the chronological development of the ideas of Complexity Heliophysics.
§ APPENDIX D: KEY DATASETS
This appendix compiles a list of important datasets, and their original appearances in the literature, that appear across this review, to aid readers who wish to assemble these datasets and explore data-driven research with the data that have factored importantly in the Complexity Heliophysics paradigm. This list is merely an introduction, certainly not exhaustive.
Citation(s): Description
<cit.>: 34 intervals of high time resolution Solar Wind (IMP8)–AL index dataset. These data are the standard or benchmark dataset for most of the work addressed in <cit.>.
<cit.>: Polar spacecraft ultraviolet imager (UVI) observations.
<cit.>: High quality global images of auroral activity.
<cit.>: 15,500 POLAR UVI frames showing activity in the nighttime sector of the aurora (55 to 90^∘ MLAT, 2000 to 0400 MLT) in the Lyman-Birge-Hopfield-long filter mode. These data permitted a spatiotemporal technique that proves vital in auroral data analyses.
<cit.>: A worldwide collaboration of organizations and national agencies that operate more than 200 ground-based magnetometers. Provides measurements of magnetic field perturbations from all available stations in common coordinate frames, identical time resolution, and a common baseline removal.
<cit.>: Fluctuations of the Earth's magnetic field as observed in-situ.
<cit.>: European Space Agency (ESA) Swarm satellites vector field and absolute scalar magnetometers.
[Nis(2022)]Nishimura_2022 (2022) Chapter 1 - multiscale processes in the m-i-t system. In: Nishimura Y, Verkhoglyadova O, Deng Y, et al (eds) Cross-Scale Coupling and Energy Transfer in the Magnetosphere-Ionosphere-Thermosphere System. Elsevier, p 1–63, https://doi.org/10.1016/B978-0-12-821366-7.00007-X, <https://www.sciencedirect.com/science/article/pii/B978012821366700007X> [Akasofu(1979)]Akasofu_1979 Akasofu SI (1979) Interplanetary energy flux associated with magnetospheric substorms. Planetary and Space Science 27(4):425–431. https://doi.org/10.1016/0032-0633(79)90119-3, <https://www.sciencedirect.com/science/article/pii/0032063379901193> [Akasofu(1980)]Akasofu_1980 Akasofu SI (1980) The solar wind-magnetosphere energy coupling and magnetospheric disturbances. Planetary and Space Science 28(5):495–509. https://doi.org/10.1016/0032-0633(80)90031-8, <https://www.sciencedirect.com/science/article/pii/0032063380900318> [Akasofu(1981)]Akasofu1981EnergyCB Akasofu SI (1981) Energy coupling between the solar wind and the magnetosphere. Space Science Reviews 28:121–190 [Albert and Barabási(2002)]Barabasi2002 Albert R, Barabási AL (2002) Statistical mechanics of complex networks. Rev Mod Phys 74:47–97. 10.1103/RevModPhys.74.47, <https://link.aps.org/doi/10.1103/RevModPhys.74.47> [Anderson(2008)]Anderson2008TheEO Anderson C (2008) The end of theory: The data deluge makes the scientific method obsolete. Wired <https://www.wired.com/2008/06/pb-theory/> [Anderson(1972)]Anderson1972MoreID Anderson PW (1972) More is different. Science 177 4047:393–6 [Angeler et al(2018)Angeler, Allen, Garmestani, Pope, Twidwell, and Bundschuh]Angeler_2018 Angeler DG, Allen CR, Garmestani A, et al (2018) Resilience in environmental risk and impact assessment: Concepts and measurement. Bulletin of environmental contamination and toxicology 101:543–548.
10.1007/s00128-018-2467-5 [Angelopoulos et al(1999)Angelopoulos, Mozer, Mukai, Tsuruda, Kokubun, and Hughes]Angelopoulos_1999 Angelopoulos V, Mozer FS, Mukai T, et al (1999) On the relationship between bursty flows, current disruption and substorms. Geophysical Research Letters 26(18):2841–2844. https://doi.org/10.1029/1999GL900601, <https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/1999GL900601>, https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/1999GL900601https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/1999GL900601 [Armstrong and Fletcher(2019)]Armstrong2019FastSI Armstrong JA, Fletcher L (2019) Fast solar image classification using deep learning and its importance for automation in solar physics. Solar Physics 294:1–23 [Aschwanden(2011)]Aschwanden2011SelfOrganizedCI Aschwanden M (2011) Self-Organized Criticality in Astrophysics: The Statistics of Nonlinear Processes in the Universe. Springer Praxis Books, Springer Berlin Heidelberg, <https://books.google.com/books?id=eNM1OylCljIC> [Aschwanden(2019)]Aschwanden2019SelforganizedCI Aschwanden MJ (2019) Self-organized criticality in solar and stellar flares: Are extreme events scale-free? The Astrophysical Journal 880 [Aschwanden and McTiernan(2010)]Aschwanden2010RECONCILIATIONOW Aschwanden MJ, McTiernan JM (2010) Reconciliation of waiting time statistics of solar flares observed in hard x-rays. The Astrophysical Journal 717:683 – 692 [Aschwanden et al(2014a)Aschwanden, Crosby, Dimitropoulou, Georgoulis, Hergarten, McAteer, Milovanov, Mineshige, Morales, Nishizuka, and et al.]Aschwanden_2014 Aschwanden MJ, Crosby NB, Dimitropoulou M, et al (2014a) 25 years of self-organized criticality: Solar and astrophysics. Space Science Reviews 198(1-4):47–166. 10.1007/s11214-014-0054-6 [Aschwanden et al(2014b)Aschwanden, Xu, and Jing]Aschwanden2014GlobalEO Aschwanden MJ, Xu Y, Jing J (2014b) Global energetics of solar flares: I. magnetic energies. arXiv: Solar and Stellar Astrophysics [Asimov(1942)]AsimovRunaround Asimov I (1942) Runaround. Astounding Science-Fiction <https://isfdb.org/cgi-bin/pl.cgi?57563> [Axelrod(1997)]Axelrod1997TheCO Axelrod R (1997) The complexity of cooperation: Agent-based models of competition and collaboration. Canadian Journal of Political Science 31:612 – 614 [Baevski et al(2020)Baevski, Zhou, rahman Mohamed, and Auli]Baevski2020wav2vec2A Baevski A, Zhou H, rahman Mohamed A, et al (2020) wav2vec 2.0: A framework for self-supervised learning of speech representations. ArXiv abs/2006.11477 [Bak(1997)]bak1997 Bak P (1997) How Nature Works: The Science of Self-organized Criticality. Oxford University Press, <https://books.google.com/books?id=Bth4QgAACAAJ> [Bak and Tang(1989)]Bak1989EarthquakesAA Bak P, Tang C (1989) Earthquakes as a self‐organized critical phenomenon. Journal of Geophysical Research 94:15,635–15,637 [Bak et al(1987)Bak, Tang, and Wiesenfeld]Bak_1987 Bak P, Tang C, Wiesenfeld K (1987) Self-organized criticality: An explanation of the 1/f noise. Phys Rev Lett 59:381–384. 10.1103/PhysRevLett.59.381, <https://link.aps.org/doi/10.1103/PhysRevLett.59.381> [Bak-Coleman et al(2021)Bak-Coleman, Alfano, Barfuss, Bergstrom, Centeno, Couzin, Donges, Galesic, Gersick, Jacquet, Kao, Moran, Romanczuk, Rubenstein, Tombak, Van Bavel, and Weber]BakColeman_2021 Bak-Coleman JB, Alfano M, Barfuss W, et al (2021) Stewardship of global collective behavior. Proceedings of the National Academy of Sciences 118(27). 
10.1073/pnas.2025764118, <https://www.pnas.org/content/118/27/e2025764118>, https://arxiv.org/abs/https://www.pnas.org/content/118/27/e2025764118.full.pdfhttps://arxiv.org/abs/https://www.pnas.org/content/118/27/e2025764118.full.pdf [Baker et al(1979)Baker, Belian, Higbie, and Hones]Baker_1979 Baker DN, Belian RD, Higbie PR, et al (1979) High‐energy magnetospheric protons and their dependence on geomagnetic and interplanetary conditions. Journal of Geophysical Research 84:7138–7154 [Baker et al(1981a)Baker, Higbie, and Belian]Baker1981GlobalPO Baker DN, Higbie PR, Belian RD (1981a) Global properties of the magnetosphere during a substorm growth phase [Baker et al(1981b)Baker, Hones, Payne, and Feldman]Baker1981AHT Baker DN, Hones EW, Payne JB, et al (1981b) A high time resolution study of interplanetary parameter correlations with ae. Geophysical Research Letters 8:179–182 [Baker et al(1986)Baker, Bargatze, and Zwickl]Baker1986MagnetosphericRT Baker DN, Bargatze L, Zwickl RD (1986) Magnetospheric response to the imf - substorms. Journal of geomagnetism and geoelectricity 38:1047–1073 [Barabási and Albert(1999)]Barabsi1999EmergenceOS Barabási, Albert (1999) Emergence of scaling in random networks. Science 286 5439:509–12 [Bargatze et al(1985)Bargatze, Baker, McPherron, and Hones Jr.]Bargatze_1985 Bargatze LF, Baker DN, McPherron RL, et al (1985) Magnetospheric impulse response for many levels of geomagnetic activity. Journal of Geophysical Research: Space Physics 90(A7):6387–6394. https://doi.org/10.1029/JA090iA07p06387, <https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/JA090iA07p06387>, https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/JA090iA07p06387https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/JA090iA07p06387 [Becker et al(1995)Becker, de Vries, and Eckhardt]Becker_1995 Becker T, de Vries H, Eckhardt B (1995) Dynamics of a stochastically driven running sandpile. Journal of Nonlinear Science 5:167–188 [Beltagy et al(2019)Beltagy, Lo, and Cohan]Beltagy2019SciBERTAP Beltagy I, Lo K, Cohan A (2019) Scibert: A pretrained language model for scientific text. In: Conference on Empirical Methods in Natural Language Processing [Bentley et al(2011)Bentley, Brooke, Csillaghy, Fellows, Le Blanc, Messerotti, Perez-Su´rez, Pierantoni, and Soldati]BentleyHelio Bentley R, Brooke J, Csillaghy A, et al (2011) HELIO: Discovery and Analysis of Data in Heliophysics. In: 2011 IEEE Seventh International Conference on eScience, pp 248–255, 10.1109/eScience.2011.42 [Berditchevskaia et al(2022)Berditchevskaia, Maliaraki, and Stathoulopoulos]Berditchevskaia2022ADA Berditchevskaia A, Maliaraki E, Stathoulopoulos K (2022) A descriptive analysis of collective intelligence publications since 2000, and the emerging influence of artificial intelligence. Collective Intelligence 1 [Bhamra et al(2011)Bhamra, Dani, and Burnard]Bhamra2011ResilienceTC Bhamra R, Dani S, Burnard KJ (2011) Resilience: the concept, a literature review and future directions. International Journal of Production Research 49:5375 – 5393 [Biffl and Sabou(2016)]Biffl2016SemanticWT Biffl S, Sabou M (2016) Semantic web technologies for intelligent engineering applications. Cambridge International Law Journal [Biggs et al(1986)Biggs, Lloyd, and Wilson]biggs1986graph Biggs N, Lloyd E, Wilson R (1986) Graph Theory, 1736-1936. 
Clarendon Press, <https://books.google.com/books?id=XqYTk0sXmpoC> [Blei et al(2003)Blei, Ng, and Jordan]Blei_2003 Blei DM, Ng AY, Jordan MI (2003) Latent dirichlet allocation. J Mach Learn Res 3(null):993–1022 [Boccaletti et al(2006)Boccaletti, Latora, Moreno, Chavez, and Hwang]Boccaletti_2006 Boccaletti S, Latora V, Moreno Y, et al (2006) Complex networks: Structure and dynamics. Physics Reports 424(4):175–308. https://doi.org/10.1016/j.physrep.2005.10.009, <https://www.sciencedirect.com/science/article/pii/S037015730500462X> [Bommasani et al(2021)Bommasani, Hudson, Adeli, Altman, Arora, von Arx, Bernstein, Bohg, Bosselut, Brunskill, Brynjolfsson, Buch, Card, Castellon, Chatterji, Chen, Creel, Davis, Demszky, Donahue, Doumbouya, Durmus, Ermon, Etchemendy, Ethayarajh, Fei-Fei, Finn, Gale, Gillespie, Goel, Goodman, Grossman, Guha, Hashimoto, Henderson, Hewitt, Ho, Hong, Hsu, Huang, Icard, Jain, Jurafsky, Kalluri, Karamcheti, Keeling, Khani, Khattab, Koh, Krass, Krishna, Kuditipudi, Kumar, Ladhak, Lee, Lee, Leskovec, Levent, Li, Li, Ma, Malik, Manning, Mirchandani, Mitchell, Munyikwa, Nair, Narayan, Narayanan, Newman, Nie, Niebles, Nilforoshan, Nyarko, Ogut, Orr, Papadimitriou, Park, Piech, Portelance, Potts, Raghunathan, Reich, Ren, Rong, Roohani, Ruiz, Ryan, R'e, Sadigh, Sagawa, Santhanam, Shih, Srinivasan, Tamkin, Taori, Thomas, Tramèr, Wang, Wang, Wu, Wu, Wu, Xie, Yasunaga, You, Zaharia, Zhang, Zhang, Zhang, Zhang, Zheng, Zhou, and Liang]Bommasani2021OnTO Bommasani R, Hudson DA, Adeli E, et al (2021) On the opportunities and risks of foundation models. ArXiv abs/2108.07258 [Börner(2015)]Brner2015AtlasOK Börner K (2015) Atlas of knowledge: Anyone can map [Bornmann et al(2020)Bornmann, Mutz, and Haunschild]Bornmann2020GrowthRO Bornmann L, Mutz R, Haunschild R (2020) Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases. Humanities and Social Sciences Communications 8:1–15 [Bornmann et al(2021)Bornmann, Mutz, and Haunschild]Bornmann2021GrowthRO Bornmann L, Mutz R, Haunschild R (2021) Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases. Humanities and Social Sciences Communications 8:1–15 [Borovsky(2013)]Borovsky_2013 Borovsky JE (2013) Physical improvements to the solar wind reconnection control function for the earth's magnetosphere. Journal of Geophysical Research: Space Physics 118(5):2113–2121. https://doi.org/10.1002/jgra.50110, <https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/jgra.50110>, https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/jgra.50110https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/jgra.50110 [Borovsky and Denton(2018)]Borovsky2018ExplorationOA Borovsky JE, Denton MH (2018) Exploration of a composite index to describe magnetospheric activity: Reduction of the magnetospheric state vector to a single scalar. Journal of Geophysical Research: Space Physics 123:7384 – 7412 [Borovsky and Osmane(2019)]Borovsky2019CompactingTD Borovsky JE, Osmane A (2019) Compacting the description of a time-dependent multivariable system and its multivariable driver by reducing the state vectors to aggregate scalars: the earth's solar-wind-driven magnetosphere. 
Nonlinear Processes in Geophysics 26:429–443 [Borovsky and Yakymenko(2017)]Borovsky_2017 Borovsky JE, Yakymenko K (2017) Substorm occurrence rates, substorm recurrence times, and solar wind structure. Journal of Geophysical Research: Space Physics 122(3):2973–2998. https://doi.org/10.1002/2016JA023625, <https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2016JA023625>, https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/2016JA023625https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/2016JA023625 [Borovsky et al(2020)Borovsky, Delzanno, Valdivia, Moya, Stepanova, Birn, Blum, Lotko, and Hesse]Borovsky2020OutstandingQI Borovsky JE, Delzanno GL, Valdivia JA, et al (2020) Outstanding questions in magnetospheric plasma physics: The pollenzo view. Journal of Atmospheric and Solar-Terrestrial Physics 208:105,377 [Bortnik et al(2016)Bortnik, Li, Thorne, and Angelopoulos]Bortnik2016AUA Bortnik J, Li W, Thorne RM, et al (2016) A unified approach to inner magnetospheric state prediction. Journal of Geophysical Research: Space Physics 121:2423 – 2430 [Brillinger(2001)]Brillinger2001TimeS Brillinger DR (2001) Time series - data analysis and theory [Brittnacher et al(1997)Brittnacher, Spann, Parks, and Germany]BRITTNACHER19971037 Brittnacher M, Spann J, Parks G, et al (1997) Auroral observations by the polar ultraviolet imager (uvi). Advances in Space Research 20(4):1037–1042. https://doi.org/10.1016/S0273-1177(97)00558-9, <https://www.sciencedirect.com/science/article/pii/S0273117797005589>, results of the IASTP Program [Brown et al(2022)Brown, Svoboda, Meredith, Lane, and Horne]Brown_2022 Brown EJE, Svoboda F, Meredith NP, et al (2022) Attention-based machine vision models and techniques for solar wind speed forecasting using solar euv images. Space Weather 20(3):e2021SW002,976. https://doi.org/10.1029/2021SW002976, <https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2021SW002976>, e2021SW002976 2021SW002976, https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2021SW002976https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2021SW002976 [Brunk(2001)]Brunk2001SelfOrganizedCA Brunk GG (2001) Self-organized criticality: A new theory of political behaviour and some of its implications. British Journal of Political Science 31:427–445 [Buzan and Buzan(1994)]Buzan1994TheMM Buzan T, Buzan B (1994) The mind map book: How to use radiant thinking to maximize your brain's untapped potential [Camporeale(2019)]Camporeale_2019 Camporeale E (2019) The challenge of machine learning in space weather: Nowcasting and forecasting. Space Weather 17(8):1166–1207. https://doi.org/10.1029/2018SW002061, <https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018SW002061>, https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018SW002061https://arxiv.org/abs/https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018SW002061 [Carpenter et al(2001)Carpenter, Walker, Anderies, and Abel]Carpenter2001FromMT Carpenter S, Walker B, Anderies JM, et al (2001) From metaphor to measurement: Resilience of what to what? Ecosystems 4:765–781 [Carpenter and Brock(2006)]Carpenter2006RisingVA Carpenter SR, Brock WAB (2006) Rising variance: a leading indicator of ecological transition. 
Ecology letters 9 3:311–8 [Casdagli(1992)]Casdagli1992ADS Casdagli M (1992) A dynamical systems approach to modeling input-output systems [Castiglione et al(2008)Castiglione, Falcioni, Lesne, and Vulpiani]Castiglione2008ChaosAC Castiglione P, Falcioni M, Lesne A, et al (2008) Chaos and Coarse Graining in Statistical Mechanics [Chang(1992a)]Chang1992PathID Chang T (1992a) Path integrals, differential renormalization-group, and stochastic systems near criticality. International Journal of Engineering Science 30:1401–1405 [Chang(1998)]Chang_1998 Chang T (1998) Sporadic localized reconnections and intermittent turbulence in the magnetotail, AGU [Chang and Wu(2007)]Chang_2007 Chang T, Wu C (2007) Dynamical complexity, intermittent turbulence, coarse‐grained dissipation, criticality and multifractal processes. AIP Conference Proceedings 932(1):161–166. 10.1063/1.2778959, <https://aip.scitation.org/doi/abs/10.1063/1.2778959>, https://arxiv.org/abs/https://aip.scitation.org/doi/pdf/10.1063/1.2778959https://arxiv.org/abs/https://aip.scitation.org/doi/pdf/10.1063/1.2778959 [Chang et al(2003)Chang, Tam, Wu, and Consolini]Chang_2003 Chang T, Tam SW, Wu CC, et al (2003) Self-organized criticality: An explanation of the 1/f noise. Space Science Reviews 107(1572-9672). 10.1023/A:1025502023494, <https://doi.org/10.1023/A:1025502023494> [Chang(1999)]Chang_1999 Chang TN (1999) Self-organized criticality, multi-fractal spectra, sporadic localized reconnections and intermittent turbulence in the magnetotail [Chang(1992b)]Chang1992LowdimensionalBA Chang TTS (1992b) Low-dimensional behavior and symmetry breaking of stochastic systems near criticality-can these effects be observed in space and in the laboratory? IEEE Transactions on Plasma Science 20:691–694 [Chapman and Watkins(2000)]Chapman2000AvalanchingAS Chapman SC, Watkins NW (2000) Avalanching and self-organised criticality, a paradigm for geomagnetic activity? Space Science Reviews 95:293–307 [Chapman et al(1998)Chapman, Watkins, Dendy, Helander, and Rowlands]Chapman_1998 Chapman SC, Watkins NW, Dendy R, et al (1998) A simple avalanche model as an analogue for magnetospheric activity. Geophysical Research Letters 25 [Chapman et al(1999)Chapman, Watkins, and Rowlands]Chapman1999SignaturesOD Chapman SC, Watkins NW, Rowlands G (1999) Signatures of dual scaling regimes in a simple avalanche model for magnetospheric activity. Journal of Atmospheric and Solar-Terrestrial Physics 63:1361–1370 [Chapman et al(2004)Chapman, Dendy, and Watkins]Chapman2004RobustnessAS Chapman SC, Dendy R, Watkins NW (2004) Robustness and scaling: key observables in the complex dynamic magnetosphere. Plasma Physics and Controlled Fusion 46 [Charbonneau et al(2001)Charbonneau, McIntosh, Liu, and Bogdan]Charbonneau_2001 Charbonneau P, McIntosh SW, Liu HL, et al (2001) Avalanche models for solar flares. Solar Physics 203:321–353. 10.1023/A:1013301521745 [Chiang(2000)]Chiang2000CatchingCF Chiang TK (2000) Catching crumbs from the table. Nature 405:517–517 [Chu et al(2017)Chu, Bortnik, Li, Ma, Denton, Yue, Angelopoulos, Thorne, Darrouzet, Ozhogin, Kletzing, Wang, and Menietti]Chu2017ANN Chu X, Bortnik J, Li W, et al (2017) A neural network model of three‐dimensional dynamic electron density in the inner magnetosphere. Journal of Geophysical Research: Space Physics 122:9183 – 9197 [Cilliers(2000)]Cilliers2000 Cilliers P (2000) Knowledge, complexity, and understanding. Emergence 2(4):7–13. 
http://arxiv.org/abs/2307.01238v1
20230703122204
Learning Difference Equations with Structured Grammatical Evolution for Postprandial Glycaemia Prediction
[ "Daniel Parra", "David Joedicke", "J. Manuel Velasco", "Gabriel Kronberger", "J. Ignacio Hidalgo" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Daniel Parra^1, David Joedicke^2, J. Manuel Velasco^1, Gabriel Kronberger^2, J. Ignacio Hidalgo^1 ^1 Department of Computer Architecture and Automatics, Universidad Complutense de Madrid, Madrid, Spain. {dparra02, mvelascc, hidalgo}@ucm.es ^2 University of Applied Sciences of Upper Austria, Hagenberg, Austria. {David.Joedicke, Gabriel.Kronberger}@fh-hagenberg.at People with diabetes must carefully monitor their blood glucose levels, especially after eating. Blood glucose regulation requires a proper combination of food intake and insulin boluses. Glucose prediction is vital to avoid dangerous post-meal complications in treating individuals with diabetes. Although traditional methods, such as artificial neural networks, have shown high accuracy rates, they are sometimes not suitable for developing personalised treatments by physicians due to their lack of interpretability. In this study, we propose a novel glucose prediction method emphasising interpretability: Interpretable Sparse Identification by Grammatical Evolution. Combined with a previous clustering stage, our approach provides finite difference equations to predict postprandial glucose levels up to two hours after meals. We divide the dataset into four-hour segments and perform clustering based on the blood glucose values in the two-hour window before the meal. Prediction models are trained for each cluster for the two-hour windows after meals, allowing predictions in 15-minute steps and yielding up to eight predictions at different time horizons. Prediction safety was evaluated based on Parkes Error Grid regions. Our technique produces safe predictions through explainable expressions, avoiding zones D (0.2% average) and E (0%) and reducing predictions in zone C (6.2%). In addition, our proposal has slightly better accuracy than other techniques, including sparse identification of non-linear dynamics and artificial neural networks. The results demonstrate that our proposal provides interpretable solutions without sacrificing prediction accuracy, offering a promising approach to glucose prediction in diabetes management that balances accuracy, interpretability, and computational efficiency. Keywords: Diabetes, Machine Learning, System Dynamics, Symbolic Regression, Evolutionary Computation, Neural Networks. * This paper proposes a novel approach to glucose prediction in diabetes management that emphasises interpretability: Interpretable Sparse Identification by Grammatical Evolution (ISIGE). * The proposed technique combines clustering with ISIGE to obtain finite difference equations that predict postprandial glucose levels up to two hours after meals. * We employ data from 24 different participants with type I diabetes mellitus, clustered based on the blood glucose values in the two-hour window before the meal. * Although ANNs are among the best-performing and most commonly used techniques for glucose prediction, in our study both SINDy and ISIGE obtained better results in most clusters. § INTRODUCTION More than 450 million people suffer from diabetes. There are two main types of diabetes: type I is an autoimmune disease that causes the destruction of the insulin-producing cells (beta cells) of the pancreas, while type II appears when there is resistance to insulin action. Insulin-dependent diabetes patients need to estimate, or better still to predict, their blood glucose levels in the near term to manage their condition and prevent complications.
Predicting glucose levels can help individuals make informed decisions about their diet, exercise, and especially the insulin and medication they use to maintain their blood glucose within a healthy range. The last decade has seen the rapid spread of new and reliable continuous glucose monitoring (CGM) systems, which provide real-time glucose readings and trend data to help individuals adjust their treatment plan accordingly. The availability of data from CGMs has led the research in the field, and great efforts have been made in the search for accurate glucose prediction models. Some of them are black-box models <cit.>, others are based on analytical models <cit.>, and most of them provide predictions for a time horizon from 15 to 120 minutes <cit.>. Among them, symbolic regression (SR) <cit.> techniques and artificial neural networks (ANNs) obtained very good performance <cit.>. ANNs have become a popular tool for glucose prediction in the management of diabetes due to their ability to capture complex, non-linear relationships between input variables and glucose levels. However, one of the key challenges in using ANNs for glucose prediction is the lack of interpretability of the solutions they provide. ANNs can be viewed as black-box models, making it difficult to understand how they arrive at their predictions and limiting their usefulness in clinical decision making. This problem of interpretability has led researchers to explore alternative explainable AI techniques that provide both accurate forecasts and insight for clinicians. This work aims to explore techniques for deriving finite difference equations that accurately represent the dynamics of blood glucose levels, with the benefit of obtaining interpretable models that can serve to aid the work not only of people with diabetes, but also of diabetes clinicians. To achieve this interpretability, we have developed a variant of grammatical evolution (GE) that produces sparse solutions: Interpretable Sparse Identification by Grammatical Evolution (ISIGE). This technique seeks to combine the good results obtained in recent years with Grammatical Evolution for blood glucose prediction <cit.> and the advantages of a recently proposed method: sparse identification of non-linear dynamics (SINDy). SINDy is an optimization-based method that constructs a sparse model of the dynamics of a system as a linear model over a set of non-linear base functions. To achieve sparsity, SINDy adds a sparsity-promoting regularization term which penalizes models with too many non-zero coefficients, forcing the algorithm to select a smaller subset of variables and interactions most relevant to the dynamics of the system. This helps to avoid overfitting, improve the interpretability of the model, and reduce computational complexity <cit.>. Conceptually, the approach is similar to fast function extraction (FFX) <cit.> but focuses on system dynamics. In addition, to refine the models and predictions, we employ a clustering approach to group the glucose time series before meals, which gives us several scenarios for prediction. Due to the complexity of this situation and the need for a technique with good performance characteristics, we have implemented ISIGE using Dynamic Structured GE (DSGE). Finally, to reduce the risk of premature convergence and to improve the diversity of the generated solutions, we have applied the ϵ-lexicase selection technique <cit.>.
In this paper, we describe our technique and analyze the experimental results against SINDy and ANNs in terms of precision, suitability for diabetes care and interpretability. The results of this study are significant in several ways: * The results demonstrate that the proposed ISIGE approach, together with SINDy, outperforms well-tested ANNs in terms of prediction accuracy. * ISIGE is the technique that provides the most accurate predictions. This is particularly important for glucose prediction in diabetes treatment, where accurate and safe predictions are crucial for clinical decision-making. * The fact that ISIGE and SINDy show similar interpretability suggests that ISIGE can provide interpretable solutions without sacrificing prediction performance and safety. These results offer a promising new approach to glucose prediction in diabetes management that balances accuracy, interpretability and computational efficiency. The subsequent sections of this article are structured as follows: Section <ref> presents an overview of the existing literature on utilizing machine learning (ML) techniques for hypoglycemia prediction. In Section <ref>, our approach is elucidated, encompassing the specific features of the grammar used in our version of DSGE, together with the iterative numerical evaluation algorithm used for the fitness computation. Section <ref> outlines the workflow and primary techniques employed in this research as a point of reference. The experimental setup, the data utilized, and the resulting outcomes obtained from our proposed approach are detailed in Section <ref>. Finally, Section <ref> provides the concluding remarks of this study. § RELATED WORK Dynamic models for glucose prediction are mathematical models that can simulate and predict the behaviour of blood glucose levels over time. They have been used in a variety of applications, such as personalized glucose control algorithms, prediction of hypoglycemia events, and optimization of insulin therapy. There are three types of dynamic models for glucose prediction: * Physiological models <cit.>: These models use a set of differential equations to simulate the dynamics of glucose and insulin in the body. They are based on our current understanding of the complex interactions between glucose and insulin in the body. * Data-driven models <cit.>: These are mathematical models that are constructed based on data obtained from individuals with diabetes. They use statistical and machine learning techniques to analyze the data and identify patterns that can be used to predict future glucose levels. Different models have been developed in conjunction with CGM systems to make short-term or long-term predictions. * Hybrid models <cit.>: These models combine physiological and data-driven models to take advantage of the strengths of both types and improve the accuracy of glucose predictions. As we can see, blood glucose prediction is a very active field of research, and although significant progress has been made, several challenges remain to be overcome, including: * Individual differences: the metabolism of each person is unique, and blood glucose dynamics can vary significantly from individual to individual and with each person's situation (e.g., changes during pregnancy, illness requiring medication administration, etc.). * Delayed response: there is always a variable delay between insulin administration or carbohydrate intake and its effect on the blood glucose value.
* Sensor limitations: the accuracy and reliability of glucose sensors can vary over their lifetime, so the resulting noise can affect the quality of the data used for training and model prediction. * Complex dynamics: blood glucose dynamics are influenced by many factors that can interact in complex ways, such as food intake, physical activity, medication, sleep, and stress. This paper addresses these challenges using two techniques that we consider particularly suitable for glucose prediction: structured grammatical evolution (SGE) and SINDy. These two approaches can handle large and complex datasets with noisy and incomplete data, automatically extract the most relevant inputs, and identify the main mathematical functions needed to describe glucose dynamics. Additionally, they generate interpretable models that can be analyzed, allowing researchers to better understand the underlying dynamics of blood glucose regulation. SINDy <cit.> is a data-driven approach used to discover governing equations that describe the dynamics of a system. Although it is a relatively new approach, several research papers have been published addressing problems in different fields, including physics <cit.>, network biology <cit.> or chemical kinetics <cit.>. The primary reference for GE applied to glucose forecasting is the book chapter by Hidalgo et al. <cit.>, where a new approach for identifying mathematical models that can predict blood glucose levels in people with diabetes using GE is proposed. The approach was evaluated using data from the OhioT1DM dataset <cit.> containing glucose and physiological data from 43 people with diabetes. The results showed that the proposed system outperformed several baseline approaches, including linear regression, ARIMA models, and ANNs, regarding accuracy and interpretability. The lead article in the field of SGE is <cit.>. This paper presents a novel approach to predicting glucose levels in people with diabetes using SGE, a variant of GE that encodes the chromosomes of individuals using a list of values corresponding to the productions of each non-terminal symbol, in order to increase the locality of the genetic operators. Different selection schemes can be used with SGE. Spector <cit.> presented a preliminary report with a novel idea: lexicase selection. Lexicase selection has since been applied to many optimisation problems. Helmuth et al. <cit.> used lexicase selection to evolve models for several problems, one of them being the prediction of blood pressure based on a set of physiological features. La Cava and Spector (2016) <cit.> proposed a variant (called ϵ-lexicase) focused on continuous fitness landscapes that produces better results for symbolic regression problems. Spector et al. (2018) <cit.> used lexicase selection to evolve models for predicting protein-ligand binding affinity, and the work of Stanton et al. <cit.> used lexicase selection to evolve robot controllers for tasks such as navigation and object manipulation. § METHODS AND TECHNIQUES A finite difference equation (FDE) is a mathematical expression that describes the difference between the values of a variable y at two discrete time points <cit.>. An FDE takes the form of Equation <ref>, where x⃗ represents the set of input variables involved in the equation, y represents the target variable, and θ are optional calibration parameters.
Δ y(t) = y(t+Δ t) - y(t) = f(x⃗(t), y(t), θ) FDEs are classified based on their properties, such as linearity, nonlinearity, and order, which is the highest difference in time steps that explicitly appears in the equation. For example, Equation <ref> is a first-order equation, while Equation <ref> is a second-order equation, as it involves the difference between y(t+2Δ t) and y(t). y(t+2Δ t) - y(t) = f(x⃗(t)) Due to their simplicity, finite difference equations are handy for modelling dynamic processes which are measured with constant frequency (equidistant time steps), such as blood glucose levels measured by CGM. Equation <ref> expresses the dynamics of glucose values as a finite difference equation problem. The description of the input variables is given in Table <ref>. Δ G(t) = G(t+Δ t) - G(t) = f(G(t), B_I(t), I_B(t), F_ch(t), HR(t), C(t), S(t)) As is often the case in physical systems, we assume that in the glucose regulation system only a few terms are relevant in defining its dynamics. Therefore, we can consider the governing equations to be sparse in a high-dimensional non-linear function space. In this way, we aim to create a simplified model that accurately captures the essential dynamics of the system while minimising complexity. The SINDy algorithm is particularly suitable for this purpose. In subsection <ref>, we briefly explain how SINDy works. Then, in subsection <ref>, we present our proposal, ISIGE, which seeks to incorporate the benefits of SINDy within the paradigm of GE.
§.§ Sparse Identification of Nonlinear Dynamics (SINDy) SINDy is a data-driven method for discovering the governing equations of a dynamical system from time series data <cit.>. In Algorithm <ref>, we have summarized the main steps of SINDy. * Data gathering (line 2): the first step is to collect data from the system that we want to model. The data are time-series measurements of the state variables of the system (Table <ref>). * Construct a library of candidate functions (line 3): the next step is to construct a library of candidate functions that could potentially be part of the governing equations. In the general SINDy algorithm, these functions could include polynomials, trigonometric functions, exponentials, etc., depending on the system being modeled. In this work, and based on our prior knowledge of the system, we have chosen terms up to X^P_2, the quadratic non-linearities of X (Equation <ref>). Θ(X) = [ 1   X   X^P_2 ] * Development of all the linear combinations (line 5): these are grouped in Equation <ref>, where t_0 represents the first time step, t_1 the subsequent time step, and so on, and each row of X^P_2 contains the pairwise products of the state variables at one time step: X^P_2 = [ [ G(t_0)·B_I(t_0) G(t_0)·I_B(t_0) G(t_0)·F_CH(t_0) ⋯ F_CH^2(t_0); G(t_1)·B_I(t_1) G(t_1)·I_B(t_1) G(t_1)·F_CH(t_1) ⋯ F_CH^2(t_1); ⋮ ⋮ ⋮ ⋱ ⋮; G(t_n)·B_I(t_n) G(t_n)·I_B(t_n) G(t_n)·F_CH(t_n) ⋯ F_CH^2(t_n) ] ] * Add regularization (line 6): SINDy uses sequentially thresholded ridge regression as regularization. The objective function ||Θξ - Ẋ_t||^2_2 + λ ||ξ||^2_2 is minimized by iteratively performing least squares and masking out the elements of the weight array ξ that fall below a given threshold λ. In this paper we used λ = 0.5, which tests have shown to produce the best results (a minimal sketch of this step is given after this list). * Solving the optimization problem (lines 7-8): through a sparse regression algorithm, we identify the combination of terms that best describes the observed dynamics of the system. * Once the sparse coefficients have been found (line 9), we have the governing equations of the system.
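To make the thresholding step concrete, the following minimal Python sketch implements sequentially thresholded ridge regression on a quadratic candidate library. It is an illustration of the idea rather than the implementation used in this work: the synthetic data, the helper names and the (smaller) threshold are our own choices.

```python
import numpy as np

def quadratic_library(X):
    """Candidate library Theta(X) = [1, x_i, x_i * x_j] built column-wise."""
    n, d = X.shape
    cols = [np.ones((n, 1)), X]
    for i in range(d):
        for j in range(i, d):
            cols.append((X[:, i] * X[:, j])[:, None])
    return np.hstack(cols)

def stridge(theta, dG, lam=0.1, alpha=1e-6, n_iter=10):
    """Sequentially thresholded ridge regression: sparse xi with dG ~ theta @ xi."""
    m = theta.shape[1]
    xi = np.linalg.lstsq(theta.T @ theta + alpha * np.eye(m), theta.T @ dG, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < lam            # mask coefficients below the threshold
        xi[small] = 0.0
        keep = ~small
        if keep.any():                      # refit the surviving terms only
            sub = theta[:, keep]
            xi[keep] = np.linalg.lstsq(sub.T @ sub + alpha * np.eye(keep.sum()),
                                       sub.T @ dG, rcond=None)[0]
    return xi

# Toy state: [glucose, insulin bolus, basal insulin, carbohydrates]; the "true"
# dynamics use only two of the fifteen candidate terms, which STRidge recovers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
dG = 0.8 * X[:, 0] - 0.3 * X[:, 1] * X[:, 3] + 0.01 * rng.normal(size=200)
xi = stridge(quadratic_library(X), dG)
print(np.flatnonzero(xi), xi[np.flatnonzero(xi)].round(3))
```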
Applied to our problem, we start from the assumption that future glucose values are based primarily on the current glucose value, the basal rate of insulin administration, carbohydrate intake and bolus insulin injections, x⃗(t) = [G(t), B_I(t), I_B(t), F_ch(t)], x⃗(t) ∈ R^n, where only the glucose component is predicted and the remaining variables are treated as measured inputs. The library of candidate functions is then used to create the set of possible terms in the governing equations, where X^P_2 represents the quadratic non-linearities of X, X^P_3 the cubic non-linearities, and so on (in this work we only use terms up to X^P_2); its rows are built from the values of these variables at the successive time steps t_0, t_1, and so on. Finally, through a sparse regression algorithm, we identify the combination of terms that best describes the observed dynamics of the system. §.§ Interpretable Sparse Identification by Grammatical Evolution As mentioned above, in this work we propose Interpretable Sparse Identification by Grammatical Evolution (ISIGE) to obtain FDEs that model the dynamics of blood glucose. Dynamic Structured Grammatical Evolution (DSGE), a variant of Grammatical Evolution (GE), is in charge of obtaining Δ G(t) as expressed in Equation <ref>, from which the prediction for the next time instant follows: Ĝ(t+Δ t) = G(t) + Δ G(t) = G(t) + f(x(t)) DSGE addresses two limitations of GE. First, it overcomes the low-locality problem by ensuring that a slight change in the genotype also implies a small change in the phenotype, resulting in high locality. Second, it reduces redundancy, which arises when different genotypes produce the same phenotype: SGE creates a one-to-one mapping between the genotype and the non-terminals, using a list of integers for each non-terminal to represent the possible expansions of that particular non-terminal, and DSGE replaces these fixed-size lists with variable-size lists. A further advantage of using DSGE is that it yields explainable expressions, making it easier to analyse the impact and importance of the different components of the system; our proposed approach is therefore expected to provide interpretable and efficient models for blood glucose dynamics. Our proposal also includes an important feature that enhances DSGE: the predicted glucose values are included in the grammar, leading to more accurate and diverse finite difference equations. To achieve this, we have used the concept of sparse identification, which suggests that only a few terms are relevant for defining the dynamics of many physical systems. Starting with the variables used to calculate the glucose values (shown in Table <ref>), we add the related terms up to X^P_2 to obtain the final set of variables. DSGE can then select the variables to be used in the expression from this set and generate additional combinations as necessary.
§.§.§ ϵ-Lexicase selection The characteristics of our data are explained in depth in Section <ref>. Broadly speaking, we have grouped the data into clusters comprising multiple 4-hour glucose time-series segments. The behaviour of these segments is highly varied (Figure <ref>), and if the selection mechanism of the evolutionary algorithm fails to take this into account, many individuals that do not achieve high fitness in all cases will be rejected. For this reason, we adopt lexicase selection (and, more specifically, ϵ-Lexicase selection) as the selection mechanism. Lexicase selection <cit.> is a powerful search strategy used in evolutionary computation to overcome the problem of premature convergence. It works by selecting individuals that perform well on a randomly chosen subset of the fitness cases and repeating the process with the remaining fitness cases until only one individual is left. However, this method may become too selective, leading to limited exploration of the search space and a loss of diversity. To address this issue, ϵ-Lexicase selection <cit.> was proposed, introducing a tolerance parameter ϵ: individuals are retained if they perform within a certain threshold of the best-performing individuals on each fitness case considered. This approach allows a more comprehensive exploration of the search space while preserving good individuals. Algorithm <ref> outlines the process of obtaining difference equations using ISIGE. We load the datasets and problem properties, then create an initial population and start the evolutionary process. We obtain the phenotypes (expressions) of each generation by decoding the population with the grammar; each individual j encodes a prediction Ĝ(t+Δ t)_j of the form given in Equation <ref>. Using ϵ-Lexicase selection, we determine the parents for the next population, and we generate a new population by crossing and mutating the selected parents.
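A minimal sketch of the selection step just described is given below; it picks a single parent from a matrix of per-dataset errors. The MAD-based definition of ϵ follows the ϵ-lexicase proposal cited above, while the array layout, function name and toy data are our own illustrative choices rather than the actual ISIGE implementation.

```python
import numpy as np

def epsilon_lexicase_select(errors, rng):
    """Return the index of one selected parent.

    errors[i, k] is the RMSE of individual i on fitness case (dataset) k;
    lower is better.
    """
    n_ind, n_cases = errors.shape
    candidates = np.arange(n_ind)
    for k in rng.permutation(n_cases):                  # consider the cases in random order
        col = errors[candidates, k]
        best = col.min()
        eps = np.median(np.abs(col - np.median(col)))   # MAD-based tolerance
        candidates = candidates[col <= best + eps]      # keep only near-elite individuals
        if candidates.size == 1:
            break
    return int(rng.choice(candidates))

rng = np.random.default_rng(1)
errors = rng.gamma(2.0, 10.0, size=(50, 12))   # 50 individuals, 12 post-meal datasets
parents = [epsilon_lexicase_select(errors, rng) for _ in range(6)]
print(parents)
```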
§.§.§ Iterative Numerical Evaluation One particular feature of working with FDEs is the application of an iterative numerical method to solve Equation <ref>, which calculates the glucose concentration at the next time instant, Ĝ(t + Δ t). In our study, each time step t_i corresponds to one sample every 15 minutes within the two hours after the meal, so the forecast horizon of two hours (the usual time required for digesting a meal) comprises eight steps. We evaluate Ĝ(t_i+1) using the different variables of the system (Table <ref>), including glucose (Ĝ(t_i)), basal insulin (I_B(t_i)), insulin bolus (B_I(t_i)), carbohydrate intake (F_ch(t_i)), heart rate (HR(t_i)), calories burned (C(t_i)), step count (S(t_i)) and, where present, the calibration parameters θ: Δ G(t_i) = G(t_i+1) - G(t_i) = f(G(t_i), B_I(t_i), I_B(t_i), F_ch(t_i), HR(t_i), C(t_i), S(t_i)) G(t_i+1) = G(t_i) + Δ G(t_i) = G(t_i) + f(x(t_i)) In these equations, Δ G(t_i) represents the change in glucose concentration between the two time instances t_i and t_i+1. All variables are initialised at t_0 from the dataset; in the case of glucose, only this first value is taken from the data, and for the remaining iterations the value predicted in the previous step is used. The glucose values are then computed recursively for each subsequent time instance until the forecast horizon is reached, so that G(t_i+1) is predicted for all time instances within the forecast window from the values forecast at previous instants. This process is illustrated in Figure <ref>. During the ϵ-Lexicase selection process, the individuals in the current subpopulation are evaluated using the specific dataset chosen for that iteration. Algorithm <ref>, named Iterative Numerical Evaluation (INE), implements this evaluation: it predicts the glucose values for each time step after a meal given a candidate function f_j and a dataset D_k, and proceeds as follows. * Initialize the predicted glucose value at the starting time (t_0) as the actual glucose value from the dataset (G_k(t_0)). * Iterate through each time step (t_i) from 0 to the total number of points in the dataset minus one. * Update the predicted glucose value at the next time step (t_i+1) by adding the function f_j applied to the variables x_k(t_i). * Calculate the fitness F_j of individual j for dataset k by comparing the predicted glucose values (Ĝ) with the actual glucose values (G_k) using the Root Mean Square Error (RMSE) of Equation <ref>. * Return the fitness F_j. Here G_k is the set of actual glucose values of D_k, Ĝ is the set of predicted values for the same points of D_k, x_k(t_i) is the set of variables of D_k at time i, and F_j is the fitness of individual j for D_k. RMSE = √(1/N∑_t=1^N(Ĝ_t-G_t)^2) In Equation <ref>, N represents the total number of time instances, Ĝ_t denotes the predicted glucose value at time t, and G_t the actual glucose value at time t. The resulting RMSE represents the fitness of that individual for D_k, indicating how well it performs on that specific dataset. This evaluation is repeated for each individual in the subpopulation of the current ϵ-Lexicase selection iteration, and over the datasets drawn during selection, to determine the overall fitness of each individual. By performing these numerical simulations, we can determine the ability of the DSGE-generated Δ G expressions to predict glucose dynamics accurately over time.
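The evaluation loop listed above can be sketched compactly as follows. The dictionary-based data layout and the toy candidate expression are illustrative assumptions; only the seeding with the measured glucose at t_0, the recursive 15-minute update and the RMSE fitness are taken from the description.

```python
import numpy as np

def iterative_numerical_evaluation(delta_g, dataset):
    """RMSE fitness of one candidate difference equation on one post-meal window.

    delta_g: callable(g, exog) returning the predicted change of glucose over one
             15-minute step; dataset holds the measured glucose values and the
             measured exogenous inputs (insulin, carbohydrates, heart rate, ...).
    """
    g_meas = dataset["glucose"]
    exog = dataset["exog"]
    g_pred = np.empty_like(g_meas)
    g_pred[0] = g_meas[0]                        # only the first glucose value comes from data
    for i in range(len(g_meas) - 1):
        g_pred[i + 1] = g_pred[i] + delta_g(g_pred[i], exog[i])   # recursive prediction
    return float(np.sqrt(np.mean((g_pred - g_meas) ** 2)))        # RMSE fitness

# Toy two-hour window: 9 samples at 15-minute resolution and 4 exogenous inputs.
rng = np.random.default_rng(2)
window = {"glucose": 120.0 + np.cumsum(rng.normal(2.0, 3.0, size=9)),
          "exog": rng.normal(size=(9, 4))}
candidate = lambda g, x: 2.0 + 0.5 * x[2] - 0.01 * g     # an arbitrary toy ΔG expression
print(round(iterative_numerical_evaluation(candidate, window), 2))
```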
§.§.§ Grammar Creating a grammar to define the search space is essential in DSGE. This involves establishing how to generate constants, which operations to use, which structures the expressions may take and which variables to incorporate. In Figure <ref>, we can see that the set of base functions used in the SINDy algorithm (subsection <ref>) is equivalent to the ISIGE grammar, but there is an important consideration here: due to the method used to evaluate the numerical expressions, it is only possible to calculate the values of the variables for one time step at each iteration. In Figure <ref>, we present the grammar used in this work in Backus-Naur Form (BNF), which is crucial for obtaining the desired FDEs. One of the essential elements of this grammar is the initial definition of the <func> expression, which establishes that all expressions must follow the form G + <expr>, ensuring that the resulting expressions are in FDE form. Another critical element is the <var> field, which includes the individual variables (excluding G, which has already been included), the first-order non-linearities and the second-order non-linearities. This approach allows us to mimic the set of base functions used in the SINDy algorithm. Using this grammar, we can effectively generate diverse FDEs that describe the system behaviour and make accurate predictions.
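As an illustration only, a grammar of the kind described above might be written as the following BNF fragment (embedded here in a Python string of the sort typically consumed by grammatical-evolution frameworks). The non-terminal names, the constant format and the exact variable list are our guesses; only the G + <expr> structure and the inclusion of first- and second-order terms are taken from the text.

```python
# A hedged sketch of an ISIGE-style grammar; the real grammar used in the paper
# may differ in its non-terminals, constants and operator set.
GRAMMAR_BNF = r"""
<func>  ::= G + <expr>
<expr>  ::= <expr> <op> <expr> | <cte> * <var> | <var>
<op>    ::= + | -
<var>   ::= B_I | I_B | F_ch | HR | C | S
          | G*B_I | G*I_B | G*F_ch | B_I*F_ch | I_B*F_ch
          | B_I**2 | I_B**2 | F_ch**2
<cte>   ::= <digit>.<digit><digit>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
"""
print(GRAMMAR_BNF)
```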
§ WORKFLOW Figure <ref> summarizes the workflow of the methodology applied in this work. It consists of several steps, further described in this section: data acquisition, data preprocessing, data partitioning, clustering, modelling, testing and results comparison. §.§ Data acquisition For this study, we used data from 24 participants of the Hospital Universitario Príncipe de Asturias, Madrid, Spain. Two different devices were used to obtain the raw data: a continuous glucose monitoring (CGM) system (Free Style Libre sensor) and an activity-monitoring wristband (Fitbit Ionic). The CGM measures interstitial glucose levels every 15 minutes, while the wristband records data on calories, steps and heart rate at different time frequencies. In addition, information about insulin and carbohydrate intakes was recorded by two different methods depending on the insulin administration mode. Participants wearing an automatic continuous insulin infusion system (insulin pump) obtain this information directly from the device (Medtronic or Roche systems). Participants under multiple doses of insulin (MDI) therapy recorded the information about basal insulin, insulin boluses and carbohydrate intakes using a mobile application. The studies involving human participants were reviewed and approved by the Hospital Universitario Príncipe de Asturias (Spain). The patients/participants provided their written informed consent to participate in this study. §.§ Data Preprocessing To use the recorded data for modelling the blood glucose level, several preprocessing steps had to be performed. This includes both cleaning and feature engineering. §.§.§ Features for the absorption of carbohydrates and insulin Since the dissolution of substances in the body occurs gradually, we preprocessed both the reported insulin bolus and carbohydrate values using two functions to spread the uptake over multiple observations: the Berger function (Equation <ref> and Figure <ref>) <cit.> and the Bateman function (Equation <ref> and Figure <ref>) <cit.>.
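As a hedged illustration of this preprocessing step, the sketch below distributes a reported dose over the following 15-minute observations with a generic Bateman-type (two-exponential) profile. The rate constants and the normalisation are illustrative assumptions; the exact Berger and Bateman parameterisations used in this work are those of the equations referenced above.

```python
import numpy as np

def bateman_profile(n_steps, dt_min=15.0, ka=0.025, ke=0.011):
    """Normalised Bateman-type absorption curve on an equidistant time grid.

    ka, ke: absorption and elimination rate constants in 1/min (illustrative values).
    Returns weights summing to 1, so a reported dose can be spread over the
    n_steps observations that follow it.
    """
    t = np.arange(1, n_steps + 1) * dt_min
    profile = ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
    return profile / profile.sum()

def spread_doses(doses, weights):
    """Convolve point-wise reported doses (e.g. carbohydrates in grams) with the profile."""
    return np.convolve(doses, weights)[: len(doses)]

doses = np.zeros(16)          # a 4-hour window at 15-minute resolution
doses[4] = 60.0               # 60 g of carbohydrates reported at minute 60
smoothed = spread_doses(doses, bateman_profile(8))
print(np.round(smoothed, 2))
```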
http://arxiv.org/abs/2307.01183v1
20230703174904
Extraction of the strong coupling with HERA and EIC inclusive data
[ "Salim Cerci", "Zuhal Seyma Demiroglu", "Abhay Deshpande", "Paul R. Newman", "Barak Schmookler", "Deniz Sunar Cerci", "Katarzyna Wichmann" ]
hep-ph
[ "hep-ph", "hep-ex", "hep-th", "nucl-ex", "nucl-th" ]
Extraction of the strong coupling with HERA and EIC inclusive data Salim Cerci^1, Zuhal Seyma Demiroglu^2,3, Abhay Deshpande^2,3,4, Paul R. Newman^5, Barak Schmookler^6, Deniz Sunar Cerci^1, Katarzyna Wichmann^7 ^1 Adiyaman University, Faculty of Arts and Sciences, Department of Physics, Turkiye ^2 Center for Frontiers in Nuclear Science, Stony Brook University, NY 11764, USA ^3 Stony Brook University, Stony Brook, NY 11794-3800, USA ^4 Brookhaven National Laboratory, Upton, NY 11973-5000, USA ^5 School of Physics and Astronomy, University of Birmingham, UK ^6 University of California, Riverside, Department of Physics and Astronomy, CA 92521, USA ^7 Deutsches Elektronen–Synchrotron DESY, Germany The sensitivity to the strong coupling α_S(M^2_Z) is investigated using existing Deep Inelastic Scattering data from HERA in combination with projected future measurements from the Electron Ion Collider (EIC) in a next-to-next-to-leading order QCD analysis. A potentially world-leading level of precision is achievable when combining simulated inclusive neutral current EIC data with inclusive charged and neutral current measurements from HERA, with or without the addition of HERA inclusive jet and dijet data. The result can be obtained with significantly less than one year of projected EIC data at the lower end of the EIC centre-of-mass energy range. Some questions remain over the magnitude of uncertainties due to missing higher orders in the theoretical framework. § INTRODUCTION Of the coupling strengths of the fundamental forces, the strong coupling α_S is by far the least well constrained. At the same time, it is an essential ingredient of Standard Model cross section calculations, as well as constraints on new physics and grand unification scenarios <cit.>. It has previously been measured in a wide range of processes <cit.>. In Deep Inelastic Scattering (DIS), recent studies from HERA have shown limited sensitivity when using only inclusive data <cit.>, but much more competitive precision when additionally including jet production cross sections <cit.>. In recent years, the advances in QCD theory from next-to-leading order (NLO) to next-to-next-to-leading order (NNLO) have resulted in a substantial reduction in the uncertainties on α_S extractions due to missing higher order corrections, usually expressed in terms of a QCD scale uncertainty, though they remain by far the largest single source of uncertainty in the best HERA extractions. The Electron Ion Collider (EIC) <cit.>, currently under preparation at Brookhaven National Laboratory in partnership with the Thomas Jefferson National Accelerator Facility, is expected to begin taking data around 2030. The EIC will collide highly polarised electrons with highly polarised protons and light/heavy nuclei. In ep mode, the expected luminosity is of order 10^33-10^34 cm^-2 s^-1 and the centre-of-mass energy √(s) will range from 29 GeV to 141 GeV. As part of the extensive program of EIC physics <cit.>, inclusive DIS cross sections will be measured to high precision in a phase space region that will be complementary to HERA, in particular improving the sensitivity to the large Bjorken-x kinematic region. In this work, the expected experimental uncertainty on the strong coupling at the scale of the Z-pole mass, α_S(M^2_Z), is estimated when adding simulated EIC inclusive data to analyses very similar to those performed on HERA data.
§ ANALYSIS METHOD §.§ Data samples The HERA data used in this analysis are the final combined H1 and ZEUS inclusive DIS neutral current (NC) and charged current (CC) cross sections <cit.> and, where appropriate, the H1 and ZEUS inclusive jet and dijet measurements used in a recent study of parton distribution function (PDF) sensitivity at NNLO, as summarised in Table 1 of <cit.>. The HERA cross sections correspond to unpolarised beam configurations at proton beam energies of 920, 820, 575 and 460 GeV and an electron beam energy of 27.5 GeV. The data correspond to an integrated luminosity of about 1 fb^-1 and span six orders of magnitude in the modulus of the four-momentum-transfer squared, Q^2, and in Bjorken x. The detailed experimental apparatus configurations for the EIC are currently under intense development. However, the broad requirements are well established, as documented for example in <cit.>. In this paper, the simulated EIC data are taken from the studies performed in the framework of the ATHENA detector proposal <cit.>. The ATHENA configuration has since been combined with ECCE <cit.> in the framework of a new and fast-evolving ePIC design. Whilst the details of the apparatus are different, the overall kinematic range and achievable precision are expected to be very similar. As summarised in Table <ref>, neutral current EIC simulated measurements (`pseudodata') are produced with integrated luminosities corresponding to expectations for one year of data taking with each of the five different beam configurations expected at the EIC. Charged current pseudodata are also available at the highest √(s). The neutral current pseudodata are produced in a grid of five logarithmically-spaced x and Q^2 values per decade over the range[Here, y is the usual inelasticity variable, y = Q^2/(sx).] 0.001 < y < 0.95, which is well-justified by the expected resolutions. The central values are taken from predictions using HERAPDF2.0NNLO <cit.>,[The variant with α_S(M^2_Z) set to 0.116 is taken, most closely matching the recent HERA NNLO estimation of 0.1156 <cit.>.] randomly smeared based on Gaussian distributions with standard deviations given by the projected uncertainties as estimated by the ATHENA collaboration and as previously used to study collinear PDF sensitivities in <cit.>. The systematic precision is based on experience from HERA and further considerations in <cit.> and is rather conservative in the context of the more modern detector technologies and larger data sets at the EIC. Most data points have a point-to-point uncorrelated systematic uncertainty of 1.9%, extending to 2.75% at the lowest y values. An additional normalisation uncertainty of 3.4% is ascribed, which is taken to be fully correlated between data at each √(s), and fully uncorrelated between data sets with different √(s). For the purposes of the QCD fits (section <ref>), the point-to-point systematic uncertainties are added in quadrature with the statistical uncertainties and the normalisation uncertainties are treated as nuisance parameters, as in <cit.>. The kinematic plane of the inclusive data used in this analysis is shown in Fig. <ref>. The EIC pseudodata overlap in their coverage with the HERA data and extend the kinematic reach in the high x, moderate Q^2 region. Their impact at large x is significant since the large x HERA data are relatively imprecise due to their kinematic correlation with large Q^2, the 1/Q^4 photon propagator term in the cross section and the limited integrated luminosity.
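To illustrate the smearing procedure described above, the following sketch generates one toy pseudodata set from a vector of predictions, applying a point-to-point uncorrelated uncertainty (1.9%, rising to 2.75% at the lowest y) added in quadrature with a statistical term, plus a single 3.4% normalisation shift. The placeholder predictions, the statistical model and the y threshold for the larger systematic are illustrative assumptions rather than the ATHENA prescription.

```python
import numpy as np

def make_pseudodata(prediction, y, rng, events_per_unit=1.0e4):
    """Smear predicted cross sections into one EIC-like pseudodata set.

    prediction: predicted (reduced) cross sections on the (x, Q^2) grid
    y:          inelasticity of each grid point
    events_per_unit: toy conversion from cross section to expected counts,
                     standing in for luminosity times acceptance
    """
    counts = np.maximum(events_per_unit * prediction, 1.0)
    stat = prediction / np.sqrt(counts)                        # statistical uncertainty
    sys_pt = np.where(y < 0.01, 0.0275, 0.019) * prediction    # 1.9%, 2.75% at lowest y (assumed cut)
    uncorr = np.hypot(stat, sys_pt)                            # added in quadrature in the fit
    norm = 1.0 + 0.034 * rng.normal()                          # one 3.4% normalisation shift per data set
    values = norm * prediction + uncorr * rng.normal(size=prediction.size)
    return values, uncorr

rng = np.random.default_rng(42)
pred = rng.uniform(0.2, 1.2, size=50)                 # placeholder for HERAPDF2.0NNLO predictions
y = 10 ** rng.uniform(-3, np.log10(0.95), size=50)
data, err = make_pseudodata(pred, y, rng)
print(data[:3].round(3), err[:3].round(3))
```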
§.§ Theoretical framework and Fitting Procedure The analysis is based on a QCD fit that follows the HERAPDF <cit.> theoretical framework, PDF parameterisations and model parameter choices. In the fit, the proton PDFs and α_S(M^2_Z) are constrained simultaneously in a χ^2-minimisation procedure in which the Q^2 evolution is performed according to the NNLO DGLAP evolution equations <cit.>. The xFitter framework <cit.> is used, with the light quark coefficient functions calculated to NNLO as implemented in QCDNUM <cit.>. The MINUIT program <cit.> is used for the minimisation. The general-mass variable-flavour-number scheme <cit.> is used for the contributions of heavy quarks. The renormalisation and factorisation scales are taken to be μ_r=μ_f=√(Q^2) for the inclusive DIS data, while μ^2_r=μ^2_f=Q^2 + p^2_T is used for inclusive jet data and μ^2_r=μ^2_f=Q^2 + <p_T>^2_2 for dijets, where <p_T>_2 is the average of the transverse momenta of the two jets. The charm and beauty quark masses (M_c, M_b) follow the choices in <cit.>. The minimum Q^2 of the inclusive data included in the fits is Q^2_min = 3.5 GeV^2. As well as avoiding complications associated with low Q^2, this requirement also reduces the possible influence of ln(1/x) resummation <cit.>. For the central fit, the PDFs are parameterised at a starting scale for QCD evolution of μ^2_f0 = 1.9 GeV^2, as in the HERAPDF2.0 fits <cit.>. The PDFs are parameterised at the starting scale in terms of the gluon distribution (xg), the valence quark distributions (xu_v, xd_v), and the u-type and d-type anti-quark distributions (xU̅, xD̅), where xU̅ = xu̅ corresponds to anti-up quarks only and xD̅ = xd̅ + xs̅ is the sum of anti-down and anti-strange quarks. Symmetry is assumed between the sea quarks and antiquarks for each flavour. Strange quarks are suppressed relative to light quarks through a factor f_s = 0.4, whereby xs̅ = f_s xD̅ for all x. The nominal parameterisation is xg(x) = A_g x^B_g (1-x)^C_g - A_g' x^B_g' (1-x)^25 ; xu_v(x) = A_u_v x^B_u_v (1-x)^C_u_v(1+E_u_vx^2 ) ; xd_v(x) = A_d_v x^B_d_v (1-x)^C_d_v ; xU̅(x) = A_U̅ x^B_U̅ (1-x)^C_U̅(1+D_U̅x) ; xD̅(x) = A_D̅ x^B_D̅ (1-x)^C_D̅ . The parameters A_u_v and A_d_v are fixed using the quark counting rules and A_g is fixed using the momentum sum rule. The requirement xu̅ = xd̅ is imposed as x → 0 through corresponding conditions on A_U̅, A_D̅, B_U̅, B_D̅ and f_s. There are thus a total of 14 PDF free parameters. The experimental, model, and parameterisation uncertainties on α_s(M^2_Z) are evaluated as described in <cit.>. The modelling uncertainties are obtained through variations of Q^2_min, f_s, M_c and M_b as shown in Table <ref>; the parameters are altered independently and the changes relative to the central value of α_s(M^2_Z) are added in quadrature. For the PDF parameterisation uncertainties, the procedure of <cit.> is followed, based on variations resulting from the addition of further D and E parameters to the expressions in Eqs. <ref> – <ref> and changes in the starting scale μ^2_f0 by ± 0.3 GeV^2. The fits are repeated with each of these variations and the largest difference relative to the nominal α_s(M^2_Z) is taken to be the uncertainty. The model and parameterisation uncertainties are added in quadrature in quoting the final results. For the jet data, the uncertainties in the hadronisation corrections are treated in the same manner as the experimental correlated systematic uncertainties.
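For illustration, the starting-scale parameterisation above can be coded directly. The parameter values in the example below are arbitrary placeholders (the sum-rule normalisations would in practice be computed, not set by hand), and only the functional forms are taken from the text.

```python
import numpy as np

def xg(x, A, B, C, Ap, Bp):
    """Gluon: A x^B (1-x)^C - A' x^B' (1-x)^25."""
    return A * x**B * (1 - x)**C - Ap * x**Bp * (1 - x)**25

def xuv(x, A, B, C, E):
    """Valence up: A x^B (1-x)^C (1 + E x^2)."""
    return A * x**B * (1 - x)**C * (1 + E * x**2)

def xdv(x, A, B, C):
    """Valence down: A x^B (1-x)^C."""
    return A * x**B * (1 - x)**C

def xUbar(x, A, B, C, D):
    """u-type sea: A x^B (1-x)^C (1 + D x)."""
    return A * x**B * (1 - x)**C * (1 + D * x)

def xDbar(x, A, B, C):
    """d-type sea: A x^B (1-x)^C, with xsbar = f_s * xDbar and f_s = 0.4."""
    return A * x**B * (1 - x)**C

x = np.logspace(-4, -0.05, 5)
print(xg(x, A=5.0, B=0.1, C=9.0, Ap=0.2, Bp=-0.1).round(3))
print(xuv(x, A=4.0, B=0.7, C=4.7, E=10.0).round(3), xdv(x, A=2.0, B=0.8, C=4.3).round(3))
```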
The influence on the extracted α_s(M_Z^2) of missing orders in the perturbation series beyond NNLO is estimated via a scale uncertainty, in which the renormalisation μ_r and factorisation μ_f scales are varied up and down by a factor of two. Combinations are considered in which μ_r and μ_f are changed together or separately, and the largest resulting positive and negative deviations on α_s(M_Z^2) (with the exclusion of the two extreme combinations of the scales) are taken as the scale uncertainty. As is currently customary in global QCD fits,[Scale variations are typically applied to all hadronic final state observables, including jet data from ep collisions.] no scale variations are made in the treatment of the inclusive data. This topic is further discussed in section <ref>. § RESULTS §.§ Fits with EIC inclusive data and HERA inclusive and jet data A simultaneous NNLO fit to extract the PDFs and α_s(M^2_Z) from HERA inclusive and jet data and EIC simulated inclusive data at all five √(s) values is performed as described in section <ref>. The result is α_s(M_Z^2) = 0.1161 ± 0.0003 (exp) ± 0.0001 (model+parameterisation) ^+0.0002_-0.0001 (scale). By construction of the EIC simulated data, α_s(M_Z^2) must be close to 0.116. As expected, the PDF parameters obtained in the fits are also fully compatible with those from the HERAPDF2.0 set. The uncertainties from the joint fit to HERA and EIC data can be compared with those from the HERAPDF2.0Jets NNLO result <cit.>: α_s(M_Z^2) = 0.1156 ± 0.0011 (exp) ^+0.0001_-0.0002 (model+parameterisation) ± 0.0029 (scale). The results and uncertainties with and without the inclusion of EIC data are shown in the form of a χ^2 scan as a function of α_s(M_Z^2) in Fig. <ref>. Each point in the figure corresponds to a full QCD fit, with all 14 PDF parameters free and a fixed strong coupling value. The result without EIC data corresponds exactly to the most recent HERA result <cit.>. Adding the simulated inclusive EIC data leads to a remarkable improvement in the estimated experimental and scale uncertainties. The improvement in experimental precision is attributable to the addition of precise EIC pseudodata in the large x, moderate Q^2 region, complementing the kinematic coverage of the HERA data. The additional phase space coverage leads to improved precision on the Q^2 dependence of the inclusive cross section, corresponding to the logarithmic derivative of the inclusive structure function d F_2 / dln Q^2. The overlap of HERA and EIC data at the same x and Q^2 but different √(s) is sensitive to the longitudinal structure function F_L <cit.>. Both of these quantities are proportional at lowest order to the product of the gluon parton distribution and α_s(M^2_Z) <cit.>. The scale uncertainty is reduced to a level similar to the combined model and parameterisation uncertainties and becomes smaller than the experimental uncertainty. This is a consequence of the reduced dependence of the fit on the jet data. The scale uncertainty is not yet evaluated for the inclusive data, as further discussed in section <ref>. §.§ Fits with EIC and HERA inclusive data only The very significant impact of the EIC inclusive data on the α_s(M_Z^2) precision naturally raises the question of whether a similar result can be obtained without the HERA jet data, i.e. using only inclusive DIS measurements. A further question of interest is how important a role is played by the multiple √(s) values available at the EIC.
Correspondingly, further fits are performed to the following inclusive data sets with the fit procedures otherwise unchanged: * HERA inclusive data only, as already published in the HERAPDF2.0 paper <cit.>; * HERA inclusive data and the EIC simulated inclusive data described in <ref>, including all five different √(s) values in Table <ref>; * HERA inclusive data and the EIC simulated inclusive data, separately for each of the five √(s) values. Figure <ref> shows the results of this investigation. The fits to HERA data alone show only a limited dependence of the fit χ^2 on the strong coupling α_s(M^2_Z), corresponding to a relatively poor constraint <cit.>. In contrast, the χ^2 minimum around α_s(M^2_Z) = 0.116 is very well pronounced for fits that additionally include EIC data. Although the best result is obtained when including all EIC √(s) values together, the precision degrades only slightly when restricting the EIC data to a single EIC √(s) value. In the latter case, the precision improves as the √(s) value of the chosen EIC data decreases. The two lowest √(s) values, corresponding to E_e × E_p = 5 × 100 GeV and 5 × 41 GeV, are shown in Fig. <ref>. The strong coupling extracted from the simultaneous fit for the PDFs and α_s(M^2_Z), using the full set of EIC pseudodata together with the HERA inclusive data, is α_s(M_Z^2) = 0.1161 ± 0.0003 (exp) ± 0.0001 (model + parameterisation), corresponding to a total precision of better than 0.3%. As discussed in section <ref>, no scale uncertainty is quoted here. It is expected to be significantly reduced in a fit to inclusive data only relative to the result quoted in section <ref>. Section <ref> contains a discussion of possible ways of estimating the scale uncertainties in this case. The fit using inclusive data only is further extended to investigate the influence of the integrated luminosity of the EIC data on the α_s(M_Z^2) precision. The statistical uncertainties of the EIC data are scaled such that the pseudodata samples at each beam energy correspond to 1 fb^-1, approximately matching the integrated luminosity of the HERA data. This results in only a small change compared with the results shown in Fig. <ref>. The experimental uncertainty in the fits including only a single EIC data set at one of the two lowest √(s) values is approximately ± 0.0006, or 0.5%, only about a factor of two larger than that obtained when including the full expected EIC data at all √(s) values, and significantly better than the present DIS measurements. Given that the earliest EIC data are expected to be at these low √(s) values, this result is obtainable after significantly less than one year of EIC data taking. §.§ Robustness Checks The robustness of the extracted α_s(M^2_Z) and PDF results and their uncertainties is tested by varying the details of the fits in a number of ways. The lowest Q^2 data are most likely to be influenced by missing higher orders, higher twist effects and ln(1/x) resummation effects <cit.>. To check that the precision is not dramatically altered by excluding these data, the analysis is repeated with the Q^2_min cut increased from 3.5 GeV^2 to 10 GeV^2 or 20 GeV^2. The distinct minima shown in Figures <ref> and <ref> are still observed, with only a small dependence on Q^2_min (below 0.1%). To check for a possible bias from the data simulation procedure, the HERA data were replaced with pseudodata obtained using the same method as for the EIC samples. The α_s(M^2_Z) scan using the HERA pseudodata alone (Fig.
<ref>) closely follows that of the real HERA data, with no distinct minimum observed. The established technique for including correlated systematic uncertainties in global QCD fits treats each source of correlated uncertainty separately, whereas the EIC estimate is in terms of only a single normalisation uncertainty for each √(s), corresponding to the sum of all such sources. Studies are therefore conducted in which the correlated EIC systematic uncertainty is decomposed into the separate sources, following Table 10.5 in the EIC Yellow Report <cit.>. The changes to the results are negligible. To further test the influence of the EIC systematic uncertainty assumptions, the fit is repeated with the correlated systematic uncertainties increased by a factor of two. The uncertainty on the extracted α_s(M^2_Z) is barely influenced. Conversely, if the uncorrelated systematic uncertainties are increased by a factor of two, the uncertainty on α_s(M^2_Z) increases to around 0.9%. The precision on α_s(M^2_Z) is thus most sensitive to the uncorrelated systematic uncertainties. §.§ Discussion The precision on α_s(M_Z^2) obtained in the fits using only inclusive HERA and EIC data, and also additionally using HERA jet data, is compared in Fig. <ref> with results from previous DIS experiments and extractions using a wide range of other processes. The world average of experimental measurements according to the Particle Data Group (PDG) <cit.> and an average from lattice QCD calculations <cit.> are also shown. The projected results from the current analyses show a level of precision that is significantly better than both the world average and the lattice average. This very encouraging result is subject to the caveat that no uncertainty has been included due to missing higher orders beyond NNLO in the QCD analysis. Most commonly, analyses that are sensitive to strong interactions include estimates of the missing higher order uncertainty (MHOU) in the perturbative QCD framework used through the variation of the renormalisation and factorisation scales. As in many other dedicated and global analyses, the approach used for the jet data included here is to obtain a scale uncertainty by varying the scales by factors of two. However, in the global QCD fits to extract PDFs, MHOUs have routinely not been included in the treatment of inclusive DIS data, since they are expected to be relatively small in comparison to other PDF uncertainties. This is particularly likely to be the case for an analysis using a perturbative QCD treatment at NNLO. However, the MHOU associated with inclusive data must clearly be finite and cannot be ignored completely at the very high level of precision suggested in the present analysis. First studies of their influence on PDFs have been performed by the NNPDF collaboration <cit.>, though the impact on the strong coupling was not included. There is as yet no consensus on how to estimate MHOUs for inclusive DIS. Some discussion of possible methods is supplied in the following. In a previous analysis at NLO accuracy <cit.>, the H1 collaboration made a combined fit of inclusive-only DIS data from HERA and from the fixed-target BCDMS experiment. The strong coupling was found to be well constrained, with the BCDMS data playing a similar role to the EIC pseudodata here. An MHOU was obtained by varying the factorisation and renormalisation scales in the usual way, resulting in a large uncertainty at the level of 4%.
However, as shown in the context of PDF uncertainties in <cit.> (Appendix B), applying this method in an NLO analysis results in an estimate of the MHOU that is larger than the difference between NLO and NNLO results by a factor as large as 20-50. A similarly conservative approach might be to fit pseudodata simulated using QCD evolution at NNLO using an NLO framework and vice versa, taking the MHOU on α_s(M^2_Z) to be the deviation of the extracted α_s(M^2_Z) from the input. Applying this method to the present analysis also results in an uncertainty of around 4%, but this is also likely to be a very significant over-estimate. A potentially promising approach is suggested by the NNPDF group <cit.>. First, a theory covariance matrix is computed, typically using scale variations to include missing higher order uncertainties. Including the covariance matrix explicitly in the PDF fit ensures that the theory uncertainties propagate properly, including those associated with α_s(M^2_Z) if it is a fit parameter. However, until a consensus around a well-developed method for including inclusive data such as this emerges, the MHOU in the present α_s(M^2_Z) extraction remains to be evaluated. § CONCLUSIONS AND OUTLOOK This work shows that the strong coupling can be determined with potentially world-leading precision in a simultaneous fit of PDFs and α_s(M^2_Z) at NNLO in perturbative QCD, using only inclusive DIS data from HERA and simulated data from the EIC. Data from both colliders are essential in obtaining this result. The estimated total uncertainty on the strong coupling when including one year's data at each of the five expected EIC √(s) values is better than 0.3%, substantially improving on the precision of the present world experimental and lattice averages. If the EIC pseudodata are restricted to a small fraction of a standard expected year of running at one of the two lowest centre-of-mass energies, as expected in the earliest phase of operation, the estimated total uncertainty remains at the level of 0.5%. It still remains to assign a meaningful uncertainty due to missing higher order contributions beyond NNLO in the theory. Further improvements of the α_s(M^2_Z) precision may be obtainable by adding inclusive jet and dijet EIC data to the QCD analysis, for example using theory grids for the EIC energies in the fastNLO framework <cit.>. Other observables carrying information on the strong coupling include event shapes, jet substructure and jet radius parameters. In the time before the start of the EIC, it is hoped that new light will be shed on the issue of higher order uncertainties, leading to a consensus on how they should be treated in α_s(M^2_Z) determinations relying on EIC data. § ACKNOWLEDGEMENTS We are grateful to many colleagues in the EIC experimental community who have worked on all aspects of the project over many years and are now developing detector concepts towards the acquisition of real data similar to those simulated here. We thank numerous theoretical physics colleagues for very valuable discussions about the theory uncertainties: Néstor Armesto, Andrea Barontini, Thomas Cridge, Stefano Forte, Lucian Harland-Lang, Anna M. Staśto and Robert S. Thorne, as well as Valerio Bertone and Francesco Giuli for their help with the APFEL program and Christopher Schwan for his help with the PineAPPL tool. The work of Z.S.D. and A.D. was supported in part by the U.S. Department of Energy and the Simons Foundation.
http://arxiv.org/abs/2307.02647v1
20230705203845
(Semi)automated disambiguation of scholarly repositories
[ "Miriam Baglioni", "Andrea Mannocci", "Gina Pavone", "Michele De Bonis", "Paolo Manghi" ]
cs.DL
[ "cs.DL" ]
2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). IRCDL'23: The Conference on Information and Research science Connecting to Digital and Library science 2023 (Formerly the Italian Research Conference on Digital Libraries) 23-24 February 2023 - Bari, Italy [1] [1]All authors contributed equally. 1]Miriam Baglioni[ orcid=0000-0002-2273-9004, [email protected] ] 1]Andrea Mannocci[ orcid=0000-0002-5193-7851, [email protected] ] [1] 1]Gina Pavone[ orcid=0000-0003-0087-2151, [email protected] ] 1]Michele De Bonis[ orcid=0000-0003-2347-6012, [email protected] ] 1,2]Paolo Manghi[ orcid=0000-0001-7291-3210, [email protected] ] [1]CNR-ISTI – National Research Council, Institute of Information Science and Technologies “Alessandro Faedo”, 56124 Pisa, Italy [2]OpenAIRE AMKE, Athens, Greece [1]Corresponding author. The full exploitation of scholarly repositories is pivotal in modern Open Science, and scholarly repository registries are kingpins in enabling researchers and research infrastructures to list and search for suitable repositories. However, since multiple registries exist, repository managers are keen on registering multiple times the repositories they manage to maximise their traction and visibility across different research communities, disciplines, and applications. These multiple registrations ultimately lead to information fragmentation and redundancy on the one hand and, on the other, force registries' users to juggle multiple registries, profiles and identifiers describing the same repository. Such problems are known to registries, which claim equivalence between repository profiles whenever possible by cross-referencing their identifiers across different registries. However, as we will see, this “claim set” is far from complete and, therefore, many replicas slip under the radar, possibly creating problems downstream. In this work, we combine such claims to create duplicate sets and extend them with the results of an automated clustering algorithm run over repository metadata descriptions. Then we manually validate our results to produce an “as accurate as possible” de-duplicated dataset of scholarly repositories. Scholarly Registries Scholarly Repositories De-duplication Open Science (Semi)automated disambiguation of scholarly repositories [ August 1, 2023 ======================================================== § INTRODUCTION Scholarly repositories are essential for Open Science practice, as they enable access to research products and grant their long-term preservation. Besides, they play a crucial role in improving the visibility, discoverability and reuse of research products <cit.>. Given the ever-increasing number of repositories over the years, there is a growing need for scholarly repository registries providing repositories with an identity to enable non-ambiguous reference, support provenance tracking and impact assessment. Moreover, registries hold authoritative descriptive profiles of repositories intended to foster their discoverability. To this end, specialised scholarly repository registries have been set up <cit.> in order to store a broad range of information about registered repositories, such as the type of content, discipline and subjects, access rights, and licensing information of the resources they store. 
As a plurality of registries does exist and serves different scientific domains, communities and use cases, repository managers are incentivised to register their repository in more than one registry to boost their online presence and traction at the expense of information maintainability and up-to-dateness. Such a variety of scholarly repository registries provides a full-spectrum overview across scientific disciplines and research applications and becomes a rich asset for scholarly registries' users downstream, such as researchers, scholarly service providers, and Open Science infrastructures listing and aggregating their content. However, the availability of multiple registries, repository identifiers and profiles poses non-trivial questions and challenges regarding their authoritativeness, disambiguation, and coverage. In particular, such fragmentation quickly becomes a drawback in terms of information redundancy and scattering, as multiple registrations produce different identifiers for the same repository, arbitrarily used across the scholarly communication track record, and may lead to potential information inconsistencies across registries. Unsurprisingly, registry managers are aware of such drawbacks and claim equivalence of repository profiles whenever possible by cross-referencing their identifiers and PIDs across different registries. As we will see, however, this “claim set” is far from complete. Nevertheless, this is the only “ground knowledge” at our disposal; hence, we consider it valid without further challenge. In this work, we conflate such claims to infer the duplicate sets of different profiles about the same repository across the registries. Then, we further extend such duplicate sets by integrating them with the clusters obtained by an automated clustering algorithm run over repository profiles. Finally, we deliver the dataset of repository duplicates, taking care of manually validating all the cases where the de-duplication contributed to integrating the claims provided by the registries, and we draw some conclusions. § RELATED WORK The many scholarly communication services and academic search systems spun in the last decade have grown in parallel with a different, often siloed, mindset to target a broad and diverse range of use cases and application contexts <cit.>. Consequently, despite modelling very similar aspects of the academic domain (often the same ones), the respective data models ended up being quite distant in order to capture the different peculiarities at hand. Scholarly registries are no exception as they have often been developed to target diverse research communities, academic disciplines and research applications, as can be easily derived by inspecting the documented data models and schemas, as documented in Section <ref>. Intuitively, talking about their content comparison and interconnection is paramount to ensure their consistency and pave the way towards full interoperability and information exchange across scholarly registries. To the best of our knowledge, no prior study analysed and compared the content of publicly available scholarly registries and highlighted their inherent ambiguity. Therefore, in this work, we address the disambiguation of scholarly repositories across scholarly repository registries, which are at the centre of the present study. 
To date, the only tangible trace towards this direction can be found in the claims provided in some cases by scholarly registries to cross-reference other repository registration across other registries. As we will see further into our analysis, these efforts are, however, not enough, and the “claim set” provided by the registries is far from ideal, as many subsequent registrations of the same research repositories went so far unnoticed. § DATA AND METHODS For this study, we selected four prominent scholarly repository registries, namely FAIRsharing[FAIRsharing – <https://fairsharing.org>] <cit.>, re3data[re3data registry – <https://re3data.org>] <cit.>, OpenDOAR[OpenDOAR registry – <https://v2.sherpa.ac.uk/opendoar>], and ROAR[ROAR registry – <http://roar.eprints.org/information.html>], whose details are summarised in Table <ref> and the following. FAIRsharing Hosted at the University of Oxford in the UK and launched in 2011, FAIRsharing <cit.> is a web-based, searchable portal of three interlinked registries, containing both in-house and crowdsourced, manually curated descriptions of standards, databases (including repositories and knowledge bases) and data policies, which are persistently identifiable via DOIs. FAIRsharing maps the landscape of these three resources, monitoring their relationships, development, evolution and integration, such as the implementation and use of standards in databases or their adoption in data policies by funders, journals and other organisations. FAIRsharing is also an endorsed output of the RDA FAIRsharing WG[RDA FAIRsharing WG – <https://www.rd-alliance.org/group/fairsharing-registry-connecting-data-policies-standards-databases.html>], and its management combines a community-driven approach where the internal curators are supported by the maintainers of the resources themselves, which get credited via their ORCID. As of February 2022, FAIRsharing has over 3,600 resources, of which 1,853 are databases, i.e., scholarly repositories of interest for this analysis. The content, licenced under a CC-BY-SA licence[FAIRsharing licence – <https://fairsharing.org/licence>], can be browsed by (among other fields) registry type, discipline and country, and a live statistics page provides several at-glance-views of the landscape[FAIRsharing stats – <https://fairsharing.org/summary-statistics>]. re3data re3data[re3data registry – <https://re3data.org>] <cit.> is a global registry of research data repositories from all academic disciplines. Since its launch in 2012, this registry has been funded by the German Research Foundation[German Research Foundation (DFG) – <https://dfg.de>] (DFG). The urgency of avoiding duplication of effort and serving the research community with a single, sustainable registry is mentioned on the “About” web page of the website registry, referring to the merger between re3data and Databib in 2013. Under the project “re3data.org – Community Driven Open Reference for Research Data Repositories (COREF)”, the registry provides “customisable and extendable core repository descriptions that are persistently identifiable and can be referred to and cited appropriately. This includes unique identification in machine-to-machine communication”[re3data mission statement – <https://www.re3data.org/about>]. re3data was created to meet the need for a resource specifically dedicated to data repositories <cit.>. 
In fact, the already existing registries OpenDOAR and ROAR mainly focused on repositories for scholarly publications and only hosted a residual share of data repositories. Furthermore, there was a need for a more detailed description of each research data repository, e.g., containing precise information on access and reuse conditions. The registry can be browsed by content type, discipline or country. For convenience, we retrieved the data via the OpenAIRE project[OpenAIRE – <https://www.openaire.eu>] as it already integrates the registry. As of February 2022, it contains 2,793 data repository profiles; the registry content is released under a CC-BY licence. The collected data follow the re3data schema version 2.2[re3data.org 2.2 schema – <https://www.re3data.org/schema/2-2>]. On August 2021, re3data delivered a new schema version[re3data 3.1 schema – <https://www.re3data.org/schema>], and we used this one for the crosswalk. OpenDOAR OpenDOAR[OpenDOAR registry – <https://v2.sherpa.ac.uk/opendoar>] is a directory listing only Open Access repositories. The service was launched in 2005 as a result of a collaboration between the University of Nottingham and Lund University, funded by OSI, Jisc, SPARC Europe and CURL. The listed repositories are grouped into five types (Undetermined, Institutional, Disciplinary, Aggregating, Governmental), which can be browsed by type of content, software, countries and regions[OpenDOAR advanced browsing – <https://v2.sherpa.ac.uk/cgi/search/repository/advanced>]. To be included in OpenDOAR, repositories must meet the inclusion criteria, and the submission has to be accepted by the curators' team. The submission request to OpenDOAR is carried out in two parts: first, the application is sent by filling in a form with basic information. If the admission criteria are met, further information is requested. We retrieved the data via OpenAIRE as it natively integrates the registry content. As of Feb 2022, OpenDOAR lists 5,811 repositories worldwide; the content of the registry is redistributed under a CC-BY-NC-ND licence. The collected data follow the schema accessible through their website[OpenDOAR schema – <https://v2.sherpa.ac.uk/api/metadata-schema.html>]. ROAR The Registry of Open Access Repositories (ROAR) is hosted at the University of Southampton, UK, and it is funded by the Jisc[ROAR registry – <http://roar.eprints.org/information.html>]. As declared on the project's web page, its aim is “to promote the development of Open Access by providing timely information about the growth and status of repositories throughout the world”. A registered account is needed to add a new repository, and the submission will be reviewed and eventually accepted or rejected with editorial comments[ROAR registration – <http://roar.eprints.org/cgi/roar_register>]. The data have been downloaded directly from the site, choosing the option Multiline CSV. The CSV data formatting is consistent with the schema available online[ROAR schema – <http://roar.eprints.org/cgi/schema>]. As of February 2022, it contains 5,444 data repository profiles. The registry's content can be browsed by country, year, repository type, institutional association, repository software[ROAR stats – <http://roar.eprints.org/view>] is redistributed under a CC-BY licence. As mentioned, in some cases, three out of four registries provide claims (sometimes provided by users), establishing “same-as” equivalences among repository profiles registered across registries. 
However, the claim set is far from complete, as FAIRsharing provides claims to re3data only, ROAR provides claims to OpenDOAR only, and re3data provides claims to all the other three (often not at the same time, though). Moreover, we empirically noticed that no registry provides claims addressing internal duplicates (see Section <ref> for more details). Finally, no assurance about their correctness nor the existence of missing claims is given. Yet, such claims are the only “ground knowledge” at our disposal; hence, we consider them valid without further challenge. In order to address such shortcomings and identify further repository duplicates, in the present work, we first use the claims provided by the three registries and conflate them to create duplicate sets whenever they regard different profiles of the same (allegedly) repository. Then, we further extend the duplicate sets by integrating them with the clusters obtained by an automated clustering algorithm run over repository profiles. Whenever a duplicate set intersects with one or more clusters, we try to extend it with new repository profiles provided by the clusters; otherwise, it is left untouched. Finally, all the clusters not intersecting with any duplicate set are promoted to duplicate sets. As the last step, a manual validation is performed for all the duplicate sets where the de-duplication is involved. All the duplicate sets obtained as such, together with the code and original data, can be found in our Gitea repository[Data and code – <https://code-repo.d4science.org/miriam.baglioni/Registries>]. For the sake of clarity, the methodology is represented in Figure <ref> and described in details in the following paragraphs. Conflating registry claims To conflate the claims, we arbitrarily start from FAIRsharing and programmatically explore, for each repository profile in FAIRsharing, the claims referring to repository profiles in re3data (e.g., 2114→r3d100010191)[Hereafter, we refer to repository profiles by indicating repository registration identifiers prefixed according to the involved registry (i.e., fs: – od: – rd: – rr:). Each example links to the profile in the relevant registry in its current version, which might differ from the one observed at the time of writing. The metadata as we collect them are provided for transparency and validation of the reported examples.] to seed the starting duplicate sets. Then, for any given duplicate set, we verify whether it exists a claim from the re3data profile pointing back to the starting FAIRsharing repository (e.g., 2114→r3d100010191→2114). If this is not the case, we note down the three profiles involved as a problematic duplicate set, which has to be controlled manually at a later stage (e.g., 3652→r3d100012729→ 1724), and we move forward. Next, we try to extend any given duplicate set by conflating the re3data profile claims towards OpenDOAR. A new profile from OpenDOAR is added to a duplicate set if and only if it is not already part of another; otherwise, the duplicate set is marked as problematic and will be controlled manually. Then the claims pointing from the re3data profile to ROAR are conflated, and, as for OpenDOAR, one profile from ROAR is added to the duplicate set only if it is not already part of another set; otherwise, the set will be controlled manually. If one profile from ROAR is added to a duplicate set, we consider its claims. 
Since ROAR provides claims only towards OpenDOAR, a profile from OpenDOAR could be added to the duplicate set if it is not already present in another set. Finally, for each re3data claim not already processed, we seed a new duplicate set and search for FAIRsharing, OpenDOAR, and ROAR claims as done before. Similarly, we check the ROAR repositories claims towards OpenDOAR that have not already been processed. In this case, if the OpenDOAR profile is present in another duplicate set, we extend the already formed duplicate set with the new ROAR profile[Multiple profiles from ROAR can be added to the same duplicate set because more than one ROAR repository can claim the same OpenDOAR one, e.g., 919 and 5425 both claim 1047]. De-duplication In order to automatically detect repository duplicates and cluster them, we opted for FDup <cit.>. FDup has been developed within the effort of the OpenAIRE-Advance[OpenAIRE-Advance – <https://cordis.europa.eu/project/id/777541>] and OpenAIRE-Nexus[OpenAIRE-Nexus – <https://cordis.europa.eu/project/id/101017452>] projects and is currently applied in the production of the OpenAIRE Research Graph <cit.> to de-duplicate metadata records of research products and organisations. FDup provides an efficient de-duplication framework capable of comparing the metadata records in a “big data” collection to identify the groups of equivalent ones, hereafter referred to as duplicate sets. To this aim, FDup is given a dataset consisting of all the repository profiles across the four registries, for which we selected a handful of relevant, common fields: original identifier and source registry for identification, repository name, and homepage URL. Then, FDup performs a two-phase processing: the candidate identification phase optimises the otherwise quadratic complexity of comparing all possible pairs of profiles by pre-fetching the pairs of profiles that are likely representing the same repository; the duplicate sets identification matches all pairs of candidate profiles to effectively validate their equivalence and finally generates the duplicates sets by grouping all identified matching pairs of profiles via their transitive closure. The outcome of this process is a graph, where two nodes (i.e., repository profiles) are connected by an edge whenever similar. The graph is then explored to identify its connected components, which are the subgraphs in which each pair of nodes is connected via a path of edges (i.e., duplicate sets). The outcome of this phase strongly depends on the threshold chosen for the similarity match. For our analysis, we chose a 0.9 threshold to increase the precision level of the duplicate sets containing equivalent repositories. § RESULT While ROAR and FAIRsharing provide just one-to-one claims, re3data can claim duplicates in more than one registry. From re3data, in fact, we get one set composed of all four registries (e.g., r3d100012322) and four sets composed of three registries out of four (e.g., r3d100012274, r3d100013438, r3d100013665, and r3d100013717). The remaining claims are one-to-one: 451 claims to FAIRsharing, 15 to OpenDOAR, and 4 to ROAR. After conflating the registries' claims, we get 3,548 duplicate sets, whose composition is reported in Figure <ref>. The 88.8% of these sets consist of two profiles. As expected, the majority of the duplicate sets are composed of coupled profiles from OpenDOAR and ROAR (74%), followed by FAIRsharing and re3data (25%). 
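The grouping logic just described, in which equivalences asserted either by registry claims or by FDup matches above the 0.9 similarity threshold are closed transitively into duplicate sets, can be sketched as follows. This is only an illustrative outline under our own assumptions (toy identifiers and names, a generic string-similarity function), not the actual FDup implementation.

from difflib import SequenceMatcher

def find(parent, x):
    # Path-compressing find for the union-find structure.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

# Toy profiles (identifier -> repository name); identifiers are illustrative, not real ones.
profiles = {
    "od:1234":        "Example University Research Repository",
    "rr:5678":        "Example University Repository (ePrints)",
    "rd:r3d10000xxx": "Example University Research Repository.",
}
claims = [("od:1234", "rr:5678")]            # cross-registry "same-as" claims

parent = {p: p for p in profiles}
for a, b in claims:                          # 1) conflate the registry claims
    union(parent, a, b)

ids = sorted(profiles)
for i in range(len(ids)):                    # 2) add pairs matched on metadata similarity
    for j in range(i + 1, len(ids)):
        sim = SequenceMatcher(None, profiles[ids[i]], profiles[ids[j]]).ratio()
        if sim >= 0.9:                       # high threshold favouring precision
            union(parent, ids[i], ids[j])

# 3) duplicate sets = connected components obtained by transitive closure
duplicate_sets = {}
for p in profiles:
    duplicate_sets.setdefault(find(parent, p), set()).add(p)
print(list(duplicate_sets.values()))         # here: one set containing all three profiles

In this toy example the claim links the OpenDOAR and ROAR profiles, while the similarity match adds the re3data profile, mirroring how the claim-based duplicate sets are extended by the clusters in the procedure above.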
The remaining involve duplicate sets where a re3data profile is paired with one in either ROAR or OpenDOAR. Sets composed of three profiles are 9.8% of the total, and the majority (98%) involve one profile in OpenDOAR and two in ROAR, suggesting the presence of duplicate repository registrations within ROAR. The remaining 2% comprehends profiles drawn from re3data, OpenDOAR, and ROAR. Finally, the last 1.4% are sets composed of four or five profiles. In all but one, ROAR profiles appear more than once, up to four times in one set with OpenDOAR. Also, one duplicate from OpenDOAR is present. Summing up, 13.45% of the repositories registered in ROAR, which provide a claim, can be considered duplicates (8.1% of the total number of repositories). The conflation produced six problematic duplicate sets, which involve only claims between FAIRsharing and re3data: * 3652→r3d100012729→1724. 1724 was deprecated and has been subsumed into 3652. The set is 3652 and r3d100012729. * 3340→r3d100010543→2107. 2117 was deprecated and has been replaced by 3340. The set is 3340 and r3d100010543. * r3d100010412→2424→r3d100011538. r3d100011538 is a duplicate of r3d100010412; the set can be considered right * r3d100011257→1730→r3d100012862. r3d100011257 has been merged into r3d100012862; the set is r3d100012862 and 1730 * r3d100011343→2163→r3d100000039. r3d100011343 do not belong to the same set, and it refers to a different organisation. The set is: 2163 and r3d100000039 * r3d100013223→2524→r3d100012397. r3d100013223 is a duplicate of r3d100012397; the set can be considered to be right After this manual validation, the checked claims are then used to extend the duplicate sets. From FDup automatic clustering, we get 2,230 clusters, whose composition is reported in Figure <ref>. Most of them (90.7%) involves two profiles. This time, inward duplicate profiles (i.e., within the same registry) are immediately visible: 5.4% of the clusters are composed of duplicates within ROAR, and 0.7% of duplicates within OpenDOAR. Also, one duplicate within re3data is present. As before, most of the remaining clusters (69.04%) are composed of ROAR and OpenDOAR profiles, then 21.02% of FAIRsharing and re3data profiles, and the last 3.3% involve a re3data profile with another one from OpenDOAR or ROAR. The 8.34% of the clusters comprise three profiles, 69.89% of which contain more than one profile from the same registry. Eight clusters are composed entirely of profiles within ROAR, which, together with OpenDOAR, is the registry with the highest number of multiple internal registrations for the same repository. The remaining 30.11% is composed of a combination of three registries out of four. The last 0.96% of the clusters are composed of four or five profiles, six of which have at least one profile for each repository, and only five have no duplicate from the same registry. The exploitation of clusters to extend the duplicate sets produced 208 unique sets[The extended duplicate sets are 239, but 31 have been extended via merging: more than one set match with the same cluster, e.g., (978, 976, 5221, 2328, 239, 241) via (976, 978) from de-duplication, and (241, 978) and (239, 2328, 5221, 976) from registries, thus producing one single set] (e.g., (4194, r3d100011201, 2560) extending (2560, r3d100011201)), 428 duplicate sets are obtained via de-duplication only, and 1,400 clusters overlap completely with the corresponding duplicate set, while 1,720 duplicate sets come from registry claims only. 
The 70.7% of the FDup-extended sets are obtained by adding one profile, 21.7% by adding two profiles, and the remaining by adding up to five profiles, getting two sets of cardinality 8. Please note that the extended sets may not be complete. As an example, consider (2373, 2115, 3755, 3591, 4562) and (2373, 3591, 4562, 4695) which are completely contained in the first set but 4695. The first one is obtained via the set (3591, 4562, 2373) and the cluster (4695, 3591) and the second via the sets (3591, 4562, 2373) and (3755, 2115), and the cluster (4562, 2115). This was because we had two different clusters for the same repository, which were registered with different names. Assuming that the claims from the registries are correct, we have manually validated all the duplicate sets coming from FDup only or extended via a cluster to verify their correctness. Those extended via registry claims are also checked to understand why the FDup could not cluster them correctly. To check the correctness of a given set, we considered the repository URLs and names as they were in our initial data. When both are the same, the repository is considered the same. When one of the two is considerably different, we inspected the URLs and registry content as a tie-break. Incidentally, some repositories are no longer in the registry (7 among those inspected in the repository). These repositories come from claims among registries (ROAR to OpenDOAR mainly). There are also 16 wrong duplicate sets (9 because of FDup being wrong, 7 because of incorrect claims in the registries), and 74 not working URLs. Furthermore, different duplicate sets can still refer to the same repository: de-duplication and registry claims do not have a common profile on whose basis our methodology can extend the duplicate set. Although we experimentally verified the presence of four of them, we cannot provide their exact number. § DISCUSSION On the one hand, the registry claims are a good starting point to identify multiple registrations of the same repository, but they are far from complete, and, most importantly, they are sometimes proved to be inaccurate. On the other hand, a systematic procedure to automatically extend this claim set can be of aid, but can lead to incomplete duplicate sets or even disjoint and wrong ones. The best of the two worlds can be achieved by combining the two approaches, but human intervention is still needed to obtain duplicate sets as accurately as possible; however, manual inspection is expensive and entails other known problems. Moreover, in some cases, time and effort do not suffice, as domain expert knowledge is required to determine whether two profiles can be considered the same (e.g., r3d100010553 and 1956 whose website URLs resolve to very different pages). Furthermore, some choices would also remain discretionary, for example, in the case of two repositories being registered with the same name and with URLs resolving, in one case to a list of collections or subsets of a repository, and in the other to one of these subsets. Unfortunately, there is no clear way to address such a conundrum without domain expert knowledge. In conclusion, this study and the results presented here outline a first-of-its-kind wake-up call to raise awareness about the inherent ambiguity residing in scholarly repository registries. 
While information scattering, consistency and duplication are not new in data quality literature, siloed, out-of-sync, scholarly repository registries can have a detrimental impact on the global scientific track record. Firstly and foremost, inconsistency and incompleteness of a registry content can negatively affect its image and directly hamper its uptake and reliability in the eyes of the research community(ies) of reference. Moreover, scholarly registries' downstream users, such as researchers, scholarly service providers, and Open Science infrastructures listing and aggregating their content, are exposed to potentially conflictual information, which has to be reconciled on an arbitrary (possibly manual) basis. The OpenAIRE infrastructure, for example, aggregates data sources (i.e., repositories) from a variety of registries, such as FAIRsharing, OpenDOAR, re3data, and CRIS Systems. When the same repository is ingested from multiple registries, OpenAIRE needs to reconcile the duplicates into one single record. One copy is elected as master (if possible) and is enriched with all the identifiers from the other copies. As it is not always possible to choose a master, multiple copies of the same data source can still appear in the OpenAIRE portal. The de-duplication of data sources within OpenAIRE mostly entails manual work, and the cross-references provided by the registries and the de-duplication have to be checked and enriched with other repositories that were not included to get duplicate sets as accurate as possible. Finally, as subsequent registrations yield multiple identifiers (e.g., a DOI) for the same repository, assessing metrics reflecting the usage, adoption rate, and impact of a repository across the academic community can be hindered if equivalence across different registrations cannot be precisely pinpointed. In our opinion, such problems can be solved mostly via an agreed-upon solution, paving the way towards the full support of interoperability across registries to enable a seamless exchange of the wealth of information contained therein. This work was partially funded by the EC H2020 OpenAIRE-Nexus (Grant agreement 101017452).
http://arxiv.org/abs/2307.00373v1
20230701160603
On the notion of polynomial reach: a statistical application
[ "Alejandro Cholaquidis", "Antonio Cuevas", "Leonardo Moreno" ]
math.ST
[ "math.ST", "cs.CG", "stat.TH" ]
On the notion of polynomial reach: a statistical application Alejandro Cholaquidis^1, Antonio Cuevas^2 and Leonardo Moreno^3 ^1 Centro de Matemática, Universidad de la República, Uruguay ^2 Departamento de Matemáticas, Universidad Autónoma de Madrid and Instituto de Ciencias Matemáticas ICMAT (CSIC-UAM-UCM-UC3M) ^3 Instituto de Estadística, Universidad de la República, Uruguay The volume function V(t) of a compact set S∈ℝ^d is just the Lebesgue measure of the set of points within a distance to S not larger than t. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least in a finite interval, under a quite intuitive, easy to interpret, sufficient condition (called “positive reach”) which can be seen as an extension of the notion of convexity. However, many other simple sets, not fulfilling the positive reach condition, have also a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of V(t) has some relevant consequences since the polynomial coefficients carry some useful geometric information. In particular, the constant term is the volume of S and the first order coefficient is the boundary measure (in Minkowski's sense). This paper is focused on sets whose volume function is polynomial on some interval starting at zero, whose length (that we call “polynomial reach”) might be unknown. Our main goal is to approximate such polynomial reach by statistical means, using only a large enough random sample of points inside S. The practical motivation is simple: when the value of the polynomial reach , or rather a lower bound for it, is approximately known, the polynomial coefficients can be estimated from the sample points by using standard methods in polynomial approximation. As a result, we get a quite general method to estimate the volume and boundary measure of the set, relying only on an inner sample of points and not requiring the use any smoothing parameter. This paper explores the theoretical and practical aspects of this idea. § INTRODUCTION Given a compact set S⊂ℝ^d and r>0, the r-parallel set of S, denoted here by B(S,r), is the set of all points in ℝ^d for which there is a point of S within a distance not larger than r; see below in this section for formal definitions. The “volume function” of S is then defined by V(r)=μ(B(S,r)), where μ denotes the Lebesgue measure on ℝ^d. As it happens, this function carries a lot of useful information on the geometry of S. In particular, V(0)=μ(S). Also, the limit, when it does exist, L(S)=lim_ϵ→ 0^+μ(B(S,ϵ)∖ S)/ϵ= lim_ϵ→ 0^+V(ϵ) - V(0)/ϵ , provides a natural way of defining the surface measure of S. The value L(S) is called the outer Minkowski content of ∂ S; see Ambrosio et al. (2008) for a detailed study of this and other notions of surface measure. In many important cases the function V is a polynomial of degree at most d on some interval [0,R]. The best known example is given by the so-called “sets of positive reach” introduced in a celebrated paper by <cit.>. The reach of a compact set S is the supremum of the values 𝐫 such that any point outside S has only one metric projection on S; see below for more details. Of course, when the volume function V of S is a polynomial, the constant term is V(0)=μ(S) and the coefficient of the first-order term is V'(0)=L(S). 
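As a concrete illustration (ours, not part of the numerical study reported later), the following short sketch estimates V(r) by Monte Carlo for the unit square S=[0,1]^2 and compares it with the corresponding polynomial V(r)=1+4r+π r^2, whose constant and first-order coefficients are exactly μ(S)=1 and L(S)=4.

import numpy as np

rng = np.random.default_rng(0)

def volume_parallel_unit_square(r, n_mc=200_000):
    """Monte Carlo estimate of V(r) = mu(B(S, r)) for S = [0, 1]^2."""
    # Sample uniformly on a box that contains the r-parallel set B(S, r).
    lo, hi = -r, 1.0 + r
    pts = rng.uniform(lo, hi, size=(n_mc, 2))
    # Distance from each point to S (zero for points inside the square).
    dist = np.linalg.norm(pts - np.clip(pts, 0.0, 1.0), axis=1)
    return np.mean(dist <= r) * (hi - lo) ** 2

for r in (0.1, 0.25, 0.5):
    mc = volume_parallel_unit_square(r)
    poly = 1.0 + 4.0 * r + np.pi * r ** 2      # mu(S) + L(S) r + pi r^2
    print(f"r = {r:.2f}:  Monte Carlo = {mc:.4f},  polynomial = {poly:.4f}")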
The purpose of this work The general aim of this work is to exploit the above-mentioned polynomial assumption for statistical purposes. We follow the lines of <cit.>, where it is shown that V(r) can be consistently estimated from a random sample of points inside S. Then, if we denote by V_n(r) an estimator of V based on a sample of size n, the coefficients of V(r) can be estimated by a minimal distance procedure, just approximating V_n by the closest polynomial of degree d on the interval [0,R] of validity of the polynomial assumption. The present paper addresses two topics in this framework. First and foremost, we consider (from both the theoretical and practical point of view) the estimation of the “polynomial reach” 𝐑, that is, the maximum value of R for which the polynomial assumption holds on [0,R]. This allows us, as an important by-product, to address the statistical estimation of L(S) and μ(S) from a random sample of points, which remains the main motivation of the whole study. To be more precise, if our main goal is to estimate the coefficients of the polynomial volume, we do not in fact need to estimate the polynomial reach 𝐑. An underestimation of this parameter would be enough (and safer than a possible overestimation, which might lead to an erroneous polynomial fit for the volume function). The word “reach” is used here by analogy with the ordinary, geometric notion of Federer's reach, mentioned above. Such an analogy is motivated by the fact that, as proved by Federer (1959), if the reach 𝐫 of a compact set is positive, then the volume function is a polynomial on [0,𝐫]. However, this sufficient condition is by no means necessary, since many simple sets with 𝐫=0 have a polynomial volume on some interval. So if we are just interested in the polynomial volume property we could perhaps focus on 𝐑 rather than 𝐫. We will comment on these statistical aspects in some more detail in Section <ref>. However, let us now advance that (a) we will assume that our sample information comes just from an inside sample on S, unlike other approaches that also require sample information outside S; see Section <ref> for details and references. (b) Our proposal does not require estimating the set S itself as a preliminary step. (c) The conditions imposed on S are lighter than others appearing in the literature. Some notation and preliminary definitions Given a set S⊂ℝ^d, we will denote by S and ∂ S the interior and boundary of S, respectively, with respect to the usual topology of ℝ^d. The parallel set of S of radius ε will be denoted as B(S,ε), that is, B(S,ε)={y∈ℝ^d: inf_x∈ S‖ y-x‖≤ε}. If A⊂ℝ^d is a Borel set, then μ_d(A) (sometimes just μ(A)) will denote its Lebesgue measure. We will denote by B(x,ε) the closed ball in ℝ^d, of radius ε, centred at x, and ω_d=μ_d(B(x,1)). Given two compact non-empty sets A, C⊂ℝ^d, the Hausdorff distance or Hausdorff-Pompeiu distance between A and C is defined by d_H(A,C)=inf{ε>0: A⊂ B(C,ε) and C⊂ B(A,ε)}. If I is an interval in ℝ, we denote by L^2(I) the space of real square-integrable functions defined on I, endowed with the usual norm, ‖ f‖_L^2(I)=√(∫_I f^2(s)ds). Let us denote by V(r)= μ(B(S,r)), for r≥ 0, the volume function of the set S. If ℵ_n={X_1,…,X_n} stands for a sample of points X_i on S, we will denote by V_n(r)=μ(B(ℵ_n,r)) the empirical volume function. Given r>0 and a closed interval I in [0,∞), we denote by P_n,ℓ^I and P_ℓ^I, respectively, the best approximations, by polynomials of degree at most ℓ≥ d, of V_n and V, with respect to the L^2 norm.
That is, if Π_ℓ(I) denotes the (closed) subspace of all polynomials of degree at most ℓ∈ℕ in L^2(I), P_n,ℓ^I=argmin_π∈Π_ℓ(I)‖ V_n-π‖_L^2(I), and P_ℓ^I=argmin_π∈Π_ℓ(I)‖ V-π‖_L^2(I). Following the notation in <cit.>, let Unp(S) be the set of points x∈ℝ^d with a unique metric projection on S. For x∈ S, let reach(S,x)=sup{r>0:B(x,r)⊂Unp(S)}. The reach of S is then defined by reach(S)=inf{reach(S,x):x ∈ S}, and S is said to be of positive reach if 𝐫:=reach(S)>0. A set S⊂ℝ^d is said to be standard with respect to a Borel measure ν at a point x if there exist λ>0 and δ>0 such that ν(B(x,ε)∩ S)≥δμ_d(B(x,ε)), for all 0<ε≤λ. A set S⊂ℝ^d is said to be standard if (<ref>) holds for all x∈ S. See <cit.> and references therein for examples of the usefulness of the standardness condition in set estimation. In addition to the outer Minkowski content (<ref>), an alternative way of measuring the boundary is the two-sided version, simply known as the Minkowski content, L_0(S)=lim_ϵ→ 0^+μ(B(∂ S,ϵ))/2ϵ. The relation of this notion with its one-sided version (<ref>) and with the, perhaps more popular, concept of (d-1)-dimensional Hausdorff measure ℋ^d-1(∂ S) (that will be mentioned below in the proof of Lemma 1) is analyzed in <cit.>. Organization of this work In the following section the notion of polynomial reach is formally introduced. Also, some perspective and motivation are given in order to show the usefulness of such a notion. The estimation of the polynomial reach from a random sample of points is considered in Section 3. Two methods are proposed: one of them is asymptotically consistent for the true value of the reach, but not that useful in practice; the other one provides an underestimation, which is enough for most practical purposes. Convergence rates for the estimation of the polynomial coefficients are derived in Section 4. Some numerical experiments are commented on in Section 5. Finally, a few conclusions are briefly commented on in Section 6, a few technical proofs are included in Appendix A and some tables with numerical outputs are provided in Appendix B. § SOME PERSPECTIVE AND MOTIVATION. THE NOTION OF POLYNOMIAL REACH As mentioned above, our aim here is to exploit a geometric idea to address some statistical problems in the setup of set estimation. The general purpose of this theory is to reconstruct a (compact) set S from the information provided by a random sample of points. A brief survey can be found in <cit.>. See, e.g., <cit.> and <cit.> for more recent references, including connections to manifold learning and other relevant topics. In many cases one is mostly interested in estimating a functional of S, typically the Lebesgue measure μ(S) or the outer Minkowski content (boundary measure) L(S) as defined in (<ref>). Such problems have been addressed in the literature via different strategies, which we next summarize. A) Plug-in approaches, based on a shape assumption on S. For example, if S is assumed to be convex, it would be quite natural to use the volume or the boundary measure of the convex hull of the sample ℵ_n={X_1,…,X_n} as an estimator of the values μ(S) and L(S), respectively. See, e.g., <cit.> for a recent reference on the plug-in estimation of μ(S) under convexity. In <cit.> and <cit.> the analogous plug-in estimation of L(S) and μ(S) under the wider assumption of r-convexity is considered. In this case, the plug-in estimators of L(S) and μ(S) would be L(S_n) and μ(S_n), S_n being the r-convex hull of the sample. B) Methods based on two samples.
In some cases, one may assume that one has two samples, one inside and the other outside S. This extra information might allow for estimators of L(S) essentially based on nearest neighbors ideas; see, for example, <cit.> and <cit.>. C) Indirect methods, based on auxiliary functions or formulas involving the surface area. This is the case of <cit.> or <cit.>. Also the results on the asymptotic distribution of the Hausdorff distance between a random sample and its support provided by <cit.> are of potential interest in this regard. The present paper fits in item C) of this list. More specifically, we follow the lines of <cit.>. However, whereas in that paper the interval of validity [0, R] of the polynomial assumption is assumed to be known, we consider here the non-trivial problem of estimating such an interval. The motivation for this is to use the polynomial character of V(t) in that interval to estimate the polynomial coefficients which, as commented above, have a relevant geometric interest. This can be seen as a sort of “algebraic counterpart” of the estimation of Federer's reach parameter defined above. Nevertheless, it is important to note that the assumption of positive reach (which is very relevant in many other respects) is not needed or used here at all. Of course, if we assume that S has a positive reach 𝐫>0, the polynomial volume assumption would be ensured on the interval [0,𝐫]. Therefore, any method to estimate Federer's reach 𝐫 (see, e.g., <cit.>) would be useful to exploit the polynomial volume assumption. The point is that there are many extremely simple sets for which Federer's reach is 0 and, still, the polynomial volume assumption does hold. These include a “pacman-type” set such as the closed unit disk in ℝ^2 excluding an open sector, the union of two disjoint squares, or a simple set such as [-1,1]^2∖ (-1/2,1/2)^2. In all these cases, and in many others, the following definition applies. A compact set S⊂ℝ^d is said to fulfil the polynomial volume property if there exist constants θ_0,…,θ_d∈ℝ and R>0 such that V(r)=θ_0+θ_1r+…+θ_dr^d, r∈[0,R]. When this condition holds, a natural strategy to follow is to estimate V(r) by its empirical counterpart V_n(r), as defined in the previous section, and, in turn, to approximate V_n(r) by a polynomial of degree d, whose coefficients θ_0n and θ_1n can be seen as estimators of μ(S) and L(S), respectively. The above definition leads in a natural way to the following notion of polynomial reach, which is the central concept of the present work. Given a compact set S⊂ℝ^d with volume function V, we will define the polynomial reach 𝐑 of S as 𝐑=sup{r≥ 0: V is a polynomial of degree at most d on [0,r]}. § CONSISTENT ESTIMATION OF THE POLYNOMIAL REACH The aim of this section, and in fact the main theoretical contribution of this work, is to show that, under some conditions, one can obtain either a consistent estimator (subsection <ref>) or, asymptotically, a lower bound (subsection <ref>) of the polynomial reach 𝐑. Note that, somewhat paradoxically, a lower bound might be even preferable and “safer” since, in order to use the polynomial volume assumption to estimate the relevant geometric quantities (area, perimeter, ...), all we need is an interval where the polynomial expression does hold. Thus, we do not in fact need the whole interval [0,𝐑]: we may more easily afford some underestimation of 𝐑 rather than an error by excess in the estimation of this quantity. Let us start with two technical lemmas with some independent interest, whose proofs are given in Appendix A.
Let S⊂ℝ^d be a compact set such that the right-hand derivative V^+(0) exists and is finite. Let ℵ_n={X_1,…,X_n} be an iid sample on S, b>0, and γ_n=d_H(ℵ_n,S). Then, with probability one, for n large enough, sup_s∈ [2γ_n,b]|V(s)-V_n(s)| ≤ Cmax_p∈∂ Sd(p,ℵ_n), where C=sup_s∈ [0,b] |V'(s)|, and ‖ V-V_n‖_L^2([0,2γ_n])≤ 2V(0)(2γ_n)^1/2. As a consequence, for any compact interval I=[0,b] we have that, with probability one, for n large enough, ‖ V-V_n‖_L^2(I)≤ 2√(2)V(0)γ_n^1/2+√(b)Cmax_p∈∂ Sd(p,ℵ_n). Let ℵ_n={X_1,…,X_n} be an iid sample on S drawn from a distribution P_X which is standard with respect to the Lebesgue measure. Assume further that S fulfils L_0(S)<∞. Then, with probability one, lim sup_n→∞(n/log(n))^1/dmax_p∈∂ Sd(p,ℵ_n)≤(2/δω_d)^1/d, δ being the standardness constant of P_X. §.§ A consistent estimator The main focus of this subsection is to show in Proposition <ref> that the polynomial reach can be estimated consistently from a sample. While this result has some conceptual interest, it suffers from some practical limitations: on the one hand, the estimator we propose might have (as suggested by some numerical experiments) a rather poor accuracy except for very large sample sizes. On the other hand, it could provide in some cases an overestimation, which could entail some practical drawbacks, commented on at the beginning of the following subsection. For simplicity, let us denote by P_n^t:=P_n^[0,t] and P^t:=P^[0,t] the best polynomial approximations (in L^2) of degree at most d for V_n and V, respectively, on [0,t]. Define G_n(t)=‖ V_n-P_n^t‖_t=‖ V_n-P_n^t‖_L^2[0,t], and G(t)=‖ V-P^t‖_t:=‖ V-P^t‖_L^2[0,t]. Let S⊂ℝ^d be compact and ℵ_n={X_1,…,X_n} be an iid sample on S; let us denote γ_n=d_H(ℵ_n,S). Take a sequence ϵ_n>0 such that ϵ_n→ 0 and γ_n^1/2=o(ϵ_n) as n→∞, that is, ϵ_n/γ_n^1/2→∞. Then 𝐑̃=G_n^-1(ϵ_n)=inf{t: G_n(t)>ϵ_n} fulfills 𝐑̃→𝐑, with probability one. Let us denote by π^t the orthogonal L^2 projection onto the space of polynomials of degree at most d on [0,t]. Let b>0, and n large enough such that 2γ_n<b. Then, using the contractive property of the projections on convex sets in Hilbert spaces, we have sup_0<2γ_n<t<b‖ P_n^t-P^t‖_t=sup_0<2γ_n<t<b‖π^t(V_n)-π^t(V)‖_t≤sup_0<2γ_n<t<b‖ V_n-V‖_t ≤‖ V_n-V‖_2γ_n+ √((b-2γ_n))sup_0<2γ_n<t<bsup_s∈ [2γ_n,t] |V_n(s)-V(s)|. From (<ref>), ‖ V_n-V‖_2γ_n≤ 2V(0)(2γ_n)^1/2. Proceeding as we did to prove (<ref>), and using that max_p∈∂ Sd(p,ℵ_n)≤γ_n → 0 a.s., it follows that √((b-2γ_n))sup_0<2γ_n<t<bsup_s∈ [2γ_n,t] |V_n(s)-V(s)|→ 0 a.s. Then sup_0<2γ_n<t<b‖ P_n^t-P^t‖_t→ 0 a.s. as n→∞. From the triangle inequality we have that, with probability one, sup_0<2γ_n<t<b|‖ V_n-P_n^t‖_t- ‖ V-P^t‖_t| ≤sup_0<2γ_n<t<b( ‖ V_n-V‖_t+‖ P^t-P_n^t‖_t ) → 0, as n→∞. This proves that, with probability one, for all b>0, sup_s∈ [2γ_n,b] |G_n(s)-G(s)|→ 0 as n→∞. Let us first assume that 𝐑>0; then G(t)=0 for all 0≤ t≤𝐑. Then from (<ref>), sup_s∈ [2γ_n, 𝐑] G_n(s)<ϵ_n. From (<ref>), G_n(t)<2V(0)(2γ_n)^1/2 for all t∈ [0,2γ_n]. This together with (<ref>) proves that, with probability one, for n large enough, 𝐑̃= inf{t:G_n(t)>ϵ_n}≥𝐑. Observe that if 𝐑=+∞, (<ref>) proves that 𝐑̃→ +∞. Assume now that 0≤𝐑<∞; then G(t)>0 for all t> 𝐑. Let us fix δ> 𝐑 and n large enough to ensure ϵ_n<G(δ). With probability one, G_n(δ)→ G(δ) as n→∞; then, with probability one, for n large enough, 𝐑̃=inf{t:G_n(t)>ϵ_n}≤δ. Since this holds for all δ> 𝐑, it follows that, with probability one, lim sup_n→∞𝐑̃≤𝐑. This in particular proves that if 𝐑=0, then 𝐑̃→ 0 a.s. as n→∞. From (<ref>) and (<ref>) it follows (<ref>).
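To make the estimator concrete, the following sketch (our own illustration, not code from the paper) simulates a uniform sample on the unit square, estimates the empirical volume function V_n on a grid by Monte Carlo, uses the root-mean-square residual of a degree-d least-squares fit as a discrete proxy for G_n(t), and returns the grid version of R̃=inf{t: G_n(t)>ϵ_n}; the grid and the choice of ϵ_n are ad hoc.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
d, n = 2, 5000
sample = rng.uniform(0.0, 1.0, size=(n, d))      # aleph_n, drawn uniformly on S = [0, 1]^2
tree = cKDTree(sample)

def empirical_volume(r, n_mc=200_000, pad=0.6):
    """Monte Carlo estimate of V_n(r) = mu(B(aleph_n, r))."""
    pts = rng.uniform(-pad, 1.0 + pad, size=(n_mc, d))
    dist, _ = tree.query(pts)                    # distance to the nearest sample point
    return np.mean(dist <= r) * (1.0 + 2.0 * pad) ** d

r_grid = np.linspace(0.05, 0.6, 30)
Vn = np.array([empirical_volume(r) for r in r_grid])

def G_n(t, deg=d):
    """Discrete proxy for G_n(t): L^2([0, t]) residual of the best degree-d fit of V_n."""
    mask = r_grid <= t
    if mask.sum() < deg + 2:                     # not enough grid points to fit
        return 0.0
    coeffs = np.polyfit(r_grid[mask], Vn[mask], deg=deg)
    resid = Vn[mask] - np.polyval(coeffs, r_grid[mask])
    return float(np.sqrt(np.mean(resid ** 2) * t))

def reach_estimator(eps_n):
    """Grid version of R_tilde = inf{t : G_n(t) > eps_n}."""
    for t in r_grid:
        if G_n(t) > eps_n:
            return t
    return r_grid[-1]                            # no exceedance observed on the grid

print("R_tilde ~", reach_estimator(eps_n=0.01))
# The constant and linear coefficients of the degree-d fit on [0, R_tilde]
# then approximate mu(S) = 1 and L(S) = 4 for this example.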
§.§ A consistent algorithm for a lower bound of the polynomial reach Let us note that any consistent estimator (such as that proposed in the previous subsection) could provide, for finite sample sizes, an overestimation of the polynomial reach 𝐑. This might be particularly harmful in practice since, in the overestimation case, the polynomial volume condition for V is not fulfilled on the whole estimated interval, which might lead to a poor estimation of the polynomial coefficients. From this point of view it is safer to look for an underestimation of 𝐑. This is the purpose of the estimation algorithm we propose in this subsection. This algorithm (whose convergence will be proved below) works as follows. Inputs and notation. A sample ℵ_n={X_1,…,X_n} of points on the support S. An arbitrarily small η>0. A grid of K values 0<r_1<r_2<…<r_K. A sequence U_n=(log n/n)^{1/(2d)-η}. A positive integer value ℓ≥ d to be used as the degree of the approximating polynomials. Let us define I_i=[0,r_i] and J_i=[r_i,r_K] for all i=1,…,K-1, and let P_n,d^I_i (resp. P_n,ℓ^J_i) be the best polynomial approximation of V_n of degree d on the interval I_i (resp. of degree ℓ≥ d on J_i). Step 0. Put i=0. If ‖ V_n-P_n,d^I_1‖_L^2(I_1)>U_n then the output is 𝐑̂=0 and the algorithm stops. Otherwise, put i← i+1 and go to the following step. Step 1. Define for i=1,…,K-1 c_i=‖ V_n-P_n,d^I_i‖_L^2(I_i)/‖ V_n-P_n,ℓ^J_i‖_L^2(J_i). Output. If the algorithm does not stop at Step 0, let i be the first index such that c_i>1. The output of the algorithm is 𝐑̂=r_i-1. If there is no such i, we just define the output of the algorithm as 𝐑̂=r_K-1. The consistency of this algorithm is stated in the next result. It relies on two assumptions, denoted H1 and H2. In Theorem <ref> below we will show precise conditions under which H1 and H2 hold. Let 𝐑 be the polynomial reach of the compact set S⊂ℝ^d. Let us consider the above algorithm, based on a grid 0=r_0<r_1<r_2<…<r_K and applied to samples of size n from a random variable whose distribution has support S. Assume: H1. For some η>0, with probability one, for n large enough we have ‖ V_n-P_n^I_i‖_L^2(I_i)<U_n:= (log(n)/n)^{1/(2d)-η} for all i>0 such that I_i⊂[0,𝐑]. H2. For all ϵ>0, there exists a degree ℓ=ℓ(ϵ) large enough such that, with probability one, for all n, ‖ V_n-P_n,ℓ^J_i‖_L^2(J_i)<ϵ, for all i=1,…,K-1. Then, there exists ℓ large enough such that the output 𝐑̂ of the algorithm with these choices of ℓ and U_n fulfills (i) If 0≤𝐑<r_1 then 𝐑̂=0 eventually almost surely. If 𝐑≥ r_K then r_1≤𝐑̂≤ r_K-1 eventually almost surely. (ii) If r_i_0<𝐑<r_i_0+1 for some 0<i_0<K then 𝐑̂=r_i_0 eventually almost surely. (iii) If 𝐑=r_i_0 for some 1≤ i_0 < K then 𝐑̂≤ r_i_0; moreover, if 𝐑=r_i_0>r_1, then 𝐑̂∈{r_i_0-1,r_i_0}, eventually almost surely. (i) Assume first that 0≤𝐑<r_1; let us prove that the algorithm stops in Step 0. From the definition of 𝐑, V is not a polynomial of degree at most d on the closed interval I_1. Then, we have, with probability one, lim_n‖ P_n,d^I_1-V_n‖_L^2(I_1)=‖ P_d^I_1-V‖_L^2(I_1)>0. Indeed, ‖ P_n,d^I_1-P_d^I_1‖_L^2(I_1)→ 0 a.s., as a consequence of the continuity of the projections in a Hilbert space. On the other hand, we must have ‖ P_d^I_1-V‖_L^2(I_1)>0. To prove this, observe that V is a polynomial of degree d on [0,𝐑] but not on [0,r_1]. So we must have ‖ P_d^I_1-V‖_L^2(I_1)>0. Let n be large enough to guarantee that U_n<‖ P_d^I_1-V‖_L^2(I_1)/2.
We conclude (<ref>) and, in particular, that, with probability one, for n large enough ‖ P_n,d^I_1-V_n‖_L^2(I_1)>U_n, So, the algorithm will stop at step 0 eventually a.s., when 0≤𝐑<r_1 and then 0=𝐑≤𝐑 eventually almost surely as desired. Now, if 𝐑>r_K from assumption H1 we may ensure that, eventually, almost surely, the algorithm does not stop at the initial step. So, necessarily r_1≤𝐑̂≤ r_K-1. Before proving (ii), assume that r_i_0≤𝐑<r_i_0+1 for some i_0∈{1,…, K-2} and let us prove that there exists ℓ large enough such that c_i_0+1>1 with probability one for all n large enough. Reasoning as we did with I_1, we conclude that with probability one, for n large enough, ‖ P_n,d^I_i_0+1-V_n‖_L^2(I_i_0+1)>‖ P_d^I_i_0+1-V‖_L^2(I_i_0+1)/2>0. Thus, c_i_0+1 =‖ V_n-P_n,d^I_i_0+1‖_L^2(I_i_0+1)/‖ V_n-P_n,ℓ^J_i_0+1‖_L^2(J_i_0+1)≥‖ P_d^I_i_0+1-V‖_L^2(I_i_0+1)/2/‖ V_n-P_n,ℓ^J_i_0+1‖_L^2(J_i_0+1). By assumption H2 we can take ℓ large enough such that with probability one, for all n, ‖ V_n-P_n,ℓ^J_i_0+1‖_L^2(J_i_0+1)<‖ P_d^I_i_0+1-V‖_L^2(I_i_0+1)/2 so that c_i_0+1>1. This in particular proves that if 𝐑=r_i_0 for some 0<i_0 <K-1 then 𝐑≤ r_i_0. Let us now prove (ii). So, assume that r_i_0< 𝐑<r_i_0+1 for i_0∈{1,…,K-1}. Take ℓ fixed but large enough to guarantee that (<ref>) holds. Let us prove that c_i<1 for all 1≤ i≤ i_0. First observe that lim_n→∞‖ V_n-P_n,ℓ^[r_i_0,r_i_0+1]‖_L^2([r_i_0,r_i_0+1])=‖ V-P_ℓ^[r_i_0,r_i_0+1]‖_L^2([r_i_0,r_i_0+1]) a.s. Let us prove that V is not a polynomial of degree at most ℓ in [r_i_0,r_i_0+1]. Assume by contradiction that V(t)=∑_i=0^ℓ a_it^i for some a_0,…,a_ℓ and t∈ [r_i_0,r_i_0+1]. Since r_i_0<𝐑<r_i_0+1 V is a polynomial of degree at most d in [r_i_0, 𝐑] and then a_d+1=… = a_ℓ =0. But then V is a polynomial of degree at most d in [0,r_i_0+1] from where it follows that 𝐑≥ r_i_0+1 which is a contradiction. Since V is not a polynomial of degree at most ℓ in [r_i_0,r_i_0+1] ‖ V-P_ℓ^[r_i_0,r_i_0+1]‖_L^2([r_i_0,r_i_0+1])>0. From (<ref>) it follows that, with probability one for n large enough, ‖ V_n-P_n,ℓ^[r_i_0,r_i_0+1]‖_L^2([r_i_0,r_i_0+1])>‖ V-P_ℓ^[r_i_0,r_i_0+1]‖_L^2([r_i_0,r_i_0+1])/2>0 but ‖ V_n-P_n,ℓ^J_i‖_L^2(J_i)≥‖ V_n-P_n,ℓ^[r_i_0,r_i_0+1]‖_L^2([r_i_0,r_i_0+1])>‖ V-P_ℓ^[r_i_0,r_i_0+1]‖_L^2([r_i_0,r_i_0+1])/2>0. Since ‖ V_n-P_n,d^I_i_0+1‖_L^2(I_i_0+1)→ 0 it follows from (<ref>) that c_i→ 0 for all 1≤ i≤ i_0 because the right-hand side of (<ref>) does not depend on n and ℓ is fixed, in particular c_i<1 for all i≤ i_0< K. This in particular proves that if r_K-1<𝐑<r_K then 𝐑̂=r_K-1. Since we have proved that c_i_0+1>1 for i_0<K-1, it concludes the proof for the case r_i_0< 𝐑<r_i_0+1 for all i_0<K-1. Note that, in this case, the value 𝐑 is eventually constant, 𝐑=r_i_0. To prove (iii) we must see that if 𝐑=r_i_0>r_1, then 𝐑∈{r_i_0-1,r_i_0}, eventually almost surely. As we have already proved 𝐑≤ r_i_0 eventually almost surely, it is enough to prove that c_i_0-1<1. First observe that, for all ℓ>0 fixed, with probability one for n large enough ‖ V_n-P_n,ℓ^J_i_0-1‖_L^2(J_i_0+1)>‖ V-P_ℓ^J_i_0-1‖_L^2(J_i_0+1)/2>0 Also, ‖ V_n-P_n,d^I_i_0-1‖_L^2(I_i_0-1)→ 0 with probability one, for n large enough. Then c_i_0-1→ 0 almost surely. So, eventually c_i<1. §.§ On the assumptions H1 and H2 of Theorem <ref> We now address an obvious question: which conditions on the set S enable us to guarantee that assumptions H1 and H2 in Theorem <ref> (that guarantee the convergence of the algorithm) are fulfilled? The answer is given in the next result. Under the assumptions of Lemmas <ref> and <ref>. 
Then, the algorithm defined in Subsection <ref> is consistent, in the sense that it provides eventually with probability one a lower bound for 𝐑, provided that U_n= (log (n)/n)^1/2d-η, where η is any value strictly positive. Let us prove that the choice of U_n fulfils condition H1 of Theorem <ref>. We have to prove that if 𝐑≥ b then for a fixed η>0, ‖ V_n-P_n^I‖_L^2(I)<U_n where I= [0,b]. Since 𝐑≥ b, ‖ V_n-P_n,d^I‖_L^2(I)≤‖ V-V_n‖_L^2(I). So we have to prove that if 𝐑>b then for any fixed η>0, ‖ V-V_n‖_L^2(I)<U_n. This follows from Theorem 3 in <cit.>, which proves that with probability one, for all n large enough, d_H(ℵ_n, S)≤(2/δω_dlog (n)/n)^1/d, together with Lemmas <ref> and <ref>. Let us prove that hypothesis H2 of Theorem <ref> holds. By Theorem 1 in <cit.> V_n is absolutely continuous , then by Lemma 3 in <cit.>, with probability one, for all n, ‖ V_n-P_n,ℓ^J_i‖_L^2(J_i)≤√(2)π/2(ℓ+1)‖ V_n'‖_L^2(J_i) for all i=1,…,K-1, where ℓ is even. To bound ‖ V_n'‖_L^2(J_i) we will use again <cit.>: in the points t where V_n'(t) exists, it holds that V_n'(t)= ℋ^d-1(∂ B(ℵ_n,t))≤ d V_n(t)/t for t>0. Since for all t>0 0≤ V_n(t)≤ V(t) it follows that ‖ V_n-P_n,ℓ^J_i‖_L^2(J_i)≤√(2)π/2(ℓ+1)dV(r_K)√(∫_r_i^r_K 1/t^2dt)≤√(2)π/2(ℓ+1) dV(r_K) √(1/r_1-1/r_K) and then H2 holds. § ESTIMATION OF THE POLYNOMIAL COEFFICIENTS: CONVERGENCE RATES Theorem 1 in <cit.> states that the coefficients of the best polynomial approximation of the estimated volume function V_n, are consistent estimators of the coefficients of V. In this section we obtain the rates of convergence for those estimators. First we will assume that either 𝐑 is known, or we have an underestimation. In order to do that, we will use the following Lemma, whose proof is given in the appendix. Let [a,b]⊂ℝ. There exists a constant κ_d>0 such that for any pair of polynomials f(t)=∑_i=0^d α_i t^i and g(x)=∑_i=0^d β_i t^i defined on [a,b], | α_i - β_i |≤κ_d ‖ f-g‖_L^2([a,b]). Moreover, κ depends only on d and [a,b]. The following two results provide the convergence rates for the estimation of the polynomial coefficients. Not surprisingly, these rates are of “non-parametric type" with orders O((log(n)/n)^1/d) depending on the dimension. Let S⊂ℝ^d be compact such that the Hausdorff dimension of S is d'≤ d. Let ℵ_n be a sample iid of X whose distribution P_X is assumed to be standard w.r.t to ℋ^d', the d'-dimensional Hausdorff measure on S. Assume that S has polynomial reach 𝐑>0. Let P_n,d^I(t)=θ_0n+…+θ_dnt^d for t∈ I⊂ [0,𝐑] as in (<ref>). Then, there exists κ_d>0 such that, with probability one, lim sup_n→∞(n/log(n))^1/d'max_i |θ_i-θ_in|≤ 2κ_d C(2/δω_d)^1/d' δ being the standardness constant given by (<ref>), and C>0. From B(S,(1-d_H(ℵ_n,S))s)⊂ B(ℵ_n,s)⊂ B(S,s), it follows that V(s)-V_n(s)≤ V(s)-V((1-d_H(ℵ_n,S))s). Proceeding as in Lemma <ref> , ‖ V-V_n‖_L^2(I)≤ C d_H(ℵ_n,S). It can be proved, following the same ideas used to prove Theorem 3 in <cit.> that, with probability one, lim_n→∞(n/log(n))^1/d'd_H(ℵ_n,S)≤(2/δω_d)^1/d'. This together with max_i |θ_i-θ_in|≤ 2κ_d ‖ V-V_n‖_L^2(I). proves (<ref>). Moreover, since the output of the algorithm given in subsection <ref> is constant for n large enough, we can obtain the rates of convergence for the estimators of the coefficients as given by the next result. Under the assumptions of Lemmas <ref> and <ref>. Let 𝐑 be the output of the algorithm given in subsection <ref>. Assume r_i_0<𝐑<r_i_0+1 for some i_0∈{1,….K-1}. Let P_n,d^[r_1,𝐑](t)=∑_i=0^d θ_int^i for t∈ [r_1,𝐑] as in (<ref>). 
Then, with probability one, for n large enough max_i |θ_i-θ_in|=O((log(n)/n)^1/d). In the proof of Theorem <ref> it is shown that in the case r_i_0<𝐑<r_i_0+1 the output 𝐑̂ of the algorithm is eventually constant, almost surely and a lower bound of the true value 𝐑. Then this result is a direct consequence of (<ref>) together with Lemmas <ref> and <ref>. § COMPUTATIONAL ASPECTS. NUMERICAL EXPERIMENTS In this section we will study the performance of the algorithm presented in subsection <ref> for three examples of polynomial volume sets in the plane: 1 The “Pacman set”, defined as S_P=B(0,1)∩ H^c where H={(x,y): x>0 and y>0}. This set is shown in Figure <ref> together with two parallel sets, B(ℵ_n,0.08) and B(ℵ_n,3), of a sample ℵ_n drawn on S. It can be shown that in this case the polynomial reach is 𝐑=1 and its volume function, for 0≤ t≤ 1, is the polynomial V(t)=(5π/4-1)t^2+(2(1+3π/4)t+3π/4. 2 The “Union-of-squares”: S_U=[2,4]× [-1,1]∪ [-1,1]^2. Its polynomial reach is 𝐑=1 and its volume function, for 0≤ t≤ 1, is the polynomial V(t)=2π t^2+16π t+8. In Figure <ref> it is shown the set, and two parallel sets of the samples B(ℵ_n,0.1) and B(ℵ_n,0.7). 3 The “Frame" set S_F=[-1,1]^2∖ M where M⊂ (-1,1)^2 is a square with side length F<1. Its polynomial reach is 𝐑=F and its volume function is V(t)= 4-F^2+4(F+2)t+(π-4)r^2 t∈ [0,F/2] 4+8r+π r^2 t>F/2 . We took F=1 for the simulations. In the previous section we have studied the problem of the estimation of the coefficients when the polynomial is adjusted in the interval [0,𝐑], 𝐑 being the output of our algorithm. It is easy to demonstrate that the volume function for the “Pacman set” S_P is differentiable at t=1, while the volume function for the union of squares is not differentiable at t=1. This makes S_P a more challenging example for estimating its polynomial reach, as we have to detect a smoother change point. For both S_P and S_U we have applied the algorithm using three different grids of values 0<r_1^g<…<r_K_g^g for g=1,2,3, denoted by gr1 , gr2 and gr3. In the three cases we took r_K=1.98. For g=1,3, we took r^g_j-r^g_j-1=0.4 for all j=2,…,r_K_2^2, r_1^1=0.2 and r_1^3=0.3. For gr2 we took r^2_j-r^2_j-1=0.3 for all j=2,…,r_K_2^2 and r^2_1=0.3. The grid choice is a crucial point in the algorithm. Our numerical experiments suggest to avoid small values of the increments r^g_j-r^g_j-1, which could lead to numerical instabilities associated with the estimation of the L^2 norms. In the case of S_F, where 𝐑 is obviously smaller than in the other examples (in fact it is 𝐑=0.5) we have used other different grids (denoted again g1, gr2 and gr3) with values 0<r_1^g<…<r_K_g^g for g=1,2,3. In this case gr1 is a grid from r_1^1=0.1 to r_K_1^1=1.5 and the grid step is 0.2, gr2 is a grid from r^2_1=0.1 to r_K_2^2=1.5 and the grid step is 0.25 and gr3 goes from r^3_1=0.2 to r_K_3^3=1.5 with a grid step of 0.3. The computations of P_n,d^I_i and P_n,ℓ^J_i are performed by using Bernstein polynomials. The norms are evaluated over grids with step 0.001. As a consequence of the use of three grids, for each simulation run, we have obtained three values of 𝐑. In all cases, the c alculation of V_n has been done by a Monte-Carlo approximation using a sample of one million points on a square containing the sets. The outputs for 𝐑 in S_P, S_U and S_F are summarized in Tables <ref>, <ref> and <ref>, respectively (see Appendix B). 
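As a concrete reference for the experiments just described, the sketch below implements the Step 0 / Step 1 stopping rule on a grid of radii r_1<…<r_K, given the grid of evaluation points and the Monte Carlo estimate of V_n. For simplicity it replaces the Bernstein-polynomial projections by ordinary least-squares fits evaluated on the same grid, and the small constant guarding the denominator, as well as all names, are illustrative choices not prescribed by the algorithm itself.

```python
import numpy as np

def poly_l2_dist(t, v, degree):
    """L2 distance on the grid t between v and its least-squares polynomial fit of
    degree <= `degree` (a stand-in for the Bernstein-polynomial projection)."""
    coef = np.polyfit(t, v, deg=min(degree, len(t) - 1))
    return np.sqrt(np.trapz((v - np.polyval(coef, t)) ** 2, t))

def reach_lower_bound(t_grid, Vn, r_values, d, ell, U_n):
    """Sketch of the Step 0 / Step 1 algorithm; r_values = [r_1, ..., r_K]."""
    def on(a, b):
        m = (t_grid >= a) & (t_grid <= b)
        return t_grid[m], Vn[m]
    # Step 0: V_n is already far from a degree-d polynomial on [0, r_1]
    if poly_l2_dist(*on(0.0, r_values[0]), d) > U_n:
        return 0.0
    # Step 1: first index i with c_i > 1, then report r_{i-1}
    r_K = r_values[-1]
    for i in range(len(r_values) - 1):            # i = 0 encodes the paper's i = 1
        num = poly_l2_dist(*on(0.0, r_values[i]), d)
        den = poly_l2_dist(*on(r_values[i], r_K), ell) + 1e-12   # guard (our choice)
        if num / den > 1.0:
            return r_values[i - 1] if i > 0 else 0.0
    return r_values[-2]

# Once a lower bound R_hat is available, the volume-polynomial coefficients are read
# off by least squares on [0.1, R_hat] as in the experiments, with the constant term
# estimating the volume of S and the linear term its surface measure:
# theta = np.polyfit(t_fit, Vn_fit, deg=d)[::-1]   # ascending order
```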
These results suggest that, in all instances, increasing the parameter ℓ reduces the number of cases where 𝐑 is overestimated although, as we will see below, this does not always improve the coefficient estimation. On the other hand, the number of overestimations depends on the grid, ranging from 0 to 49, even with 4000 data points (see Table <ref>). As expected, the estimation of the reach for the “frame set” S_F is a more challenging problem than for S_P or S_U, since the reach is smaller, which requires the use of finer grids and larger sample sizes. This is also seen in the estimation of the coefficients. §.§ The case where 𝐑 is small To demonstrate the algorithm's performance when 𝐑 is small, we have considered the “Union-of-squares” set S_U, where both squares are separated by a distance of 2λ (thus, 𝐑=λ). When r_1>2λ (with λ=0.01, 0.05), the algorithm should halt at step 0. For U_n, we set η=1/10. As shown in Table <ref>, the algorithm generally stops at step 0. Intuitively, this happens because the empirical volume V_n significantly deviates from a polynomial of degree 2 even over small intervals starting at 0. §.§ Estimation of the coefficients To assess the coefficient estimation, we run the algorithm 1000 times. The coefficients are estimated by least squares over a grid from 0.1 to 𝐑 with a grid step 0.01. The reason to start at 0.1 is to avoid the obvious infra-estimation of V(s) provided by V_n(s) when s is small (recall that V_n(0)=0). Also, we have excluded from the least squares estimation process those runs where 𝐑 coincides with the minimum possible value r_1 (see the algorithm above), in order to avoid the use of too small intervals leading to inaccurate estimations. The outputs are given in Tables <ref> and <ref> for S_P, Tables <ref> and <ref> for S_U, and in Tables <ref> and <ref> for S_F. The results show that the estimation of the first two polynomial coefficients, which are our main target here, is more accurate than the estimation of the highest degree coefficient. On the other hand, as expected, the estimations are better in those grids where the algorithm estimates 𝐑 with a higher average value. Lastly, as with the estimation of 𝐑, the case of the S_F set is harder and requires more data. § SOME CONCLUSIONS We place ourselves in the, quite broad, context of sets S having a polynomial volume function on some interval [0,r]. We deal with the estimation of the polynomial reach, that is, the supremum 𝐑 of the values r for which the polynomial condition holds. The available information is just a random sample inside the set S of interest. Some plausible estimators, such as the one considered in Proposition <ref>, might not be easy to implement in practice. Thus, since the interval [0,𝐑] of polynomial reach is mainly used in practice to estimate the relevant coefficients of the polynomial volume, a conservative infra-estimation, such as the one analyzed in Theorem <ref>, might be enough. The minimum-distance estimators of the polynomial coefficients yield, as a by-product, estimators of the volume and the surface measure of the set S. It is important to note that, unlike other volume and surface area estimators available in the literature (see some references below), this procedure requires only an inside sample, rather than two samples, inside and outside the set. Also, it holds under very general conditions for the set S: no convexity or r-convexity is assumed, no positive reach or rolling condition is imposed.
Finally, the estimation method does not essentially relies on a smoothing parameter (though some tuning parameters are unavoidably involved in the algorithm). For some references and background on volume/surface area estimation see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. Quite predictably, given the nature of the problems at hand, the proposed methods only work in practice with large samples sizes (around 5000 sample points) which is not a serious limitation in cases where we adopt a Monte Carlo approach, based on simulated samples, to estimate the geometric parameters (polynomial reach, volume, surface area) and the set is in fact known. As an interesting open problem we could mention the development of statistical tests of low dimensionality, based on the estimators of the polynomial coefficients. § APPENDIX A: PROOF OF LEMMAS 1-3 Since S is compact it follows that max_a∈∂ Sd(a,ℵ_n)→ 0 a.s., and d_H(S,ℵ_n)→ 0 a.s. Observe that for n large enough, max_p∈∂ Sd(p,ℵ_n)<γ_n<b/2. Let us prove that, for all s∈ [2γ_n,b], B(S,s-max_p∈∂ Sd(p,ℵ_n))⊂ B(ℵ_n,s)⊂ B(S,s). We have to prove only the first inclusion, the second one follows from ℵ_n⊂ S. Let z∈ B(S,s-max_p∈∂ Sd(p,ℵ_n)). Since s≥ 2d_H(S,ℵ_n), S⊂ B(ℵ_n,s). So if z∈ S we have z∈ B(ℵ_n,s). Let us consider the case z∉ S. Let z^* ∈∂ S the projection onto ∂ S of z. Since z∉ S, d(z,z^*)≤ s-max_p∈∂ Sd(p,ℵ_n). Since z^*∈∂ S there exists X_i∈ℵ_n such that d(z^*,X_i)≤max_p∈∂ Sd(p,ℵ_n). Then d(z,ℵ_n)≤ d(z,z^*)+d(z^*,ℵ_n)≤ s-max_p∈∂ Sd(p,ℵ_n)+ max_p∈∂ Sd(p,ℵ_n)≤ s, and we conclude that z∈ B(ℵ_n,s). This proves (<ref>) and, for s∈[2γ_n,b], V(s)-V_n(s)≤μ[B(S,s)∖ B(S,s-max_p∈∂ Sd(p,ℵ_n))]=V(s)-V(s-max_p∈∂ Sd(p,ℵ_n)). Let us now prove that V is Lipschitz. First, recall that, as V is monotone, the derivative V'(t) exists except for a countable set of points t. Also, when V'(t) does exist, it coincides with, ℋ^d-1(∂ B(S,t)), the d-1-dimensional Hausdorff measure of the boundary of the parallel set B(S,t), see <cit.>. But, as shown in <cit.>, ℋ^d-1(∂ B(S,t))≤ d (V(t)-V(0))/t for t>0. Now, as we are assuming that V^+(0) is finite, the upper limit of ℋ^d-1(∂ B(S,t)), as t→ 0 is also finite, so that ℋ^d-1(∂ B(S,t)) must be finite almost everywhere in a neighborhood [0,χ] for some χ>0. Therefore, there is a constant L_1 such that |V'(t)|<L_1 almost everywhere on [0,χ]. On the other hand, according to Theorem 1 in <cit.>, for all ϵ>0, the volume function V is absolutely continuous on [ϵ, b] and its derivative can be expressed, almost everywhere, as t^d-1α(t) , for some monotone decreasing function α. As a consequence there is a positive constant C such that |V'(t)|<C almost everywhere on [0,b]. We thus conclude that V is Lipschitz continuous on [0,b] with Lipschitz constant C. Finally, the asymptotic bound (<ref>) follows from ‖ V-V_n‖_L^2([0,2γ_n])≤ (V(2γ_n)^22γ_n)^1/2 = ( V(0)+V^+(0)2γ_n+o(2γ_n))(2γ_n)^1/2 ≤ 2V(0)(2γ_n)^1/2, and then, using (<ref>) and (<ref>), ‖ V-V_n‖_L^2([0,b])≤ ‖ V-V_n‖_L^2([0,2γ_n])+‖ V-V_n‖_L^2([2γ_n,b]) ≤ 2V(0)(2γ_n)^1/2+√(b)Cmax_p∈∂ Sd(p,ℵ_n) eventually with probability one. Since L_0(∂ S)<∞ there exist ϵ_0 small enough such that we can cover ∂ S by 4L_0(∂ S)/(ω_dϵ^d-1) balls of radius 2ϵ centered at ∂ S for all 0<ϵ<ϵ_0. Let us define ζ_n=(n/log(n))^1/d, and κ=(2/(δω_d))^1/d ℙ(ζ_nmax_p∈∂ Sd(p,ℵ_n)> κ)≤ζ_n^d-14L_0(∂ S)/ω_dκ^d-1(1- 2/ζ_n^d)^n≤ζ_n^d-14L_0(∂ S)/ω_dκ^d-1n^-2 The result follows now from Borel-Cantelli's lemma. 
Let 𝒫_1={P_0,...,P_d} be an orthonormal base (in L^2([a,b])) for the set of polynomials of degree at most d, and the base 𝒫_2={1,t,...,t^d }. Then, f(t)=∑_i=0^d α^'_i P_i(t) and g(t)=∑_i=0^d β^'_i P_i(t). By Pythagoras ‖ f-g ‖_L^2([a,b])^2= ∑_i=0^d (α^'_i- β^'_i )^2. For all i=0,1,…,d |α^'_i- β^'_i |≤‖ f-g ‖_L^2([a,b]). Let A be the change of coordinates matrix from 𝒫_1 to 𝒫_2, then α-β= A ( α^' -β^' ) where α=(α_1,…,α_d), β=(β_1,…,β_d) α^'=(α^'_1,…,α^'_d) and β^'=(β_1^',…,β_d^') Lastly, for all i=0, …,d. |α_i - β_i |≤‖α-β‖≤‖ A ‖‖α^' -β^'‖ = ‖ A ‖‖ f-g ‖_L^2([a,b]). § APPENDIX B: TABLES 9 [Aaron et al.2017]aar17 Aaron, C., Cholaquidis, A. and Cuevas, A. (2017). Detection of low dimensionality and data denoising via set estimation techniques. Electronic Journal of Statistics, 11, 4596–4628. [Aaron et al.2022]aar22 Aaron, C., Cholaquidis, A. and Fraiman, R. (2022). Estimation of surface area. Electronic Journal of Statistics, 16, 3751–3788. [Arias-Castro et al.2019]ari19 Arias Castro, E., Pateiro-López, B., Rodríguez-Casal, A. (2019). Minimax estimation of the volume of a set under the rolling ball condition. Journal of the American Statistical Association-Theory and Methods, 114. 1162-1173. [Ambrosio et al.2008]amb08 Ambrosio, L., Colesanti, A., and Villa, E. (2008). Outer Minkowski content for some classes of closed sets. Mathematische Annalen, 342(4), 727–748. [Baldin and Reiß2021]bal21 Baldin, N. and Reiß, M. (2021). Unbiased estimation of the volume of a convex body. Stochastic Processes and their Applications 126, 3716–3732. [Bernstein et al2000]bern Bernstein, M., De Silva, V., Langford, J. C., and Tenenbaum, J. B. (2000). Graph approximations to geodesics on embedded manifolds. Technical report, Department of Psychology, Stanford University. 961–968. [Chazal et. al2017]chazal2017 Chazal, F., Cohen-Steiner, D., Lieutier, A., Mérigot, Q., and Thibert, B. (2017). Inference of curvature using tubular neighborhoods. In Modern Approaches to Discrete Curvature. 133–158. Springer, Cham. [Cholaquidis, Fraiman, Moreno2022]chola22 Cholaquidis, A., Fraiman, R., and Moreno, L. (2022). Universally consistent estimation of the reach. Journal of Statistical Planning and Inference, 225, 110–120. [Cholaquidis et al.2014]chola:14 Cholaquidis, A., Cuevas, A. and Fraiman, R. (2014) On Poincaré cone property. Ann. Statist., 42, 255–284. [Cuevas and Rodríguez-Casal2004]cue04 Cuevas, A. and Rodríguez-Casal, A. (2004). On boundary estimation. Adv. in Appl. Probab. 36 340–354. [Cuevas2009]cue09 Cuevas, A. (2009). Set estimation: Another bridge between statistics and geometry. BEIO, 25, 71–85. [Cuevas et al.2007]cue07 Cuevas, A., Fraiman, R. and Rodríguez-Casal, A. (2007). A nonparametric approach to the estimation of lengths and surface areas. Ann. Statist. 35, 1031-1051. [Cuevas, Fraiman and Pateiro-López2012]cue12 Cuevas, A., Fraiman, R. and Pateiro-López, B. (2012). On statistical properties of sets fulfilling rolling-type conditions. Adv. in Appl. Probab. 44, 311–329. [Cuevas, Fraiman and Györfi, L.2013]cue13 Cuevas, A., Fraiman, R. and Györfi, L. (2013). Towards a universally consistent estimator of the Minkowski content. ESAIM: Probability and Statistics, 17, 359–369. [Cuevas and Pateiro-López2018]cuepat18 Cuevas, A., and Pateiro-López, B. (2018). Polynomial volume estimation and its applications. Journal of Statistical Planning and Inference, 196, 174-184. [Federer1959]fed59 Federer, H. (1959). Curvature measures. Trans. Amer. Math. Soc. 93 418–491. [Federer1969]fed:69 Federer, H. (1969). 
Geometric measure theory Springer. [Golitschek1991]goli:91 Golitschek, M. (1991). Müntz-Jackson Theorems in L_p(0,1), 1≤ p<2 Journal of Approximation Theory. 67, 337–346. [Jiménez and Yukich2011]jim11 Jiménez, R. and Yukich, J.E. (2011). Nonparametric estimation of surface integrals. Ann. Statist. 39, 232–260. [Penrose2023]pen23 Penrose, M. D. (2023). Random Euclidean coverage from within. Probability Theory and Related Fields, 185, 747–814. [Rataj and Winter2010]rataj10 Rataj, J. and Winter, S.(2010). On Volume and Surface Area of Parallel Sets. Indiana University Mathematics Journal, 59, 1661–1685. [Stacho, L. L.1976]st76 Stachó, L. L.(1976) On the volume function of parallel sets. Acta Sci. Math., 38, 365-374.
http://arxiv.org/abs/2307.03042v2
20230706150641
Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain
[ "Aryo Pradipta Gema", "Luke Daines", "Pasquale Minervini", "Beatrice Alex" ]
cs.CL
[ "cs.CL", "cs.LG" ]
Adapting pretrained language models to novel domains, such as clinical applications, traditionally involves retraining their entire set of parameters. However, this approach is increasingly proven to be impractical owing to the substantial computational requirements associated with training such large language models. To address this issue, Parameter-Efficient Fine-Tuning (PEFT) techniques offer a viable solution by selectively fine-tuning a small subset of additional parameters, significantly reducing the computational requirements for domain adaptation. In this study, we propose Clinical LLaMA-LoRA, a PEFT adapter layer built upon the open-sourced LLaMA model. Clinical LLaMA-LoRA is trained using clinical notes obtained from the MIMIC-IV database, thereby creating a specialised adapter designed for the clinical domain. Additionally, we propose a two-step PEFT framework which fuses Clinical LLaMA-LoRA with Downstream LLaMA-LoRA, another PEFT adapter specialised for downstream tasks. We evaluate this framework on multiple clinical outcome prediction datasets, comparing it to clinically trained language models. Our proposed framework achieves a state-of-the-art AUROC score averaged across all clinical downstream tasks. We observe substantial improvements of 6-9% AUROC score in the large-scale multilabel classification tasks, such as diagnoses and procedures classification. § INTRODUCTION Large Language Models (LLMs) have consistently achieved state-of-the-art performance across various NLP tasks. However, while these models exhibit impressive generalisation abilities, they often struggle to perform in specialised domains such as clinical applications, primarily due to the absence of domain-specific knowledge. The complexity of medical terminology and the presence of incomplete sentences in clinical notes contribute to this challenge <cit.>. Unfortunately, studies have indicated that even LLMs pretrained with datasets comprising biomedical publications still exhibit suboptimal performance when applied to downstream clinical applications, particularly when compared to LLMs pretrained with clinical notes <cit.>. This observation suggests that there are intrinsic nuances specific to the clinical context that can only be effectively captured if LLMs undergo pretraining using clinical datasets. The current approach of adapting pretrained LLMs to the clinical domain typically involves fine-tuning the entire model parameters <cit.>. However, due to the rapid increase in the size of LLMs, such a practice demands extensive computational resources, which may not be readily accessible to all researchers. Consequently, this challenge will further exacerbate the disparity between the resource-rich and resource-constrained research institutions <cit.>. To address the substantial computational demands, studies have proposed various Parameter-Efficient Fine-Tuning (PEFT) techniques. These techniques present a practical solution by fine-tuning a small subset of additional parameters while keeping the remaining pretrained parameters fixed. As a result, this strategy significantly alleviates the computational burden while achieving comparable performance to that of full fine-tuning. In this study, we propose a two-step PEFT framework (see <ref>). Firstly, we introduce Clinical LLaMA-LoRA, a Low-Rank Adaptation <cit.> PEFT adapter built upon the open-source Large Language Model Meta AI (LLaMA) <cit.>.
Then, we introduce Downstream LLaMA-LoRA, which is trained on top of the pretrained Clinical LLaMA-LoRA. Downstream LLaMA-LoRA is specifically designed for clinical downstream tasks. The fusion of the two adapters achieves state-of-the-art performance in clinical NLP downstream tasks while considerably reducing the computational requirements. This study presents the following contributions: * We introduce Clinical LLaMA-LoRA, a PEFT-adapted version of the LLaMA model tailored specifically for the clinical domain. * We provide comparisons of multiple PEFT techniques in terms of language modelling performance based on perplexity score, shedding light on the optimal PEFT techniques for the clinical domain-adaptive pretraining. * We introduce Downstream LLaMA-LoRA, built on top of Clinical LLaMA-LoRA and tailored specifically for the clinical downstream tasks. * We evaluate the proposed mixture of Clinical LLaMA-LoRA and Downstream LLaMA-LoRA on downstream clinical datasets and tasks. Our proposed framework showcases improvements in AUROC scores over the existing clinical LLMs. § BACKGROUND §.§ Biomedical Large Language Models General-domain LLMs continue to face challenges when confronted with domain-specific tasks. The complexity associated with the requisite domain knowledge is recognised as a significant factor <cit.>, particularly within the biomedical domain. Consequently, numerous studies have attempted to adapt LLMs specifically for the biomedical domain. An early example of such adaptation is BioBERT <cit.>, which was pretrained using biomedical research articles from PubMed and PubMed Central. This adaptation has shown improved performance across various biomedical NLP tasks. Recognising the significance of biomedical-specific vocabularies, <cit.> proposed PubMedBERT, which is pretrained on biomedical data from scratch and initialised the model vocabulary with the biomedical corpus. The growing interest in biomedical NLP research has led to the adaptation of even larger models to the biomedical domain <cit.> While these biomedical LLMs have demonstrated advancements in various biomedical NLP benchmarking tasks, studies have revealed that clinical LLMs still outperform their biomedical counterparts in numerous clinical downstream tasks <cit.>. This suggests that domain-adaptive pretraining using clinical data is still the de facto protocol in adapting LLMs to the clinical domain. §.§ Clinical Large Language Models Clinical LLMs are often fine-tuned with clinical data from an LLM that is already pretrained with datasets that encompass broader topics. For instance, Bio+ClinicalBERT <cit.> is domain-adaptively pretrained using clinical notes from the Medical Information Mart for Intensive Care (MIMIC)-III database <cit.>, starting from a pretrained BioBERT <cit.>, which itself is pretrained on biomedical articles. BlueBERT <cit.> is domain-adaptively pretrained using PubMed abstracts and MIMIC-III clinical notes from a BERT model <cit.>, that is pretrained with general-domain texts. Similarly, Clinical-T5 <cit.> is domain-adaptively pretrained using the union of MIMIC-III and MIMIC-IV <cit.> clinical notes from T5-base <cit.>, another general-domain LLM. All these studies share a common approach, which is to fine-tune the entire model parameters. With massive LLMs, this method has become cost-prohibitive and inaccessible for many researchers. 
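To make the scale of this gap concrete, the following back-of-the-envelope count compares full fine-tuning of a 7-billion-parameter LLaMA-like model with a low-rank adapter of the kind discussed in the next subsection. The layer count, hidden size, rank, and the choice of adapted projections are illustrative assumptions for this sketch, not the exact configuration used in this work.

```python
# Illustrative parameter counting: full fine-tuning vs. a rank-r LoRA adapter.
# Assumes a LLaMA-7B-like model (32 layers, hidden size 4096) and LoRA on the
# query/value projections only; these figures are assumptions for illustration.
n_layers, d_model = 32, 4096
full_ft_params = 6.7e9                                    # roughly all weights trainable

rank = 16
lora_params_per_matrix = rank * (d_model + d_model)       # A: r x d, B: d x r
lora_params = n_layers * 2 * lora_params_per_matrix       # q_proj and v_proj

print(f"full fine-tuning : {full_ft_params:,.0f} trainable parameters")
print(f"LoRA (r={rank})    : {lora_params:,} trainable parameters "
      f"({100 * lora_params / full_ft_params:.3f}% of the model)")
```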
§.§ Parameter-Efficient Fine-Tuning for Large Language Models Suppose that we have a pretrained LLM P_Φ(y|x); fine-tuning it can be effectively defined as finding the most appropriate parameter changes ΔΦ by optimising the fine-tuning objective. A conventional, full fine-tuning process means that the model needs to learn a ΔΦ whose dimension is equal to the entire parameters of the pretrained LLM |ΔΦ| = |Φ_0|, which is computationally expensive. PEFT techniques address this by tuning the delta ΔΦ, which corresponds to a very small fraction of additional trainable parameters during the fine-tuning process. Adapter tuning <cit.> is an early PEFT method that involves adding small additional parameters called adapters to each layer of the pretrained model and strictly fine-tuning this small set of new parameters. LoRA <cit.> is another PEFT approach that trains low-rank matrices to represent the attention weights update of transformer-based models. Another group of PEFT approaches leverages the concept of prompting. Prefix Tuning <cit.> optimises a sequence of continuous task-specific vectors, called a prefix, which are trainable parameters that do not correspond to real tokens. P-Tuning <cit.> uses a similar strategy as Prefix tuning with a focus on text understanding tasks, as opposed to generative tasks. Prompt tuning <cit.> simplifies Prefix tuning by introducing trainable tokens, called soft prompts, for each downstream task. <cit.> introduced P-tuning v2 which uses deep prompt tuning to address the lack of performance gain in the previous prompt tuning techniques. By fine-tuning a small fraction of additional parameters, all PEFT approaches alleviate the issue of extensive computational resource requirements. § METHODOLOGY §.§ Problem Statement <ref> shows the comparison between the current and proposed problem definitions. The general problem can be decomposed into two stages: Domain-adaptive Pretraining. Given a pretrained general LLM P_Φ(y|x) with its parameters Φ and a training dataset 𝒵 = { (x_i, y_i) }_i=1,...,N. To adapt to the new domain, the model needs to update its weight iteratively from its pretrained state Φ_0 to Φ = Φ_0 + ΔΦ. This process of maximising the objective function can be defined as: _Φ∑_(x, y) ∈𝒵∑_t=1^|y|log(P_Φ(y_t | x, y_<t)) In the current paradigm, a full fine-tuning process means that the model needs to learn a ΔΦ whose dimension is equal to the entire pretrained parameters |ΔΦ| = |Φ_0|, which is computationally expensive. In the proposed paradigm, we tune only small additional parameters θ such that Φ = Φ_0 + ΔΦ(θ) whose dimension is very small compared to the original parameters |θ| ≪ |Φ_0|. Thus, the training objective can be redefined as: _θ∑_(x, y) ∈𝒵∑_t=1^|y|log(P_Φ + ΔΦ(θ)(y_t | x, y_<t)) In the current paradigm, the outcome of domain-adaptive pretraining would be a clinically-adapted LLM. While in the proposed paradigm, the outcome would be the clinical PEFT component, which can be combined with the untouched pretrained general LLM for downstream applications. Downstream Fine-tuning. In the current paradigm, the pretrained clinical LLM is fine-tuned to the downstream tasks, such as document classification tasks. Suppose that we have a pretrained clinical LLM P_Φ, Θ with its domain-adapted parameters Φ and a newly initialised classifier layer Θ, as well as a training dataset 𝒵 = { (x_i, y_i) }_i=1,...,N. 
We want to maximise a specific loss function, such as a cross-entropy loss: _Φ, Θ1/N∑_i=1^N y_ilog(P_Φ, Θ(x_i)) In contrast, in the proposed paradigm, the fine-tuning process only updates the small additional parameters ΔΦ(θ) and the classifier head Θ: _θ, Θ1/N∑_i=1^N y_ilog(P_Φ + ΔΦ(θ), Θ(x_i)) In fact, we can also decompose the fine-tuning into an additional "delta-updating" process: _θ, ϕ, Θ1/N∑_i=1^N y_ilog(P_Φ + ΔΦ(θ)+ ΔΦ(ϕ), Θ(x_i)) Similar to the Domain-adaptive Pretraining stage, the dimensions of the additional parameters θ and ϕ are very small compared to the original parameters. By updating only the additional parameters and the classifier head, the proposed paradigm reduces the computational requirements, making it more efficient and feasible, especially for clinical settings that are often resource-constrained. §.§ Clinical LLaMA-LoRA Clinical LLaMA-LoRA is a LoRA adapter built upon LLaMA <cit.>. Clinical LLaMA-LoRA is domain-adapted to the clinical domain and fine-tuned to the downstream tasks following the proposed procedure shown on the right-hand side of <ref>. LLaMA models In this study, we evaluate two LLaMA models; the 7 billion parameters version of LLaMA <cit.> and the 7 billion parameters version of PMC-LLaMA<cit.>. LLaMA was pretrained with an array of texts from multiple sources, such as English CommonCrawl, Wikipedia, ArXiv, and C4 <cit.>. While, PMC-LLaMA is a domain-adapted LLaMA model that was pretrained on 4.8 million biomedical academic papers from PubMed Central. Domain-adaptive Pretraining Clinical LLaMA-LoRA is trained using a combination of MIMIC-IV de-identified discharge summaries (331,794) and radiology reports (2,321,355), resulting in a collection of 2,653,149 individual clinical notes. We evaluate five different PEFT techniques, which include LoRA, Adaptation Prompt, Prefix Tuning, Prompt Tuning, and P-tuning. Our approach follows the autoregressive language modelling pretraining objective employed in the original LLaMA training. To ensure compatibility with available computational resources, we use fixed model hyperparameters that allow us to fit the LLM into a single NVIDIA A100-80GB GPU (see Appendix <ref>). We optimise the hyperparameters specific to each PEFT method using Gaussian Process regression for Bayesian Optimisation <cit.> [Specifically, we use the W&B Sweep APIs: <https://docs.wandb.ai/guides/sweeps>] with a maximum of 20 trials. The detailed hyperparameters search space can be found in Appendix <ref>. During this stage, we evaluate the perplexity scores of the LLM variants. Downstream Fine-tuning We fine-tune the Clinical LLaMA-LoRA and Downstream LLaMA-LoRA to clinical document classification tasks: * Prolonged mechanical ventilation (PMV): a binary classification task to predict whether a patient will require mechanical ventilation for more than seven days <cit.>. * In-hospital mortality (MOR): a binary classification task to predict whether a patient will survive during their hospital stay <cit.>. * Length of stay (LOS): a multiclass classification task to predict the length of a patient's hospital stay, categorised into four time-bins: less than three days, three to seven days, one to two weeks, and more than two weeks <cit.>. * Diagnoses (DIAG): a large-scale multilabel classification task to predict the differential diagnoses associated with a patient, represented by simplified ICD-9 diagnosis codes <cit.>. 
* Procedures (PROC): a large-scale multilabel classification task to predict the diagnostics or treatments administered to a patient, represented by simplified ICD-9 procedure codes <cit.>. The label and split statistics of each dataset can be found in <ref>. During this downstream fine-tuning process, we use fixed model hyperparameters to ensure compatibility with the available computational resources, a single NVIDIA A100-80GB GPU (see Appendix <ref>). We optimise the hyperparameters specific to each PEFT method using Gaussian Process regression for Bayesian Optimisation with a maximum of 20 trials. The detailed hyperparameters search space of the PEFT method can be found in Appendix <ref>. For evaluating the performance of the model on these downstream tasks, we report the Area Under the Receiver Operating Characteristic Curve (AUROC) scores. Additionally, we report the macro-averaged AUROC score across all clinical tasks as commonly done in NLP benchmarking tasks <cit.>. §.§ Baseline Models The baseline models used in the evaluation are as follows: * Bio+ClinicalBERT <cit.>: Bio+ClinicalBERT is pretrained on clinical notes from the MIMIC-III database. It is initialised from a biomedical language model called BioBERT <cit.>, which is pretrained on biomedical research articles. * BlueBERT <cit.>: BlueBERT is pretrained on clinical notes from the MIMIC-III database and PubMed abstracts starting from the pretrained checkpoint of BERT <cit.>, a general-domain language model. * CORe <cit.>: CORe is pretrained on clinical notes from the MIMIC-III database and biomedical articles starting from the pretrained checkpoint of BioBERT <cit.>. * UmlsBERT <cit.>: UmlsBERT is pretrained on clinical notes from the MIMIC-III database starting from the pretrained checkpoint of Bio+ClinicalBERT while modifying the architecture and pretraining objective by incorporating knowledge from the Unified Medical Language System (UMLS) Metathesaurus <cit.>. These baseline models have been trained to perform specifically on clinical data, thus providing comparison points for evaluating the performance of the proposed Clinical LLaMA-LoRA in downstream clinical NLP tasks. § RESULTS AND ANALYSIS §.§ Pretraining The pretraining results can be found in <ref>. We employ PEFT techniques to perform domain-adaptive pretraining. All PEFT techniques train a significantly smaller number of parameters, ranging from only 0.001% to 0.24% of the original model parameters, which substantially decreases the computational resources required and shortens the training time. Note that performing full-parameter training of LLaMA and PMC-LLaMA with just a single GPU is unfeasible. Instead, PEFT techniques require less than 24 hours per epoch on average with only a single NVIDIA A100-80GB GPU. Among all the PEFT techniques, LoRA emerges as the best-performing one for both LLaMA and PMC-LLaMA in the clinical domain-adaptive pretraining, achieving the lowest perplexity scores of 2.244 and 2.404, respectively. This pretrained LoRA is referred to as Clinical LLaMA-LoRA in the subsequent sections. The following experiments in downstream fine-tuning will utilise this pretrained Clinical LLaMA-LoRA. §.§ Downstream results From the downstream fine-tuning results shown in <ref>, we can decompose the analysis into multiple research questions: Can LoRA help fine-tune LLaMA from other domains (general and biomedical) to achieve higher AUROC scores in clinical tasks? 
We compare the results obtained by LLaMA and LLaMA + LoRA, as well as PMC-LLaMA and PMC-LLaMA + LoRA, as presented in <ref>. The obtained results consistently demonstrate improved AUROC scores when utilising LoRA across all tasks. The macro-averaged AUROC score of LoRA-equipped LLaMA shows a notable 13.01% increase when compared to the LLaMA-only baseline. Similarly, LoRA-equipped PMC-LLaMA exhibits a 12.2% improvement in macro-averaged AUROC compared to the original PMC-LLaMA Both LLaMA and PMC-LLaMA, when equipped with LoRA, exhibit significant AUROC score improvements in all tasks except the prolonged mechanical ventilation prediction task, which is proven challenging for all model variants. Furthermore, the marginal difference in AUROC scores between PMC-LLaMA and the general-domain LLaMA can be attributed to two factors. Firstly, the original LLaMA has been exposed to biomedical concepts during its pretraining, reducing the need for domain-adaptive pretraining to the biomedical domain. Secondly, clinical NLP tasks are challenging, even for biomedical LLMs. Can LoRA-equipped LLaMA and PMC-LLaMA perform comparably in comparison to clinically trained LMs? We compare the AUROC scores obtained by the baseline models, and LoRA-equipped LLaMA and PMC-LLaMA (see <ref>). Among the baseline models, BlueBERT performs the best with a macro-averaged AUROC score of 69.59%. Compared to BlueBERT, both LLaMA and PMC-LLaMA underperform with macro-averaged AUROC scores of 58.61% and 60.51%, respectively. This finding highlights the importance of clinical-specific fine-tuning. Significant improvements can be observed in LoRA-equipped LLaMA and PMC-LLaMA, with macro-averaged AUROC scores of 71.62% and 72.71%, respectively. We notice considerable improvements in the diagnoses and procedures prediction tasks. For example, LoRA-equipped LLaMA achieves AUROC scores of 78.37% and 87.49% in the diagnoses and procedures prediction tasks, respectively, compared to 73.81% and 77.70% for BlueBERT. This represents improvements of 4.56% in diagnoses prediction and 9.79% in procedures prediction. Improvements are also observed in the results obtained by LoRA-equipped PMC-LLaMA, outperforming BlueBERT by 5% in diagnoses prediction and 9.02% in procedures prediction. Overall, LoRA-equipped LLaMA and PMC-LLaMA achieve higher AUROC scores than the baseline clinical LMs in various clinical prediction tasks, particularly in diagnoses, procedures, and mortality predictions, while maintaining competitive AUROC scores in length-of-stay prediction. However, LoRA-equipped LLaMA and PMC-LLaMA still underperform in prolonged mechanical ventilation prediction. Can LLaMA and PMC-LLaMA with Clinical LLaMA-LoRA achieve higher AUROC scores than the clinically trained LMs? The domain-adaptive pretraining step yields the clinically-trained LoRA adapters for LLaMA and PMC-LLaMA, called Clinical LLaMA-LoRA. We compare the results of Clinical LLaMA-LoRA-equipped LLaMA and PMC-LLaMA with the baseline models. We evaluate Clinical LLaMA-LoRA with and without downstream fine-tuning, referred to as "Trainable" and "Frozen" respectively. The results indicate that Clinical LLaMA-LoRA-equipped LLaMA and PMC-LLaMA outperform the baseline models. LLaMA with a trainable Clinical LLaMA-LoRA achieves an AUROC score of 70.85%, surpassing BlueBERT's score of 69.59%. PMC-LLaMA with a trainable Clinical LLaMA-LoRA achieves an even higher AUROC score of 72.23%. 
These findings demonstrate that the Clinical LLaMA-LoRA contributes to higher AUROC scores for LLaMA and PMC-LLaMA over clinically trained LLMs. Can LLaMA and PMC-LLaMA with Clinical LLaMA-LoRA achieve higher AUROC scores than the other fine-tuning variants? We examine the importance of the domain-adapted LoRA by comparing the results obtained by LLaMA and PMC-LLaMA equipped with Clinical LLaMA-LoRA against the results of LLaMA and PMC-LLaMA fine-tuning, both original and with LoRA. Firstly, we evaluate the frozen pretrained Clinical LLaMA-LoRA. Both LLaMA and PMC-LLaMA with frozen Clinical LLaMA-LoRA do not exhibit a significant increase in performance compared to the original fine-tuning. This indicates that, despite the domain-adaptive pretraining, the limited number of trainable parameters during the downstream fine-tuning restricts the potential improvement that the model can achieve. This reasoning is further supported by the significant improvement observed in the AUROC scores of LLaMA and PMC-LLaMA with trainable Clinical LLaMA-LoRA. LLaMA and PMC-LLaMA with trainable Clinical LLaMA-LoRA achieve 70.85% and 72.23% macro-averaged AUROC scores, respectively, massive improvements from the vanilla fine-tuning performance (58.61% and 60.51% AUROC scores respectively). However, Clinical LLaMA-LoRA does not yield significant improvements when compared to LLaMA and PMC-LLaMA, which are directly equipped with LoRA without pretraining. For instance, we can observe that LLaMA with LoRA achieves a slightly higher macro-averaged AUROC score of 71.62% compared to LLaMA with Clinical LLaMA-LoRA, which achieves 70.85%. Can a downstream LoRA adapter improve the AUROC scores of LLaMA and PMC-LLaMA equipped with Clinical LLaMA-LoRA? By considering Clinical LLaMA-LoRA as the "delta-updating" outcome of the domain-adaptive pretraining, we can view the downstream fine-tuning process as an additional "delta-updating" step. To investigate the impact of this approach, we conduct experiments by adding a Downstream LLaMA-LoRA to LLaMA and PMC-LLaMA models that were already equipped with Clinical LLaMA-LoRA. From <ref>, we can observe that Downstream LLaMA-LoRA fails to improve the performance of LLaMA and PMC-LLaMA with frozen Clinical LLaMA-LoRA. On the other hand, improvement can be observed when adding Downstream LLaMA-LoRA to LLaMA with trainable Clinical LLaMA-LoRA. This combination of LLaMA with trainable Clinical LLaMA-LoRA and Downstream LLaMA-LoRA achieves the highest macro-averaged AUROC score of 72.81%. The macro-averaged AUROC score of Clinical LLaMA-LoRA was almost similar to that of PMC-LLaMA with LoRA, suggesting similar efficacy between Clinical LLaMA-LoRA and the full fine-tuning process that PMC-LLaMA has undergone. Moreover, Clinical LLaMA-LoRA offers the advantage of reduced computational resources and training time, which is aligned with the requirements of practical implementation in clinical settings. Can LoRA help better fine-tune clinically-trained LMs? The baseline models are relatively smaller in size compared to the LLaMA-based models, which may be a better fit to care providers with limited access to computing resources. To that end, we experimented with fine-tuning the baseline models with LoRA. <ref> shows the obtained results. All baseline models see improvements in AUROC scores in all tasks. For instance, the LoRA-equipped BlueBERT achieves an improved macro-averaged AUROC score of 71.56% compared to the conventional fine-tuning with 69.59%. 
This finding highlights the possibility of using LoRA to efficiently fine-tune clinically trained LMs, such as BlueBERT, to downstream use cases. § CONCLUSIONS In this study, we propose a two-step PEFT framework. We introduce Clinical LLaMA-LoRA, a LoRA <cit.> adapter built upon LLaMA <cit.>. Then, we introduce Downstream LLaMA-LoRA, a task-specific adapter that is trained on top of the pretrained Clinical LLaMA-LoRA. The fusion of the two adapters achieves state-of-the-art performance with an AUROC score of 72.81% macro-averaged across all clinical NLP downstream tasks, which represents a 3.22% improvement over the previous best-performing model. Our proposed framework achieves improvement in performance while reducing the computational requirements, which is suited for clinical settings that are often constrained by their computational power. We also find that the LoRA-equipped BlueBERT model achieves a considerable improvement of macro-averaged AUROC score over the full fine-tuning (71.56% compared to 69.59%), with notable improvements in mortality and length-of-stay prediction. These findings further highlight the potential to achieve strong performance without extensive computational resources. Future works may explore developing a schema to address various real-world use cases, building upon the findings of this study. Such a schema would use multiple Downstream LLaMA-LoRA adapters tailored for different use cases while leveraging the pretrained LLM and Clinical LLaMA-LoRA as the foundation. This solution would also be suited for use cases which rely on private data commonly encountered in care provider settings. § LIMITATIONS This study presents a two-step PEFT framework aimed at effectively adapting LLMs to diverse clinical downstream applications. However, the evaluation of our model was restricted to MIMIC-based datasets, which are constrained to English and obtained exclusively within the Commonwealth of Massachusetts, United States of America. Consequently, despite the promising efficacy demonstrated by our proposed method, it would have been advantageous to directly assess its performance across diverse hospital systems spanning various geographical locations and languages. This would enable a more comprehensive understanding of its applicability and generalizability. However, it is essential to acknowledge that conducting such an analysis would require working within a trusted research environment and obtaining the necessary permissions to access the relevant datasets. It is crucial to recognise the restrictions imposed on accessing internal clinical datasets, as they limit our ability to evaluate the effectiveness of our approach across different care provider systems. Therefore, we encourage care providers to conduct internal experiments within their trusted research environment to ensure the efficacy of our proposed method within their specific use cases should they adopt this approach. Despite the demonstrated performance improvements, the proposed model may still be susceptible to spurious correlations. Predicting patient outcomes solely based on clinical notes presents significant challenges due to the other factors that may not be captured within those notes. For instance, the length of patient's in-hospital stay is not solely correlated with their diagnoses and disease progression. Factors such as the patient's insurance status, which is not typically mentioned in clinical notes, can severely impact the duration of a patient's stay. 
Therefore, we encourage end users of such clinical LLMs to consider additional measures to ensure predictions that reflect a holistic view of the patient's situation, instead of relying solely on the predictions of LLMs. § ETHICS STATEMENT In this study, we use MIMIC-based datasets obtained after completing the necessary training. These datasets comply with de-identification standards set by the Health Insurance Portability and Accountability Act (HIPAA) through data cleansing. Due to privacy concerns, we refrain from including direct excerpts of the data in the paper. We also refrain from publicly sharing the pretrained checkpoints. While our model demonstrates effectiveness, it is important to acknowledge the risks associated with relying solely on clinical outcome prediction models. There are crucial pieces of information that can be found beyond the scope of clinical notes. Considering the potential impact on patient health outcomes, it is crucial to exercise caution when utilising these clinical LLMs. Therefore, we propose that the PEFT adapter generated by our framework, in conjunction with the pretrained LLM, should be used as an aid rather than a replacement for trained clinical professionals. Acknowledgements AG was supported by the United Kingdom Research and Innovation (grant EP/S02431X/1), UKRI Centre for Doctoral Training in Biomedical AI at the University of Edinburgh, School of Informatics. PM was partially funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 875160, ELIAI (The Edinburgh Laboratory for Integrated Artificial Intelligence) EPSRC (grant no. EP/W002876/1), an industry grant from Cisco, and a donation from Accenture LLP; and is grateful to NVIDIA for the GPU donations. BA was partially funded by Legal and General PLC as part of the Advanced Care Research Centre and by the Artificial Intelligence and Multimorbidity: Clustering in Individuals, Space and Clinical Context (AIM-CISC) grant NIHR202639. For the purpose of open access, AG has applied a creative commons attribution (CC BY) licence to any author-accepted manuscript version arising. This work was supported by the Edinburgh International Data Facility (EIDF) and the Data-Driven Innovation Programme at the University of Edinburgh. acl_natbib § HYPERPARAMETERS FOR THE DOMAIN-ADAPTIVE PRETRAINING §.§ Fixed Model Hyperparameters §.§ PEFT Hyperparameters Optimisation Search Space Specifically for Prompt Tuning, we use a common prompt initialisation text "Finish this clinical note:". § HYPERPARAMETERS FOR THE DOWNSTREAM FINE-TUNING §.§ Fixed Model Hyperparameters §.§ PEFT Hyperparameters Optimisation Search Space § TRAINING CONFIGURATIONS We use HuggingFace's Transformers <cit.> and PEFT <cit.> libraries for the experiments. All LLaMA-based models are trained on one NVIDIA A100-80GB GPU, while the baseline models are trained on a single NVIDIA GeForce GTX 1080 Ti-16GB GPU.
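As a rough illustration of how the two-step framework maps onto the HuggingFace Transformers and PEFT libraries mentioned above, the sketch below first attaches a LoRA adapter for clinical domain-adaptive pretraining and then attaches a second adapter for a downstream classification task. The model identifier, ranks, target modules, and the choice to fold the clinical adapter into the backbone before adding the downstream adapter (i.e., the "frozen" variant) are assumptions for illustration and may differ from the exact implementation described in this paper.

```python
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model, PeftModel

BASE = "huggyllama/llama-7b"  # placeholder model identifier

# Step 1: domain-adaptive pretraining -> Clinical LLaMA-LoRA.
base_lm = AutoModelForCausalLM.from_pretrained(BASE)
clinical_cfg = LoraConfig(task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=32,
                          lora_dropout=0.1, target_modules=["q_proj", "v_proj"])
clinical_lm = get_peft_model(base_lm, clinical_cfg)
clinical_lm.print_trainable_parameters()
# ... train with the causal language-modelling objective on clinical notes,
#     then save only the adapter weights:
clinical_lm.save_pretrained("clinical-llama-lora")

# Step 2: downstream fine-tuning -> Downstream LLaMA-LoRA for a classification task.
clf = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)
clf = PeftModel.from_pretrained(clf, "clinical-llama-lora")
clf = clf.merge_and_unload()          # fold the clinical adapter into the backbone
downstream_cfg = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
                            lora_dropout=0.1, target_modules=["q_proj", "v_proj"])
clf = get_peft_model(clf, downstream_cfg)
clf.print_trainable_parameters()
# ... fine-tune on a clinical outcome task (e.g., mortality prediction) with
#     cross-entropy loss; only the downstream adapter and classifier head train.
```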
http://arxiv.org/abs/2307.04787v1
20230704173150
Collaborative Score Distillation for Consistent Visual Synthesis
[ "Subin Kim", "Kyungmin Lee", "June Suk Choi", "Jongheon Jeong", "Kihyuk Sohn", "Jinwoo Shin" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Generative priors of large-scale text-to-image diffusion models enable a wide range of new generation and editing applications on diverse visual modalities. However, when adapting these priors to complex visual modalities, often represented as multiple images (e.g., video), achieving consistency across a set of images is challenging. In this paper, we address this challenge with a novel method, Collaborative Score Distillation (CSD). CSD is based on Stein Variational Gradient Descent (SVGD). Specifically, we propose to consider multiple samples as “particles” in the SVGD update and combine their score functions to distill generative priors over a set of images synchronously. Thus, CSD facilitates seamless integration of information across 2D images, leading to a consistent visual synthesis across multiple samples. We show the effectiveness of CSD in a variety of tasks, encompassing the visual editing of panorama images, videos, and 3D scenes. Our results underline the competency of CSD as a versatile method for enhancing inter-sample consistency, thereby broadening the applicability of text-to-image diffusion models.[Visualizations are available at the website <https://subin-kim-cv.github.io/CSD>.] § INTRODUCTION Text-to-image diffusion models <cit.> have been scaled up by using billions of image-text pairs <cit.> and efficient architectures <cit.>, showing impressive capability in synthesizing high-quality, realistic, and diverse images from the text given as an input. Furthermore, they have branched into various applications, such as image-to-image translation <cit.>, controllable generation <cit.>, or personalization <cit.>. One of the latest applications in this regard is to translate this capability into other complex modalities beyond 2D images <cit.> without modifying the diffusion models using modality-specific training data. This paper focuses on the problem of adapting the knowledge of pre-trained text-to-image diffusion models to more complex, high-dimensional visual generative tasks beyond 2D images, without modifying the diffusion models using modality-specific training data. We start from the intuition that many complex visual data, e.g., videos and 3D scenes, are represented as a set of images constrained by modality-specific consistency. For example, a video is a set of frames requiring temporal consistency, and a 3D scene is a set of multi-view frames with view consistency. Unfortunately, image diffusion models do not have a built-in capability to ensure consistency between a set of images for synthesis or editing, because their generative sampling process does not take such consistency into account when the image diffusion model is used as is. As such, when applying image diffusion models to these complex data without consistency in consideration, the result is a highly incoherent output, as in Figure <ref> (Patch-wise Crop), where one can easily identify where images are stitched. Such behaviors are also reported in video editing; thus, recent works <cit.> propose to handle video-specific temporal consistency when using the image diffusion model. Here, we turn our attention to an alternative approach, Score Distillation Sampling (SDS) <cit.>, which enables the optimization of arbitrary differentiable operators by leveraging the rich generative prior of text-to-image diffusion models.
SDS poses generative sampling as an optimization problem by distilling the learned diffusion density scores. While <cit.> has shown the effectiveness of SDS in generating 3D objects from the text by resorting on Neural Radience Fields <cit.> priors which inherently suppose coherent geometry in 3D space by density modeling, it has not been studied for consistent visual synthesis of other modalities. In this paper, we propose Collaborative Score Distillation (CSD), a simple yet effective method that extends the singular of the text-to-image diffusion model for consistent visual synthesis. The crux of our method is two-fold: first, we establish a generalization of SDS by using Stein variational gradient descent (SVGD), where multiple samples share their knowledge distilled from diffusion models to accomplish inter-sample consistency. Second, we present , an effective method for consistent visual editing by leveraging CSD with Instruct-Pix2Pix <cit.>, a recently proposed instruction-guided image diffusion model (See Figure <ref>). We demonstrate the versatility of our method in various applications such as panorama image editing, video editing, and reconstructed 3D scene editing. In editing a panorama image, we show that obtains spatially consistent image editing by optimizing multiple patches of an image. Also, compared to other methods, our approach achieves a better trade-off between source-target image consistency and instruction fidelity. In video editing experiments, obtains temporal consistency by taking multiple frames into optimization, resulting in temporal frame-consistent video editing. Furthermore, we apply to 3D scene editing and generation, by encouraging consistency among multiple views. § PRELIMINARIES §.§ Diffusion models Generative modeling with diffusion models consists of a forward process q that gradually adds Gaussian noise to the input 𝐱_0∼ p_data(𝐱), and a reverse process p which gradually denoises from the Gaussian noise 𝐱_T∼𝒩(0,𝐈). Formally, the forward process q(𝐱_t | 𝐱_0) at timestep t is given by q(𝐱_t|𝐱_0) =𝒩(𝐱_t ; α_t 𝐱_0, σ_t^2𝐈), where σ_t and α_t^2=1-σ_t^2 are pre-defined constants designed for effective modeling <cit.>. Given enough timesteps, reverse process p also becomes a Gaussian and the transitions are given by posterior q with optimal MSE denoiser <cit.>, i.e., p_ϕ(𝐱_t-1|𝐱_t) = 𝒩(𝐱_t-1 ; 𝐱_t -𝐱̂_ϕ(𝐱_t;t), σ_t^2 𝐈), where 𝐱̂_ϕ(𝐱_t;t) is a learned optimal MSE denoiser. <cit.> proposed to train an U-Net <cit.> autoencoder ϵ_ϕ(𝐱_t;t) by minimizing following objective: ℒ_Diff(ϕ;𝐱) = 𝔼_t∼𝒰(0,1), ϵ∼𝒩(0,𝐈)[w(t) ϵ_ϕ(𝐱_t;t) -ϵ_2^2], 𝐱_t = α_t 𝐱_0 + α_tϵ where w(t) is a weighting function for each timestep t. Text-to-image diffusion models <cit.> are trained by Eq. (<ref>) with ϵ_ϕ(𝐱_t; y,t) that estimates the noise conditioned on the text prompt y. At inference, those methods rely on Classifier-free Guidance (CFG) <cit.>, which allows higher quality sample generation by introducing additional parameter ω_y≥ 1 as follows: ϵ_ϕ^ω (𝐱_t;y,t) = ϵ_ϕ(𝐱_t; t) +ω_y (ϵ_ϕ(𝐱_t;y,t)-ϵ_ϕ(𝐱_t; t)) By setting the appropriate guidance scale ω_y>0, one can improve fidelity to the text prompt at the cost of diversity. Throughout the paper, we refer p_ϕ^ω_y(𝐱_t;y,t) a conditional distribution of a text y. Instruction-based image editing by Instruct-Pix2Pix. Recently, many works have demonstrated the capability of diffusion models in editing or stylizing images <cit.>. 
Among them, <cit.> proposed Instruct-Pix2Pix, where they finetuned Stable Diffusion <cit.> models with the source image, text instruction, edited image (edited by Prompt-to-Prompt <cit.>) triplet to enable instruction-based editing of an image. Given source image 𝐱̃ and instruction y, the noise estimate at time t is given as ϵ_ϕ^ω_s,ω_y(𝐱_t;𝐱̃,y,t) = ϵ_ϕ(𝐱_t;t) + ω_s( ϵ_ϕ(𝐱_t;𝐱̃,t) - ϵ_ϕ(𝐱_t;t)) +ω_y(ϵ_ϕ(𝐱_t;𝐱̃,y,t) - ϵ_ϕ(𝐱_t;𝐱̃,t)), where ω_y is CFG parameter for text as in Eq. (<ref>) and ω_s is an additional CFG parameter that controls the fidelity to the source image 𝐱̃. §.§ Score distillation sampling <cit.> proposed Score Distillation Sampling (SDS), an alternative sample generation method by distilling the rich knowledge of text-to-image diffusion models. SDS allows optimization of any differentiable image generator, e.g., Neural Radiance Fields <cit.> or the image space itself. Formally, let 𝐱=g(θ) be an image rendered by a differentiable generator g with parameter θ, then SDS minimizes density distillation loss <cit.> which is KL divergence between the posterior of 𝐱 = g(θ) and the text-conditional density p_ϕ^ω: ℒ_Distill(θ; 𝐱=g(θ)) = 𝔼_t,ϵ[ α_t/σ_t D_KL( q(𝐱_t|𝐱=g(θ)) p_ϕ^ω(𝐱_t; y,t)) ]. For an efficient implementation, SDS updates the parameter θ by randomly choosing timesteps t∼𝒰(t_min, t_max) and forward 𝐱=g(θ) with noise ϵ∼𝒩(0,𝐈) to compute the gradient as follows: ∇_θℒ_SDS(θ; 𝐱=g(θ)) = 𝔼_t,ϵ[ w(t) (ϵ_ϕ^ω(𝐱_t; y,t) - ϵ)∂𝐱/∂θ]. Remark that the U-Net Jacobian ∂ϵ_ϕ^ω(𝐳_t;y,t)/∂𝐳_t is omitted as it is computationally expensive to compute, and degrades performance when conditioned on small noise levels. The range of timesteps t_min and t_max are chosen to sample from not too small or large noise levels, and the guidance scales are chosen to be larger than those used for image generation. §.§ Stein variational gradient descent The original motivation of Stein variational gradient descent (SVGD) <cit.> is to solve a variational inference problem, where the goal is to approximate a target distribution from a simpler distribution by minimizing KL divergence. Formally, suppose p is a target distribution with a known score function ∇_𝐱log p(𝐱) that we aim to approximate, and q(𝐱) is a known source distribution. <cit.> showed that the steepest descent of KL divergence between q and p is given as follows: 𝔼_q(𝐱)[𝐟(𝐱)^⊤∇_𝐱log p(𝐱) + Tr(∇_𝐱𝐟(𝐱))], where 𝐟:ℝ^D→ℝ^D is any smooth vector function that satisfies lim_𝐱→∞p(𝐱)𝐟(𝐱) = 0. Remark that Eq. (<ref>) becomes zero if we replace q(𝐱) with p(𝐱) in the expectation term, which is known as Stein's identity <cit.>. Here, the choice of the critic 𝐟 is crucial in its convergence and computational tractability. To that end, <cit.> proposed to constrain 𝐟 in the Reproducing Kernel Hilbert Space (RKHS) which yields a closed-form solution. Specifically, given a positive definite kernel k:ℝ^D×ℝ^D→ℝ^+, Stein variational gradient descent provides the greedy directions as follows: 𝐱←𝐱 -ηΔ𝐱, Δ𝐱 = 𝔼_q(𝐱^')[k(𝐱, 𝐱^') ∇_𝐱^'log p(𝐱^') + ∇_𝐱^'k(𝐱,𝐱^')], with small step size η>0. The SVGD update in Eq.  (<ref>) consists of two terms that play different roles: the first term moves the particles towards the high-density region of target density p(𝐱), where the direction is smoothed by kernels of other particles. The second term acts as a repulsive force that prevents the mode collapse of particles. One can choose different kernel functions, while we resort to standard Radial Basis Function (RBF) kernel k(𝐱,𝐱^') = exp(-1/h𝐱-𝐱^'_2^2) with bandwidth h>0. 
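For concreteness, the SVGD update just described can be summarized in a few lines of NumPy. The sketch below is illustrative rather than part of our pipeline: the target score function, the bandwidth h, and the step size η are placeholders, and the update is written in the conventional form in which particles ascend the target log-density.

```python
import numpy as np

def svgd_step(X, score_fn, h=1.0, eta=1e-2):
    """One SVGD update for N particles X of shape (N, D).

    score_fn is a placeholder returning the target scores grad log p(x)
    for every particle; h is the RBF bandwidth and eta the step size.
    """
    diff = X[:, None, :] - X[None, :, :]            # diff[i, j] = x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / h)     # RBF kernel k(x_i, x_j)
    drive = K @ score_fn(X)                         # kernel-smoothed scores
    repulse = (2.0 / h) * np.einsum('ijd,ij->id', diff, K)  # sum_j grad_{x_j} k
    phi = (drive + repulse) / X.shape[0]
    return X + eta * phi                            # move toward high density

# toy usage: particles relax toward a standard normal, whose score is -x
particles = np.random.randn(16, 2) * 3.0
for _ in range(200):
    particles = svgd_step(particles, score_fn=lambda X: -X, h=0.5)
```

The two terms of the update appear explicitly as the kernel-smoothed scores (attraction toward the target density) and the kernel gradients (repulsion between particles).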
§ METHOD In this section, we introduce Collaborative Score Distillation () for consistent synthesis and editing of multiple samples. We first derive a collaborative score distillation method using Stein variational gradient descent (Section <ref>) and propose an effective image editing method using , i.e., , that leads to coherent editing of multiple images with instruction (Section <ref>). Lastly, we present various applications of in editing panorama images, videos, and 3D scenes (Section <ref>). §.§ Collaborative score distillation Suppose a set of parameters {θ_i}_i=1^N that generates images 𝐱^(i) = g(θ_i). Similar to SDS, our goal is to update each θ_i by distilling the smoothed densities from the diffusion model by minimizing KL divergence in Eq. (<ref>). On the contrary, solves Eq. (<ref>) using SVGD demonstrated in Section <ref> so that each θ_i can be updated in sync with updates of other parameters in the set {θ_i}_i=1^N. At each update, samples t∼𝒰(t_min,t_max) and ϵ∼𝒩(0, 𝐈), and update each θ_i as follows: ∇_θ_iℒ_CSD(θ_i) = w(t)/N∑_j=1^N (k(𝐱_t^(j), 𝐱_t^(i))(ϵ_ϕ^ω(𝐱_t^(j);y,t) - ϵ ) + ∇_𝐱_t^(j) k(𝐱_t^(j), 𝐱_t^(i)))∂𝐱^(i)/∂θ_i, for each i=1,2,…, N. We refer to Appendix <ref> for full derivation. Note is equivalent to SDS in Eq. (<ref>) when N=1, showing that is a generalization of SDS to multiple samples. As the pairwise kernel values are multiplied by the noise prediction term, each parameter update on θ_i is affected by other parameters, i.e., the scores are mixed with importance weights according to the affinity among samples. The more similar samples tend to exchange more score updates, while different samples tend to interchange the score information less. The gradient of the kernels acts as a repulsive force that prevents the mode collapse of samples. Moreover, we note that Eq. (<ref>) does not make any assumption on the relation between θ_i's or their order besides them being a set of images to be synthesized coherently with each other. As such, is also applicable to arbitrary image generators, as well as text-to-3D synthesis in DreamFusion <cit.>, which we compare in Section <ref>. §.§ Text-guided editing by collaborative score distillation In this section, we introduce a text-guided visual editing method using Collaborative Score Distillation (). Given source images 𝐱̃^(i) = g(θ̃_i) with parameters θ̃_i, we optimize new target parameters {θ_i}_i=1^N with 𝐱^(i) = g(θ_i) such that 1) each 𝐱^(i) follows the instruction prompt, 2) preserves the semantics of source images as much as possible, and 3) the obtained images are consistent with each other. To accomplish these, we update each parameter θ_i, initialized with θ̃_i, using with noise estimate ϵ_ϕ^ω_y,ω_s of Instruct-Pix2Pix. However, this approach often results in blurred outputs, leading to the loss of details of the source image (see Figure <ref>). This is because the score distillation term subtracts random noise ϵ, which perturbs the undesirable details of source images. We handle this issue by adjusting the noise prediction term that enhances the consistency between source and target images. Subtracting a random noise ϵ in Eq. (<ref>) when computing the gradient is a crucial factor, which helps optimization by reducing the variance of a gradient. Therefore, we amend the optimization by changing the random noise into a better baseline function. 
Since our goal is to edit an image with only minimal information given text instructions, we set the baseline by the image-conditional noise estimate of the Instruct-Pix2Pix model without giving text instructions on the source image. To be specific, our is given as follows: ∇_θ_iℒ_CSD-Edit(θ_i) = w(t)/N∑_j=1^N (k(𝐱_t^(j), 𝐱_t^(i)) Δℰ_t^(i) + ∇_𝐱_t^(j) k(𝐱_t^(j), 𝐱_t^(i)))∂𝐱^(i)/∂θ_i, Δℰ_t^(i) = ϵ_ϕ^ω_y,ω_s(𝐱_t^(i);𝐱̃, y,t) - ϵ_ϕ^ω_s(𝐱̃_t^(i);𝐱̃, t). In Section <ref>, we validate our findings on the effect of baseline noise on image editing performance. We notice that presents an alternative way to utilize Instruct-Pix2Pix in image-editing without any finetuning of diffusion models, by posing an optimization problem. §.§ for various complex visual domains Panorama image editing. Diffusion models are usually trained on a fixed resolution (e.g., 512×512 for Stable Diffusion <cit.>), thus when editing a panorama image (i.e., an image with a large aspect ratio), the editing quality significantly degrades. Otherwise, one can crop an image into smaller patches and apply image editing on each patch. However this results in spatially inconsistent images (see Figure <ref>, Patch-wise Crop, Appendix <ref>). To that end, we propose to apply on patches to obtain spatially consistent editing of an image, while preserving the semantics of source image. Following <cit.>, we sample patches of size 512×512 that overlap using small stride and apply on the latent space of Stable Diffusion <cit.>. Since we allow overlapping, some pixels might be updated more frequently. Thus, we normalize the gradient of each pixel by counting the appearance. Video editing. Editing a video with an instruction should satisfy the following: 1) temporal consistency between frames such that the degree of changes compared to the source video should be consistent across frames, 2) ensuring that desired edits in each edited frame are in line with the given prompts while preserving the original structure of source video, and 3) maintaining the sample quality in each frame after editing. To meet these requirements, we randomly sample a batch of frames and update them with to achieve temporal consistency between frames. 3D scene editing. We consider editing a 3D scene reconstructed by a Neural Radiance Field (NeRF) <cit.>, which represents volumetric 3D scenes using 2D images. To edit reconstructed 3D NeRF scenes, it is straightforward to update the training views with edited views and finetune the NeRF with edited views. Here, the multi-view consistency between edited views should be considered since inconsistencies between edits across multiple viewpoints lead to blurry and undesirable artifacts, hindering the optimization of NeRF. To mitigate this, <cit.> proposed Instruct-NeRF2NeRF, which performs editing on a subset of training views and updates them sequentially at training iteration with intervals. However, image-wise editing results in inconsistencies between views, thus they rely on the ability of NeRF in achieving multi-view consistency. Contrary to Instruct-NeRF2NeRF, we update the dataset with multiple consistent views through , which serves as better training resources for NeRF, leading to less artifacts and better preservation of source 3D scene. 
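To summarize the method, the following PyTorch sketch computes the CSD-Edit update direction for a batch of flattened latents. It is schematic: eps_ip2p denotes an assumed wrapper around the Instruct-Pix2Pix noise predictor with the guidance weights ω_y and ω_s already folded in (it is not an actual library call), the timestep weighting w(t) is omitted, and the per-sample residuals are mixed through the kernel in the same way as in the CSD update of the previous subsection.

```python
import torch

def csd_edit_direction(z, z_src, eps_ip2p, alpha_t, sigma_t, t, instruction, h=1.0):
    """Schematic CSD-Edit update direction for N flattened latents z: (N, D).

    eps_ip2p(z_t, t, image, text) is an *assumed* wrapper around the
    Instruct-Pix2Pix noise predictor with the guidance weights w_y, w_s
    already applied; it is a placeholder, not an actual library call.
    """
    noise = torch.randn_like(z)
    z_t = alpha_t * z + sigma_t * noise                 # noised target latents
    z_src_t = alpha_t * z_src + sigma_t * noise         # noised source latents

    # CSD-Edit residual: instruction-conditioned estimate minus the
    # image-conditioned baseline evaluated on the noised source latents
    delta = (eps_ip2p(z_t, t, image=z_src, text=instruction)
             - eps_ip2p(z_src_t, t, image=z_src, text=None))

    # RBF kernel between the noised latents and its gradient
    diff = z_t[:, None, :] - z_t[None, :, :]            # diff[i, j] = z_i - z_j
    K = torch.exp(-(diff ** 2).sum(-1) / h)             # k(z_i, z_j), (N, N)
    drive = K @ delta                                   # kernel-mixed residuals
    repulse = (2.0 / h) * torch.einsum('ijd,ij->id', diff, K)

    # treated as the gradient w.r.t. the latents; backpropagating through
    # dz/dtheta_i and taking a descent step completes the update (w(t) omitted)
    return (drive + repulse) / z.shape[0]
```

For the panorama, video, and 3D applications described above, the batch simply collects overlapping patches, sampled frames, or rendered training views, respectively.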
§ EXPERIMENTS §.§ Text-guided panorama image editing For the panorama image-to-image translation task, we compare with different versions of Instruct-Pix2Pix: one is which using naive downsizing to 512×512 and performing Instruct-Pix2Pix, and another is updating Instruct-Pix2Pix on the patches as in MultiDiffusion <cit.> (Instruct-Pix2Pix + MultiDiffusion). For comparison, we collect a set of panorama images (i.e., which aspect ratio is higher than 3), and edit each image to various artistic styles and different guidance scales ω_y. For evaluation, we use pre-trained CLIP <cit.> to measure two different metrics: 1) consistency between source and target images by computing similarity between two image embeddings, and 2) CLIP directional similarity <cit.> which measures how the change in text agrees with the change in the images. The experimental details are in Appendix <ref>. In Figure <ref>, we plot the CLIP scores of different image editing methods with different guidance scales. We notice that provides the best trade-off between the consistency between source and target images and fidelity to the instruction. Figure <ref> provides a qualitative comparison between panorama image editing methods. Remark that Instruct-Pix2Pix + MultiDiffusion is able to generate spatially consistent images, however, the edited images show inferior fidelity to the text instruction even when using a large guidance scale. Additional qualitative results are in Appendix <ref>. §.§ Text-guided video editing For the video editing experiments, we primarily compare with existing zero-shot video editing schemes that employ text-to-image diffusion models such as FateZero <cit.>, and Pix2Video <cit.>. To emphasize the effectiveness of against learning-based schemes, we also compare it with Gen-1 <cit.>, a state-of-the-art video editing method trained on a large-scale video dataset. For quantitative evaluation, we report CLIP image-text directional similarity as in Section <ref> to measure alignment between changes in texts and images. Also, we measure CLIP image consistency and LPIPS <cit.> between consecutive frames to evaluate temporal consistency. We utilize video sequences from the popular DAVIS <cit.> dataset at a resolution of 1920× 1080. Please refer to Appendix <ref> for a detailed description of the baseline methods and experimental setup. Table <ref> summarize quantitative comparison between and the baselines. We notice that consistently outperforms the existing zero-shot video editing schemes in terms of both temporal consistency and fidelity to given text prompts. Moreover, Figure <ref> qualitatively demonstrate the superiority of over the baselines on video-stylization and object-aware editing tasks. Impressively, shows comparable editing performance to Gen-1 even without training on a large-scale video dataset and any architectural modification to the diffusion model. Additional qualitative results are in Appendix <ref>. §.§ Text-guided 3D scene editing For the text-guided 3D scene editing experiments, we mainly compare our approach with Instuct-NeRF2NeRF (IN2N) <cit.>. For a fair comparison, we exactly follow the experimental setup which they used, and faithfully find the hyperparameters to reproduce their results. For evaluation, we render images at the novel views (i.e., views not seen during training), and report CLIP image similarity and LPIPS between consecutive frames in rendered videos to measure multi-view consistency, as well as CLIP image-text similarity to measure fidelity to the instruction. 
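For reference, the two CLIP-based metrics used throughout this section can be computed as in the sketch below; embed_image and embed_text stand for any pre-trained CLIP encoders (the appendix specifies OpenCLIP ViT-bigG-14), and the function names and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def clip_scores(img_src, img_tgt, txt_src, txt_tgt, embed_image, embed_text):
    """CLIP-based consistency and directional similarity (schematic).

    embed_image / embed_text are placeholders for a pre-trained CLIP
    encoder returning embeddings of shape (B, D).
    """
    e_is = F.normalize(embed_image(img_src), dim=-1)
    e_it = F.normalize(embed_image(img_tgt), dim=-1)
    e_ts = F.normalize(embed_text(txt_src), dim=-1)
    e_tt = F.normalize(embed_text(txt_tgt), dim=-1)

    # (1) source-target image consistency: cosine similarity of image embeddings
    img_sim = (e_is * e_it).sum(-1)

    # (2) directional similarity: does the change in image space follow
    #     the change described by the source/target text pair?
    d_img = F.normalize(e_it - e_is, dim=-1)
    d_txt = F.normalize(e_tt - e_ts, dim=-1)
    dir_sim = (d_img * d_txt).sum(-1)
    return img_sim.mean(), dir_sim.mean()
```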
Detailed explanations for each dataset sequence and training details can be found in Appendix <ref>. Figure <ref> and Table <ref> summarize the comparison between and IN2N. We notice that enables a wide-range control of 3D NeRF scenes, such as delicate attribute manipulation (e.g., facial expression alterations) and scene-stylization (e.g., conversion to the animation style). Especially, we notice two advantages of compared to IN2N. First, presents high-quality details to the edited 3D scene by providing multi-view consistent training views during NeRF optimization. In Figure <ref>, one can observe that captures sharp details of anime character, while IN2N results in blurry face. Second, is better at preserving the semantics of source 3D scenes, e.g., backgrounds or colors. For instance in Figure <ref>, we notice that allows subtle changes in facial expressions without changing the color of the background or adding a beard to the face. §.§ Ablation study for text-to-3D generation. We explore the effectiveness of in text-to-3D generation tasks following DreamFusion <cit.>. We train a coordinate MLP-based NeRF architecture from scratch using text-to-image diffusion models. Since the pixel-space diffusion model that DreamFusion used <cit.> is not publicly available, we used an open-source implementation of pixel-space text-to-image diffusion model.[<https://github.com/deep-floyd/IF>] When using for text-to-3D generation, we empirically observe that using LPIPS <cit.> as a distance for RBF kernel worked well. We refer to Appendix <ref> for details. Given a set of text prompts, we run both DreamFusion and DreamFusion with with a fixed seed. In Figure <ref>, we visualize generated examples. Remark that DreamFusion and DreamFusion + tend to generate similar objects, but we observe that often adds better details that complement the poor quality of one that made by DreamFusion. For instance, in Figure <ref>, removes blurry artifacts in the synthesized 3D NeRF scene, which is often caused by inconsistent view distillation. Also in Figure <ref>, we verify that the generates more coherent images when conditioned on view-dependent prompts which were used in DreamFusion. We refer to Appendix <ref> for more examples of text-to-3D generation. Ablation on the components of . To demonstrate the effect of our method, we present an ablation study on a video editing experiment. To verify the role of communication between samples using SVGD, we compare the editing results with and without SVGD. Also, to verify the role of baseline noise in , we provide result when using random noise as baseline. As shown in Figure <ref>, consistently edits a source video adding a red cap on a man's head when given the instruction “give him a cap.” However, without SVGD, the edits between frames are inconsistent, for example, blue caps or red caps appear both on the edited frames. In addition, if we set the baseline noise as the random noise injected into the source and target image, each frame gets blurry and loses the original structures, e.g., blurred legs and backgrounds. § RELATED WORK Following remarkable success of text-to-image diffusion models <cit.>, numerous works have attempted to exploit rich knowledge of text-to-image diffusion models for various visual editing tasks including images <cit.>, videos <cit.>, 3D scenes <cit.>, etc. 
However, extending existing image editing approaches to more complex visual modalities often faces a new challenge; consistency between edits, e.g., spatial consistency in high-resolution images, temporal consistency in videos, and multi-view consistency in 3D scenes. While prior works primarily focus on designing task-specific methods <cit.> or model fine-tuning for complex modalities <cit.>, we present a modality-agnostic novel method for editing, effectively capturing consistency between samples. The most related to our work is DreamFusion <cit.>, which introduced Score Distillation Sampling (SDS) for creation of 3D assets, leveraging the power of text-to-image diffusion models. Despite the flexible merit of SDS to enable the optimization of arbitrary differentiable operators, most works mainly focus on applying SDS to enhance the synthesis quality of 3D scenes by introducing 3D specific frameworks <cit.>. Although there exists some work to apply SDS for visual domains other than 3D scenes, they have limited their scope to image editing <cit.>, or image generation <cit.>. Here, we clarify that our main focus is not to improve the performance of SDS for a specific task, but rather to shift the focus to generalizing it from a new perspective in a principled way. To the best of our knowledge, we are the first to center our work on the generalization of SDS and introduce a novel method that simply but effectively adapts text-to-image diffusion models to diverse high-dimensional visual syntheses beyond a single 2D image with fixed resolution. § CONCLUSION In this paper, we propose Collaborative Score Distillation (CSD) for consistent visual synthesis and manipulation. CSD is built upon Stein variational gradient descent, where multiple samples share their knowledge distilled from text-to-image diffusion models during the update. Furthermore, we propose CSD-Edit that gives us consistent editing of images by distilling minimal, yet sufficient information from instruction-guided diffusion models. We demonstrate the effectiveness of our method in text-guided translation of diverse visual contents, such as in high-resolution images, videos, and real 3D scenes, outperforming previous methods both quantitatively and qualitatively. Limitations. Since we use pre-trained text-to-image diffusion models, there are some cases where the results are imperfect due to the inherent inability of diffusion models in understanding language. Also, our method might be prone to the underlying societal biases in diffusion models. See Appendix <ref>. Societal impact. Our method enables consistent editing of visual media. On the other hand, our method is not free from the known issues that text-to-image models carry when used by malicious users. We expect future research on the detection of generated visual content. See Appendix <ref>. unsrtnat Appendix Website: <https://subin-kim-cv.github.io/CSD> § TECHNICAL DETAILS In this section, we provide detailed explanations on the proposed methods, and . derivation. Consider a set of parameters {θ_i}_i=1^N which generates images 𝐱^(i)=g(θ_i). For each timestep t∼𝒰(t_min, t_max), we aim at minimizing the following KL divergence D_KL( q(𝐱_t^(i) | 𝐱^(i)=g(θ_i)) p_ϕ(𝐱_t;y,t) ) for each i=1,2,…, N via SVGD using Eq. (<ref>). To this end, we approximate the score function, (i.e., gradient of log-density) by the noise predictor from diffusion model as follows: ∇_𝐱_t^(i)log p_ϕ(𝐱_t^(i) ;y,t) ≈ -ϵ_ϕ(𝐱_t^(i);y,t)/σ_t. 
Then, the gradient of score function with respect to parameter θ_i is given by ∇_θ_ilog p_ϕ(𝐱_t^(i);y,t) = ∇_𝐱_t^(i)log p_ϕ(𝐱_t^(i) ;y,t) ∂𝐱_t^(i)/∂θ_i≈ -α_t/σ_tϵ_ϕ(𝐱_t^(i);y,t) ∂𝐱^(i)/∂θ, for each i=1,… N. Finally, to derive , we plug Eq. (<ref>) to Eq. (<ref>) to attain Eq. (<ref>). Also, we subtract the noise ϵ, which helps reducing the variance of gradient for better optimization. Following DreamFusion <cit.>, we do not compute the Jacobian of U-Net. At high level, takes the gradient update on each 𝐱^(i) using SVGD and update θ_i by simple chain rule without computing the Jacobian. This formulation makes as a straightforward generalization to SDS for multiple samples and leads to effective gradient for optimizing with consistency among batch of samples. CSD-Edit derivation. As mentioned above, we subtract the random noise to reduce the variance of gradient estimation. This is in a similar manner to the variance reduction in policy gradient <cit.>, where having proper baseline function results in faster and more stable optimization. Using this analogy, our intuition is build upon that setting better baseline function can ameliorate the optimization of . Thus, in image-editing via , we propose to use image-conditional noise estimate as a baseline function. This allows to optimize the latent driven by only the influence of instruction prompts. We notice that similar observations were proposed in Delta Denoising Score (DDS) <cit.>, where they introduced an image-to-image translation method that is based on SDS, and the difference of the noise estimate from target prompt and that from source prompt are used. Our can be combined with DDS by changing the noise difference term as follows: Δℰ_t = ϵ_ϕ(𝐱_t; y_tgt,t) - ϵ_ϕ(𝐱̃_t; y_src, t), where 𝐱 and 𝐱̃ are target and source images, y_tgt and y_src are target and source prompts. However, we found that with InstructPix2Pix is more amenable in editing real images as it does not require source prompt. Finally, we remark that can be applied to various text-to-image diffusion models such as ControlNet <cit.>, which we leave it for the future work. § ADDITIONAL EXPERIMENTS §.§ Compositional editing Recent works have shown the ability of text-to-image diffusion models in compositional generation of images handling multiple prompts <cit.>. Here, we show that can extend this ability to compositional editing, even at panorama-scale images which require a particular ability to maintain far-range consistency. Specifically, we demonstrate that one can edit a panorama image to follow different prompts on different regions while keeping the overall context uncorrupted. Given multiple textual prompts {y_k}_k=1^K, the compositional noise estimate is given by ϵ_ϕ(𝐱_t;{y_k}_k=1^K,t) = ∑_k=1^K α_kϵ_ϕ^ω(𝐱_t;y_k,t), where α_k are hyperparameters that regularize the effect of each prompt. When applying compositional generation to the panorama image editing, the challenge lies in obtaining image that is smooth and natural within the region where the different prompts are applied. To that end, for each patch of an image, we set α_k to be the area of the overlapping region between the patch and region where prompt y_k is applied. Also, we normalize to assure ∑_kα_k=1. In Figure <ref>, we illustrate some examples on compositional editing of a panorama image. For instance, given an image, one can change into different weathers, different seasons, or different painting styles without leaving artifacts that hinder the spatial consistency of an image. 
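A minimal sketch of this blending step is given below: for a single patch, the per-prompt CFG noise estimates are combined with weights α_k proportional to the overlap area between the patch and the region assigned to prompt y_k. Tensor names and shapes are illustrative, and the overlap areas are assumed to be precomputed.

```python
import torch

def composite_eps(eps_per_prompt, overlap_area):
    """Blend per-prompt noise estimates for one patch (schematic).

    eps_per_prompt: (K, C, H, W) CFG noise estimates, one per prompt y_k.
    overlap_area:   (K,) area of overlap between this patch and the region
                    assigned to prompt y_k (assumed precomputed).
    """
    alpha = overlap_area / overlap_area.sum()        # normalise so sum_k alpha_k = 1
    return torch.einsum('k,kchw->chw', alpha, eps_per_prompt)
```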
§.§ Text-to-3D generation with As of Section <ref>, we present a detailed study on the effect of in text-to-3D generation, particularly focusing on the DreamFusion architecture <cit.>. We follow the most of experimental setups from those conducted by <cit.>. Our experiments in this section are based on Stable-DreamFusion <cit.>, a public re-implementation of DreamFusion, given that currently the official implementation of DreamFusion is not available on public. Setup. We use vanilla MLP based NeRF architecture <cit.> with 5 ResNet <cit.> blocks. Other regularizers such as shading, camera and light sampling are set as default in <cit.>. We use view-dependent prompting given the sampled azimuth angle and interpolate by the text embeddings. We use Adan <cit.> optimizer with learning rate warmup over 2000 steps from 10^-9 to 2×10^-3 followed by cosine decay down to 10^-6. We use batch size of 4 and optimize for 10000 steps in total, where most of the case sufficiently converged at 7000 to 8000 steps. For the base text-to-image diffusion model, we adopt DeepFloyd-IF-XL-v1.0 since we found it way better than the default choice of Stable Diffusion in a qualitative manner. While the original DreamFusion <cit.> used guidance scale of 100 for their experiments, we find that guidance scale of 20 works well for DeepFloyd. We selected 30 prompts used in DreamFusion gallery[<https://dreamfusion3d.github.io/gallery.html>] and compare their generation results via DreamFusion from the standard SDS and those from our proposed . We use one A100 (80GB) GPU for each experiment, and it takes ∼5 hours to conduct one experiment. For implementation, we use LPIPS <cit.> as a distance of RBF kernel. Note that LPIPS gives more computational cost than the usual ℓ_2-norm based RBF kernel. The LPIPS is computed between two rendered views of size 64×64. For the kernel bandwidth, we use h=med^2/log B, where med is a median of the pairwise LPIPS distance between the views, B is the batch size. For evaluation, we render the scene at the elevation at 30 degree and capture at every 30 degree of azimuth angle. Then we compute the CLIP image-text similarity between the rendered views and input prompts. We measure similarities for both textured views (RGB) and textureless depth views (Depth). We also report Frechet Inception Distance (FID) between the RGB images and ImageNet validation dataset to evaluate the quality and diversity of rendered images compared to natural images. Results. In Table <ref>, we report the evaluation results of on text-to-3D generation comparison to DreamFusion. Remark that presents better CLIP image-text similarities in both RGB and Depth views. Also, achieves lower FID score showing its better quality on generated samples. Since we used the same random seed in generating both and DreamFusion, the shapes and colors are similar. However, the results show that obtains finer details in its generations. In Figure <ref>, we qualitatively compare the baseline DreamFusion (SDS) and ours. We empirically observe three benefits of using over SDS. First, provides better quality compared to SDS. SDS often suffers from Janus problem, where multiple faces appear in a 3D object. We found that often resolves Janus problem by showing consistent information during training. See the first row of Figure <ref>. Second, can give us better fine-detailed quality. The inconsistent score distillation often gives us blurry artifact or undesirable features left in the 3D object. 
can handle this problem and results in higher-quality generation, e.g., Figure <ref> second row. Lastly, can be used for improving diversity. One problem of DreamFusion, as acclaimed by the authors, is that it lacks sample diversity. Thus, it often relies on changing random seeds, but it largely alters the output. On the other hand, we show that can obtain alternative sample with only small details changed, e.g., Figure <ref> third row. Even when SDS is successful, can be used in generating diverse sample. § ABLATION STUDY In addition to the qualitative examples shown in Section <ref>, we present an additional ablation study on (a) the effect of SVGD and (b) subtracting random noise in in panorama image editing experiments. Following the experimental setup in Section <ref>, we select 16 images and apply 5 different artistic stylization using , without SVGD, and without subtracting image-conditional noise estimate. Again, we measure the CLIP image similarity and CLIP directional similarity for the evaluation. In Figure <ref>, we plot the results of the ablation study. Remark that without SVGD radically changes the image due to the absence of consistency regularization. As illustrated in Figure <ref>, via subtracting random noise instead of image-conditional noise results in blurry outputs. Here, we also quantitatively show that it results in significant degrade in CLIP image similarity and CLIP directional similarity, losing the details of the source image. In Figure <ref>, we depict the qualitative results on our ablation study. § IMPLEMENTATION DETAILS Setup. For the experiments with , we use the publicly available pre-trained model of Instruct-Pix2Pix <cit.>[<https://github.com/timothybrooks/instruct-pix2pix>] by default. We perform optimization on the output space of Stable Diffusion <cit.> autoencoder. We use SGD optimizer with step learning rate decay, without adding weight decay. We set t_min=0.2 and t_max=0.5, where original SDS optimization for DreamFusion used t_min=0.2 and t_max=0.98. This is because we do not generally require a large scale of noise in editing. We use the guidance scale ω_y∈[3.0, 15.0] and image guidance scale ω_s∈[1.5,5.0]. We find that our approach is less sensitive to the choice of image guidance scale, yet a smaller image guidance scale is more sensitive to editing. All experiments are conducted on AMD EPYC 7V13 64-Core Processor and a single NVIDIA A100 80GB. Throughout the experiments, we use OpenCLIP <cit.> ViT-bigG-14 model for evaluation. §.§ Panorama image editing To edit a panorama image, we first encode into the Stable Diffusion latent space (i.e., downscale by 8), then use a stride size of 16 to obtain multiple patches. Then we select a B batch of patches to perform . Note that we perform and then normalize by the number of appearances as mentioned in Section <ref>. Note that our approach performs well even without using small batch size, e.g., for an image of resolution 1920×512, there are 12 patches and we use B=4. For experiments, we collect 32 panorama images and conduct 5 artistic stylizations: “turn into Van Gogh style painting”, “turn into Pablo Picasso style painting”, “turn into Andy Warhol style painting”, “turn into oriental style painting”, and “turn into Salvador Dali style painting”. We use learning rate of 2.0 and image guidance scale of 1.5, and vary the guidance scale from 3.0 to 10.0. §.§ Video editing We edit video sequences in DAVIS 2017 <cit.> by sampling 24 frames at the resolution of 1920×1080 from each sequence. 
Then, we resize all frames to 512×512 resolution and encode each frame with the Stable Diffusion autoencoder. We use learning rates in the range [0.25, 2] and optimize for [200, 500] iterations. §.§ 3D scene editing Following Instruct-NeRF2NeRF <cit.>, we first pretrain NeRF using the nerfacto model from NeRFStudio <cit.>, training it for 30,000 steps. Next, we re-initialize the optimizer and finetune the pre-trained NeRF model with edited train views. In contrast to Instruct-NeRF2NeRF, which edits one train view with Instruct-Pix2Pix after every 10 steps of update, we edit a batch of train views (batch size of 16) with CSD-Edit after every 2000 steps of update. The batch is randomly selected among the train views without replacement. § ADDITIONAL QUALITATIVE RESULTS § LIMITATIONS As our method leverages pre-trained Instruct-Pix2Pix, it inherits its limitations, such as undesirable changes to the image caused by model biases. Also, as described in <cit.>, Instruct-Pix2Pix is often unable to change viewpoints, isolate a specific object, or reorganize objects within the image. When editing a high-resolution image by dividing it into patches, artifacts often remain at the edges of the patches, especially near the corners of the image, because the corner patches are less likely to be sampled during the optimization. See Figure <ref> for examples. When editing a video, the edited video often shows a flickering effect due to the limited ability of the Stable Diffusion autoencoder to compress video. We believe that using CSD-Edit with video diffusion models trained on video datasets could overcome this problem. § BROADER IMPACT Our research introduces a comprehensive image editing framework that encompasses various modalities, including high-resolution images, videos, and 3D scenes. While it is important to acknowledge that our framework might be misused to create fake content, this concern is inherent to image editing techniques as a whole. Furthermore, our method relies on generative priors derived from large text-to-image diffusion models, which may inadvertently contain biases due to the auto-filtering process applied to the vast training dataset. These biases influence the score distillation process and may lead to undesired results. However, we propose that employing Collaborative Score Distillation (CSD) can assist us in identifying and understanding such undesirable biases. By leveraging the inter-sample relationships and aiming for consistent generation and manipulation of visual content, our method provides a valuable avenue for comprehending the interaction between samples and prompts. Exploring this aspect further represents an intriguing future direction.
http://arxiv.org/abs/2307.02751v1
20230706031940
DSARSR: Deep Stacked Auto-encoders Enhanced Robust Speaker Recognition
[ "Zhifeng Wang", "Chunyan Zeng", "Surong Duan", "Hongjie Ouyang", "Hongmin Xu" ]
cs.SD
[ "cs.SD", "cs.CR", "eess.AS" ]
DSARSR: Deep Stacked Auto-encoders Enhanced Robust Speaker Recognition Zhifeng Wang, Surong Duan, Hongjie Ouyang, and Hongmin Xu Department of Digital Media Technology, Central China Normal University, Wuhan, Hubei, China, 430079 Correspondence author: Chunyan Zeng Hubei Key Laboratory for High-efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan, Hubei, China, 430068, [email protected] * Zhifeng Wang, Chunyan Zeng, Surong Duan, Hongjie Ouyang, and Hongmin Xu August 1, 2023 =========================================================================== * Speaker recognition is a biometric modality which utilize speaker’s speech segments to recognize identity, determining whether the test speaker belongs to one of the enrolled speakers. In order to improve the robustness of i-vector framework on cross-channel conditions and explore the nova method for applying deep learning to speaker recognition, the Stacked Auto-encoders is applied to get the abstract extraction of the i-vector instead of applying PLDA. After pre-processing and feature extraction, the speaker and channel independent speeches are employed for UBM training. The UBM is then used to extract the i-vector of the enrollment and test speech. Unlike the traditional i-vector framework, which uses linear discriminant analysis (LDA) to reduce dimension and increase the discrimination between speaker subspaces, this research use stacked auto-encoders to reconstruct the i-vector with lower dimension and different classifiers can be chosen to achieve final classification. The experimental results show that the proposed method achieves better performance than the-state-of-the-art method. § INTRODUCTION Speaker recognition is a biometric modality which utilize speaker’s speech segments to recognize identity, determining whether the test speaker belongs to one of the enrolled speakers <cit.>. Speaker recognition can be regarded as an means of identification which is of great use for application like forensics, transaction authentication as well as law enforcement <cit.>. The development of speaker recognition technology has gone through the following four stages. The first stage was from the 1960s to the 1970s, when the research was focused on speech feature extraction and template matching techniques. In 1962, Bell Labs proposed a method which use the spectrogram to recognize speakers <cit.>. After that Atal et al. proposed Linear Predictive Cepstrum Coefficient (LPCC) <cit.>, which improved the accuracy of speaker recognition. The second stage was from the 1980s to the mid-1990s, when the statistical model began to be applied into speaker recognition. Mel-frequency cepstrum (MFCC) was represented by Davis <cit.>, which is a representation of the short-term power spectrum of a speech signal. Around 2000, the GMM-UBM framework for text-independent speaker recognition proposed by Reynolds reduced the GMM's dependence on the training set <cit.>. In 2006, Campbell proposed the Gaussian mixture super vector-support vector machine model (GSV-SVM) <cit.> based on GMM-UBM and support vector machine becoming the predominant technologies. After 2010, models like joint factor analysis (JFA) <cit.>, and i-vector<cit.> based on Gaussian super vector made the enormous promotion for speaker recognition system. 
Based on i-vector, Kenny was inspired by the conventional linear discriminant analysis (LDA) <cit.> for face recognition and proposed Probabilistic Linear Discriminant Analysis (PLDA) <cit.>, which is the probabilistic form of LDA. The fourth stage began in this century (2010) when deep learning began to be introduced into speaker recognition. For speaker verification task, deep models are employed both in feature extraction (such as Deep RBMs, Speaker-discriminant DNN) and training phase. For speaker identification task, bottleneck (BN) features were proposed for nonlinear feature transformation and dimensionality reduction <cit.> <cit.>. <cit.> presented DAE-based dereverberation for feature extraction and built a robust distant-talking speaker recognition method. This research is inspired by the great success of deep learning in computer vision <cit.>, data mining <cit.>, speech processing <cit.>, and other areas <cit.>. In order to improve the robustness of i-vector framework on cross-channel conditions and explore the possible direction for applying deep learning to speaker recognition, instead of applying PLDA we intend to use the Stacked Auto-encoders to get the abstract extraction of the i-vector and classifiers like SVM and neural network to do the final classification, which is new to this field. § THE FRAMEWORK FOR ROBUST SPEAKER RECOGNITION BASED ON STACKED AUTO-ENCODERS The basic framework of the Stacked Auto-encoders based speaker recognition model can be illustrated in Fig. 1. After pre-processing and feature extraction, the speaker and channel independent speech are employed for UBM training. The UBM is then used to extract the i-vector of the enrollment and test speech. Unlike the traditional i-vector framework, which uses linear discriminant analysis (LDA) to reduce dimension and increase the discrimination between speaker subspaces, we use stacked auto-encoders to reconstruct the i-vector with lower dimension and different classifiers can be chosen to achieve final classification. §.§ I-vector Extraction for Speaker Recognition The i-vector framework is now considered as the state-of-the-art for speaker recognition, firstly proposed in 2009 <cit.>. This algorithm combines the strengths of GMM supervector SVMs <cit.> and Joint Factor Analysis (JFA) <cit.> which introduces the total variability space (T) containing the differences not only between the speakers but also between the channels. The i-vector framework can be illustrated as below. After feature extraction, the Universal Background Model (UBM) is obtained using EM. Then the zero and first order Baum Welch Statistics can be computed as: N_c^k=∑_tγ_c,t^k F_c^k=∑_tγ_c,t^ko_t^k Given the statistics above, the T matrix is trained using the maximum likelihood estimate (MLE). Step E: Randomly initialize the total variability matrix T before training. Then calculate the variance and mean of the speaker factor l_T(S)=I+T^TΣ^-1NN(s)T y(S)=l_T^-1(s)T^TΣ^-1FF(s) Step M: Maximum likelihood revaluation. The statistics of all the enrollment data was added: N_c=∑_sN_c(s) A_c=∑_sN_c(s)l_T^-1(s) C=∑_sFF(s)y(s)^T After obtaining matrix T, i-vector can be extracted. The processes of extraction are: First, calculating the Baum-Welch statistic of the corresponding speaker then the estimation of i-vector using matrix T can be calculated as: E[w_s,k]=(I+T^TΣ^-1N_h(s)T )^-1T^TΣ^-1F_h(s) Where Σ is the covariance matrix of the UBM. In general, the dimension of i-vector range from 400 to 600. 
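As an illustration of the extraction pipeline just described, the following NumPy sketch accumulates the zero- and first-order Baum-Welch statistics of one utterance against a diagonal-covariance UBM and evaluates the posterior mean of the total-variability factor. It assumes the first-order statistics are centred on the UBM means (a common convention not written explicitly above), takes the T matrix as already trained by the EM steps, and uses variable names of our own choosing.

```python
import numpy as np
from scipy.stats import multivariate_normal

def baum_welch_stats(frames, ubm_means, ubm_covs, ubm_weights):
    """Zero/first-order statistics of one utterance against a diagonal UBM.

    frames: (T, D) MFCC features; the UBM has C components with means
    (C, D), diagonal covariances (C, D), and weights (C,).
    """
    C = len(ubm_weights)
    logp = np.stack([multivariate_normal.logpdf(frames, ubm_means[c],
                                                np.diag(ubm_covs[c]))
                     for c in range(C)], axis=1) + np.log(ubm_weights)
    gamma = np.exp(logp - logp.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)            # responsibilities (T, C)

    N = gamma.sum(axis=0)                                # zero-order stats (C,)
    F = gamma.T @ frames - N[:, None] * ubm_means        # centred first-order (C, D)
    return N, F

def extract_ivector(N, F, T_mat, ubm_covs):
    """Posterior mean of the total-variability factor w for one utterance."""
    C, D = F.shape
    R = T_mat.shape[1]                                   # i-vector dimension, e.g. 400
    Sigma_inv = 1.0 / ubm_covs.reshape(C * D)
    TtSi = T_mat.T * Sigma_inv                           # T^T Sigma^{-1}, (R, C*D)
    NN = np.repeat(N, D)                                 # N expanded per feature dim
    L = np.eye(R) + (TtSi * NN) @ T_mat                  # I + T^T Sigma^{-1} N T
    return np.linalg.solve(L, TtSi @ F.reshape(C * D))   # (R,) i-vector
```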
After obtaining the initial i-vector, linear discriminate analysis (LDA) with Fisher criterion is normally used to reduce the dimensionality (typical dimensions are 200 dimensions) as well as increase the discrimination between speaker subspaces. Then, the within class covariance normalization (WCCN) is performed so that the speaker subspace can be orthogonal. Finally, to score the verification trails, the log-likelihood ratio (LLR) was computed between same (H1) versus different speakers hypotheses (H0): llr= lnp(x_1,x_2|H_1)/p(x_1|H_0) p(x_2|H_0) §.§ Stack Auto-encoders for Robust Speaker Recognition As early as 1986, Rumelhart proposed the auto-encoder <cit.> for dimensionality reduction of high-dimensional data. After the improvement by Hinton, the concept of deep auto-encoder is proposed which is an unsupervised learning for nonlinear feature extraction mainly used for data representation and reconstruction. The most outstanding feature of auto-encoders is its link to latent variable space, which make it special among generative model. The main objective of the auto-encodes is to learn the internal representation of the input data with a more compact form, which means to extract the essences of the data while losing the minimal useful information. The model aims to give priority to the significant parts of the original data to learn about data. Architecturally, the basic framework of an auto-encoders is a feed-forward network similar to MLP but with different purpose, illustrated in Fig. 2. Instead of getting a prediction output of the input data, auto-encoders aims in reconstructing the input data to get a low dimensional representation. The auto-encoder always includes two parts – encoder ϕ and the decoder ψ which can be defined as transitions between feature space with different dimension: ϕ:X→F ψ:F→X ϕ , ψ=min_ϕ , ψ||X-(ψ∘ϕ)X||^2 During the encoder phase, the input x is mapped to code z∈ R^P=F: z=σ(Wx+b) As for decoder phase, z is mapped backwards to input space to reconstruct x: x^'=σ^'(W^'z+b^') The object function of auto-encoders is to minimize the reconstruction errors in order to restore input x: L(x,x^')=||x-x^'||^2=||x-σ^'(W^'(σ(Wx+b))+b^')||^2 A stacked auto-encoder network contains multiple hidden layers and is symmetric about the intermediate layer, containing one input layer and corresponding output layer with 2r-1 hidden layers. Suppose m nodes of input layerdenoted as x=(x_1,x_2,...,x_m)^T∈ R^m, the hidden layer vector is h_k=(h_k,1,h_k,2,...,h_k,n_k)^T∈ R^n_k, and the output vector is x'=(x'_1,x'_2,...,x'_m)^T∈ R^m then each hidden layers of the auto-encoders can be denoted as: h_1=σ_h_1(W^1x+b^1) h_k=σ_h_k(W^kh_k-1+b^k),2 ≤ k ≤ 2r-1 x'=σ'_h_k(W^2rh_2r-1+b^2r) The hidden layer code can be regarded as the compression of the input space if it has lower dimensionality. By connecting multiple similar encoders, the output of then_th layer is regarded as the input of the n + 1 layer. After multi-layer training, the auto-encoder can extract the essence features from the original data, and then construct another neural network of or add a classifier such as SVM or LR, the classification can be efficiently implemented. We want to have the auto-encoder with the capacity of learning the useful properties of the input data and it can be achieved by constraining the hidden layer to have a smaller dimension than the input layer. An auto-encoder with lower dimension of the hidden layer is called undercomplete. 
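A minimal undercomplete stacked auto-encoder of this kind can be sketched in PyTorch as follows; the hidden sizes (200 and then 40 units) match the two-layer configuration used in the experiments later in the paper, the sigmoid activations follow the σ above, and the optimizer choice is illustrative rather than prescribed.

```python
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """Undercomplete stacked auto-encoder for 400-dim i-vectors (sketch)."""
    def __init__(self, dims=(400, 200, 40)):
        super().__init__()
        enc, dec = [], []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            enc += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        rev = dims[::-1]
        for i, (d_in, d_out) in enumerate(zip(rev[:-1], rev[1:])):
            dec.append(nn.Linear(d_in, d_out))
            if i < len(rev) - 2:               # keep the reconstruction layer linear
                dec.append(nn.Sigmoid())
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)                    # compact code fed to the classifier
        return self.decoder(z), z

# training: minimise the reconstruction error ||x - x'||^2 on i-vectors
model = StackedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x):
    x_rec, _ = model(x)
    loss = nn.functional.mse_loss(x_rec, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, the encoder output z provides the low-dimensional representation that is passed to the back-end classifier (SVM or a small neural network).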
The undercomplete architecture makes the auto-encoders to capture the most significant features of the input data. If the decoder φ is linear and the loss function is the mean squared error, the auto-encoder is just like PCA which is to learn the principal subspace of the input data. Auto-encoders with nonlinear encoder and decoder has much stronger generalization capacity than PCA. However, the strong learning capacity of the encoder and decoder can easily lead to over-fitting without extracting useful knowledge from input. § EXPERIMENTS AND RESULTS §.§ Database Description Two databases are applied in this thesis: the TIMIT corpus and the 2006 NIST Speaker Recognition Evaluation Training Set. We employed 354 speakers from TIMIT, with 3540 speech utterances in total, from which 2000 were used for training the UBM, the remaining 1540 for i-vector extraction. And for the NIST 2006 database, we employed 400 speakers in total and 300 for modeling the UBM and 100 for constructing the speaker recognition system. Each speaker has eight two-channel (4-wire) conversations collected over telephone channels which are mostly in English and four other languages. In this work, our experiments can be divided into two main parts based on TIMIT and NIST 2006 databases, in terms of the gender detection and speaker recognition. §.§ Speaker Gender Detection As to prove the feasibility of this method, we first apply it to achieve a binary classification task – speaker gender detection. Speaker gender detection is not necessarily considered as an end itself, however it can be used as a pre-processing step in Automatic Speech Recognition, allowing the selection of gender dependent acoustic models <cit.> <cit.>. In recent research, this system has been used for selecting gender specific emotion recognition engines <cit.> and defining different strategies for different gender <cit.>. §.§.§ Evaluation of Performance In this experiment, ACC, AUC and MCC are employed for evaluate the performance of the model which are the common evaluation method for binary classification. The ROC curve is used as a measurement for classifier model based on TPR and FPR. The AUC (Area Under Curve) is an evaluation index often used in the binary classification model, defined as the area under the ROC curve. The TPR and FPR can be denoted as below: FPR=FP/FP+TN TPR=TP/TP+FN Then the classification accuracy can be defined as: ACC=TP+TN/FP+TN+TP+FN Where TP – True Positives, FP – False Positives, FN – False Negatives, TN – True Negatives. The ROC curve can keep itself unchanged when the distribution of positive and negative samples changes. When classification is completely random, the AUC is close to 0.5, and the closer the value of AUC is to 1, the better the model prediction effect is. The Matthews correlation coefficient is another measurement usually used to evaluate the binary classifier model, calculated as: MCC=TP× TN -FP × FN/√((TP+FP)(TP+FN)(TN+FP)(TN+FN)) The range is [-1,1], MCC=-1 denoting the worst prediction, 0 denoting the random prediction and 1 denoting the best prediction. §.§.§ Results and Analysis In this experiment, the method presented above is applied to the TIMIT database. 108 males and 92 females are selected for UBM training, each speaker includes 10 speech signals of 10s, and 12-dimensional MFCC are extracted by 256 frames. After normalization, the UBM with 64 Gauss components is trained. To extract the i-vector, 77 males and 77 females (different from the data used for training the UBM) are employed. 
Then the zero and first order Baum Welch Statistics are computed and a total variability space is learned, which was applied to extract the i-vector with 400 dimension. To reconstruct the i-vector, we apply one layer (200 nodes) and two layers (200 and 40 nodes respectively) of auto-encoder network respectively. Then we apply the Support Vector Machine (SVM) and two layers neural networks as the back-end classifier. The results are as follows. In this case, applying two layers of auto-encoders and neural network as the back-end classifier achieved a better classification performance with 98.4% accuracy. As for SVM classifier, it may lead to overfitting problem since the accuracy on the training set is much better than testing set. §.§ Speaker Verification We applied the undercomplete auto-encoders to NIST 2006 challenge. The basic flowchart of the method is similar with the one shown in 3.4.2. We employed 400 speakers in total and 300 for modeling the UBM and 100 for constructing the speaker recognition system. VAD and feature scaling are used for pre-processing. One Hot Encoder is used for converting the categorical label into numerical label. For this multiple classification task, we employ the recognition rate and confusion matrix to evaluate our model. §.§.§ Results and comparison In this experiment, a 64 diagonal component UBM is trained, along with a 400 dimensional i-vector extractor. Firstly, we use undercomplete auto-encoders to reconstruct the feature with two hidden layers, which have 200 and 50 nodes respectively. After the final classification, we achieved 91.25% recognition rate and the confusion matrix of the results is shown below. As the figure shows, the simply used auto-encoders may not achieve a good performance in the final classification task while the reconstruction loss is small enough, which indicate that the undercomplete auto-encoders may fail to learn the useful representation of the input data when given to strong learning ability. § CONCLUSION In this section, we reviewed the general concept of auto-encoders and its basic form – undercomplete auto-encoders. We intend to take the advantages of the strong capability of represent the feature and replace the LDA in the traditional i-vector frame to realize dimensionality reduction. We employed the proposed model to deal with two classification task – gender detection and speaker recognition and we also explore the performance of different back-end classifiers after using stacked auto-encoders for feature extraction. The performance of undercomplete auto-encoders is not quite good and it seems to lose some useful knowledge of the input data when given to strong learning ability. In next section, we will explore more variant of auto-encoders and compare the performance on the same dataset with the original model. This research was supported by National Natural Science Foundation of China (No.61901165, 61501199), Science and Technology Research Project of Hubei Education Department (No. Q20191406), Hubei Natural Science Foundation (No. 2017CFB683), and self-determined research funds of CCNU from the colleges’ basic research and operation of MOE (No. CCNU20ZT010). spmpsci
http://arxiv.org/abs/2307.02114v1
20230705084229
Kinetic theory for spin-polarized relativistic plasmas
[ "Daniel Seipt", "Alec G. R. Thomas" ]
physics.plasm-ph
[ "physics.plasm-ph", "hep-ph" ]
[email protected] Helmholtz Institut Jena, Fröbelstieg 3, 07743 Jena, Germany GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstrasse 1, 64291 Darmstadt, Germany The Gérard Mourou Center for Ultrafast Optical Science, University of Michigan, Ann Arbor, Michigan 48109, USA The investigation of spin and polarization effects in ultra-high intensity laser-plasma and laser-beam interactions has become an emergent topic in high-field science recently. In this paper we derive a relativistic kinetic description of spin-polarized plasmas, where QED effects are taken into account via Boltzmann-type collision operators under the local constant field approximation. The emergence of anomalous precession is derived from one-loop self-energy contributions in a strong background field. We are interested, in particular, in the interplay between radiation reaction effects and the spin polarization of the radiating particles. For this we derive equations for spin-polarized quantum radiation reaction from moments of the spin-polarized kinetic equations. By comparing with the classical theory, we identify and discuss the spin-dependent radiation reaction terms, and radiative contributions to spin dynamics. Kinetic theory for spin-polarized relativistic plasmas Alec G. R. Thomas ====================================================== § INTRODUCTION There has recently been substantial recent interest in the effects of lepton and gamma-ray spin/polarization on strong-field quantum-electrodynamics (QED) emission processes that occur in ultraintense laser interactions with electron beams or plasma <cit.>. This is motivated by the continuing development of multi-petawatt class laser systems around the world, which should enable researchers to access QED critical strength fields in combination with relativistic plasma dynamics <cit.>. Strong field QED is important for particles with momentum p^μ interacting in electromagnetic fields for which the invariant quantity χ_p = ||F^μνp_ν||/(m c E_S)≡[γ√((E⃗+β⃗×B⃗)^2-(β⃗·E⃗)^2)]/E_S is of order 1 or larger, where E_S = m^2c^3/|e|ħ is the Sauter-Schwinger field, often also called the QED critical field. The electromagnetic fields in the laboratory frame are typically much weaker than E_S. Under such conditions, the dominant quantum processes are the first order processes of photon emission by a lepton (nonlinear Compton scattering, in a laser field) or the decay of a photon into an electron-positron pair (the nonlinear Breit-Wheeler process). These processes depend on the spin/polarization state of both the leptons and photons <cit.>. Studies of strong field QED have often made use of a semiclassical approach where the particles follow classical trajectories punctuated by point-like quantum emission/decay events that are calculated probabilistically under the assumptions that the `formation length' is short compared to the characteristic space/time scales of the electromagnetic fields (the local constant field approximation, LCFA); only first order processes are dominant; the fields are “weak” such that the field configuration in the rest frame of the particle is a crossed electric and magnetic field <cit.>. Motivated by this, recent studies have used an extended version of this model where the emission events are modified to include polarization dependent quantum rates combined with a classical spin-pusher using the Thomas Bargmann-Michel-Telegdi (T-BMT) equation <cit.>. 
However, this is done on an ad-hoc basis, and it is not immediately clear that, for example, the inclusion of the anomalous (g-2) magnetic moment in the T-BMT equation is consistent with the other assumptions. Other authors have been interested in the feedback of electron spin effects on classical expressions of radiation reaction <cit.>. In this paper, we develop a kinetic description for leptons in strong fields starting from transport equations resulting from a Wigner operator formalism in the mean field approximation <cit.> and include coupling of the fermion field to the high energy photons through a Boltzmann-like collision operator representing the point-like quantum emission/decay events <cit.>. By taking moments of a delta-distribution in phase-space we derive effective quasi-classical equations of motion for leptons, including the effects of spin and radiation reaction. We start in section <ref> by reviewing the “classical” equations of motion with radiation reaction and note some properties, including the relative strength of spin corrections. In section <ref> we derive the set of Boltzmann-like transport equations for the particles, comprising the classical transport and collision type operators for the quantum emissions. Section <ref> introduces the fluid equations for the transport by taking moments of the distribution functions <cit.>. In section <ref>, by assuming a delta-distribution, we derive the single particle equations of motion and discuss the consequences of the expressions. Further discussion of the results is presented in section <ref>. We conclude in section <ref>. We employ rationalized Heaviside-Lorentz units with ħ=c=ε_0=1 and Minkowski metric η^μν = diag(1,-1,-1,-1) throughout. We define dimensionless electron momenta according to p^μ/m→ p^μ and coordinates according to mx^μ→ x^μ, and mτ→τ for proper time. We also absorb the magnitude of the electron charge |e| = √(4πα), where α is the fine structure constant, and the electron mass into the electromagnetic field strength according to |e|F^μν/m^2→ F^μν. This means that in all the following equations, a factor q=±1 simply gives the sign of charge. § CLASSICAL RADIATION REACTION AND SPIN §.§ Classical Radiation Reaction for Orbital Motion It is a well known fact that the radiation emitted by accelerated charges acts back onto the electron motion <cit.>. For the classical orbital equations of motion these so-called radiation reaction effects can be taken into account by adding a radiation reaction force f_rad^μ to the Lorentz force, u^μτ = q F^μν u_ν + f_rad^μ , where τ is the particle's proper time, u^μ is the four-velocity/momentum and F^μν is the electromagnetic field tensor. Many forms of the radiation reaction force f_rad^μ have been proposed in the literature, but no definite consensus on the “correct” equation of motion has been reached so far. Let us write the radiation reaction force generically as follows: f_rad^μ = τ_R Δ^μν R_ν, with the dimensionless radiation reaction time constant τ_R = 2α/3, and a projector Δ^μν = η^μν - u^μ u^ν onto the three-dimensional space-like hyperplane perpendicular to the four-velocity u^μ. This is necessary to ensure that the radiation reaction force and hence the total acceleration of the particle is orthogonal to its velocity, i.e. to ensure the validity of the subsidiary condition u^2=1. The specific form of the vector R^μ varies for the different models of classical radiation reaction (RR). 
In particular, for the Lorentz-Abraham-Dirac (LAD) form of the RR force we have <cit.> R^μ_LAD = ü^μ , such that f_rad^μ = τ_R (ü^μ + u^μu̇^2) after an integration by parts. However, the LAD form of radiation reaction is known to have some mathematical issues in the form of runaway solutions and preacceleration. These phenomena are related to the fact that LAD contains the second derivative of u, a third initial condition—the initial acceleration—has to be provided. However, it is not independent and only a proper choice leads to physical solutions of the LAD (see also detailed discussion in Refs. <cit.>). Many alternative forms of the equations of motion for classical radiation reaction have been derived in order to cure the deficiencies of the LAD equation. Most notably, the Landau-Lifshitz (LL) form of radiation reaction has been derived from LAD by reducing the order of the differential equation by iteration <cit.>. (See also the recent Refs. <cit.>.) Iteration means treating the radiation reaction term in the LAD equation as a perturbation and approximating the jerk ü using the Lorentz force, ü = q (F.u)τ = q (u.∂ F).u +q F.u̇. Iterating the LAD equation twice with this expression yields the LL equation, which has attracted some considerable interest in the recent years due to its usefulness in numerical simulations of high-intensity laser-plasma interactions. In fact, many of the RR models in the literature are connected to LAD by successive iterations and the quasi-constant approximations, even though their original derivation often was not following this path. Iterating LAD only once immediately leads to the Eliezer/Ford-O'Connell (EFO) equation <cit.>, iterating a second time directly gives the Landau-Lifshitz (LL) form of RR as mentioned above. In addition to iterating the RR equations one can also take a quasi-constant approximation by neglecting the u.∂ F terms. The quasi-constant approximation of EFO is known as Mo-Papas (MP) equation <cit.>, while the quasi-constant limit of LL is the Herrera (H) equation <cit.>. The specific forms of the radiation reaction vector R^μ and the RR force Δ^μν R_ν are summarized in Table <ref>. For a discussion of the different RR models in general see <cit.>, and as a limit of QED see also Ref. <cit.>. §.§ Covariant form of spin precession with radiation reaction The well established relativistic equation of motion for the covariant spin four-vector of a particle in a slowly varying external field is the Bargmann-Michel-Telegdi (BMT) equation <cit.>, which reads S^μτ = q g/2[ F^μβ S_β + u^μ (S_α F^αβ u_β ) ] - u^μ (S_αu̇^α ) , where S^μ is the spin four-vector and g is the Landé-factor. For an electron, g≈ 2, and the deviation from 2 is the anomalous magnetic moment a_e ≡ (g-2)/2. The one-loop perturbative QED result derived by Schwinger <cit.> is a_e^1-loop=α/2π, and the experimentally measured value is a_e = 1.15965218076(28) × 10^-3 <cit.>. The last term in Eq. (<ref>) ensures that the spin four-vector S^μ remains orthogonal to the four-velocity u^μ. This is necessary since the spin-four vector is space-like, and in the electron rest frame has no time-component. The form of Eq. (<ref>) ensures that S.u=0 as long as u^2=1, which then ensures that the length of the spin four-vector is conserved under the BMT equation (S.S)τ =0. This now has the consequence that the T-BMT equation also acquires a contribution from radiation reaction. 
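Before adding radiation reaction, the conservation properties quoted above (S·u = 0 and constant S·S) can be verified by integrating the Lorentz force together with the BMT equation. The following Python sketch is illustrative only: it assumes a constant magnetic field along z, a simple RK4 stepper, and arbitrary initial data; it is not the integrator used for Figure <ref>.

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
q, g, B0 = -1.0, 2.00232, 1e-3                 # charge sign, Lande factor, field (assumed values)

F = np.zeros((4, 4))                           # F^{mu nu} for a constant B field along z
F[1, 2], F[2, 1] = -B0, B0

def rhs(y):
    u, S = y[:4], y[4:]
    udot = q * F @ (eta @ u)                               # Lorentz force
    SFu = (eta @ S) @ F @ (eta @ u)
    Sdot = 0.5 * q * g * (F @ (eta @ S) + u * SFu) - u * ((eta @ S) @ udot)
    return np.concatenate([udot, Sdot])

def rk4(y, h):
    k1 = rhs(y); k2 = rhs(y + 0.5*h*k1); k3 = rhs(y + 0.5*h*k2); k4 = rhs(y + h*k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

gamma = 50.0
u = np.array([gamma, np.sqrt(gamma**2 - 1), 0.0, 0.0])
s_rest = np.array([0.0, 1.0, 0.0])                          # rest-frame spin along x
S = np.concatenate([[u[1:] @ s_rest], s_rest + u[1:] * (u[1:] @ s_rest) / (1 + gamma)])
y = np.concatenate([u, S])

for _ in range(20000):
    y = rk4(y, 0.05)
u, S = y[:4], y[4:]
print(S @ eta @ u)   # stays ~0   (orthogonality S.u)
print(S @ eta @ S)   # stays ~-1  (spin length conserved)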
Taking the form for the radiation reaction force as described above and plugging this into (<ref>) we get S^μτ = qg/2 F^μνS_ν - q a_e u^μ (u.F.S) - τ_R u^μ (S.R) , where the last term appears due to the radiation reaction force. Clearly, different forms of R^μ for the various RR models will yield different contributions to the T-BMT equation, as shown in the last column of Table <ref>. Employing for instance the Herrera RR-force (LL in a quasi-constant field), we immediately get as the most simple consistent classical RR model with spin-precession: u^μτ = q F^μν u_ν + τ_R [ F^μν F_νλ u^λ - u^μ ( u_α F^αβ F_βλ u^λ ) ] S^μτ = qg/2 F^μν S_ν - q a_e u^μ (u_α F^αβ S_β) - τ_R u^μ (S_α F^αβ F_βλ u^λ) . The significance of the RR term in the second line of the T-BMT equation is to ensure orthogonality of the spin-vector and the velocity vector S.u=0 and the constancy of S_μ S^μ for all times. It should be mentioned that these contributions are not small. If neglected, the solutions of the equations of motion vastly differ (see Figure <ref>). §.§ Noncovariant form of the T-BMT equation: Rest-frame spin vector Here, we show that the RR term in the covariant T-BMT equation also gives a non-zero contribution to the precession of the spin-vector in the rest frame of the particle, and estimate the order of magnitude of the RR correction. The equation of motion for spin vector s⃗ in the electron rest-frame of the electron can be derived from the covariant BMT-equation and has the following form <cit.>: s⃗τ = q s⃗×Ω⃗ , where S^μ = ( u⃗·s⃗ , s⃗ + u⃗ ( u⃗·s⃗) /1+γ) , s⃗ = S⃗ - u⃗ ( u⃗·S⃗)/γ(1+γ) The angular velocity vector Ω⃗ around which s⃗ precesses is derived straightforwardly from the covariant BMT equation (<ref>), which first yields s⃗τ = qg/2[ y⃗ - u⃗y^0/(1+γ)] - s⃗× (u⃗×u̇⃗̇ ) /1+γ , with y^μ = (y^0,y⃗) = F^μν S_ν - u^μ (u.F.S) and the last term being the Thomas precession <cit.>. Without radiation reaction taken into account, u̇⃗̇ is governed by the Lorentz force equation and the angular velocity vector is given by Ω⃗= (1+a_eγ) B⃗ - ( a_e γ + γ/1+γ) (v⃗×E⃗ ) -a_eγ^2/1+γv⃗ (v⃗·B⃗) , where v⃗ = u⃗/γ. Here it should be emphasized that all quantities defining Ω⃗ are taken in the lab frame, but s⃗ is the rest-frame spin vector. With radiation reaction taken into account, we find that Ω⃗ acquires a contribution in addition to Eq. (<ref>), due to the radiation reaction terms in the orbital equation Eqn. (<ref>): Ω⃗→Ω⃗+ δΩ⃗_RR , δΩ⃗_RR = τ_R/qγ/1+γ (v⃗×R⃗ ) , where R⃗ is the spatial part of R^μ. Clearly, the specific form of the radiation reaction correction to spin precession depends on the specific form of R⃗ in the various radiation reaction models. Taking here the most simple Herrera form, i.e. Landau-Lifshitz with negligible field gradients, R⃗_H = L⃗×B⃗ + E⃗(E⃗·u⃗), with the Lorentz force vector L⃗ = γE⃗ + u⃗×B⃗ we find δΩ⃗_RR = τ_R γ^2/q(1+γ)[ (v⃗·B⃗) (E⃗ - v⃗×B⃗) - (v⃗·E⃗) (B⃗ - v⃗×E⃗) ] . Because of the cross product v⃗×R⃗_H only the sub-leading terms in the ultra-relativistic expansion of R⃗_H contribute to δΩ⃗_RR; the leading term drops out. We can now estimate under what conditions the correction δΩ⃗_RR becomes relevant. For ultrarelativistic particles, γ≫ 1, the magnitude of δΩ⃗_RR can be estimated as ||δΩ⃗_RR|| ∼τ_R γ F^2, where F∼ ||E⃗||, ||B⃗|| stands for the magnitude of the electromagnetic field in the laboratory frame. We have to compare it with the typical magnitude of the usual precession vector Ω⃗, Eqn. (<ref>). 
The latter has two contributions which scale as ||Ω⃗||∼ F and ||Ω⃗|| ∼ a_eγ F for the normal and anomalous precession, respectively. From this we can derive two criteria, a classical one and a quantum one, under which the contribution δΩ⃗_RR becomes relevant. First, the classical criterion compares the RR term with the normal precession. In order for the RR correction to become comparable we have to fulfill the criterion τ_R γ F ∼ 1. With τ_R∼α this can be rewritten as γ F/F_class,cr∼ 1, where F_class,cr=F_S/α is the classical critical field, with the Schwinger field F_S [Recall that we had normalized the field strength to the Schwinger field strength m^2/|e|. Thus, formally F_S=1, which we write out explicitly here for clarity.]. Thus, according to this criterion the RR corrections to the BMT equation become relevant if the background field in the rest frame of the particle is on the order of the classical critical field, which is much larger than the critical field of QED, hence quantum effects can be expected to become important much earlier. This criterion can also be recast in the form αχ∼ 1, where χ is the quantum nonlinearity parameter (but αχ is a classical parameter as the ħ in α and χ cancel). The second criterion is a quantum one and related to the anomalous magnetic moment. It reads τ_R F ∼ a_e. Taking for the anomalous moment the 1-loop Schwinger value, a_e=α/2π, we obtain that the corrections become relevant if F/F_S∼ 1. Thus, according to this criterion radiation reaction effects for spin precession become relevant if the field in the laboratory frame is on the order of the Schwinger field. Both of these corrections seem out of reach for present day (and near future) high-intensity laser-plasma or laser-beam interactions. Those are typically characterized by F ≪ F_S and χ≳ 1. However, αχ∼ 1, as required by the first criterion, is still hard to reach. Moreover, for both criteria the applicablity of the classical approach ceases to be valid for much weaker fields than required for RR to affect spin precession. In particular in the latter case one already approaches the regime where one might see a breakdown of the Furry expansion of strong-field QED (Ritus-Narozhny conjecture)<cit.>. We thus can conclude that the kind of RR effects discussed so far are negligible from a phenomenological standpoint. Nonetheless, we should emphasize that RR effects are crucial in the covariant form of the T-BMT equation to ensure the orthogonality u.S=0 throughout the complete time-evolution. This concludes our classical investigations of the interplay between radiation reaction effects and spin precession. In the following we will derive a kinetic description for polarized particles relevant for contemporary high-intensity laser-plasma interactions, taking into account quantum effects. From those we will then derive effective semiclassical single-particle equations of motion for radiating particles with spin. § SPIN DEPENDENT KINETIC EQUATIONS Here, we derive Boltzmann-type kinetic equations for describing high-intensity laser interactions with polarized particles. In principle, quantum transport theory, e.g. the Wigner operator formalism <cit.> or the nonequilibrium 2PI approach <cit.>, can provide equations for the evolution of statistical ensembles of particles to all orders in ħ, and in the presence of strong background fields. However, the resulting equations are often extremely involved and thus require a number of approximations. 
For high-intensity laser-plasma interactions it was found <cit.> that typically the electromagnetic spectrum consists of two well-separated regions: (1) a coherent low-frequency peak describing the background/external and plasma fields, including coherent radiation, and (2) an incoherent high-frequency peak. This suggests that for the interaction of the fermions with a strong, weakly varying electromagnetic field one can perform a mean field (Hartree-type) approximation, in which fluctuations of the electromagnetic fields are neglected to lowest order and then introduced as a perturbation. The quantum effects at O(ħ) in the interaction of the fermions with the strong mean-field background can typically be neglected if the field strength fulfils F/γ F_S ≪ 1, i.e. the fields are weaker than γ times the Schwinger field, and if the scale length of the background field, ℓ = 1/||∇_x ln F|| ≫ ƛ_C, is long compared to the Compton wavelength <cit.>. Both of these criteria are usually extremely well fulfilled for high-intensity laser-plasma interactions. Some phenomena occurring when the field strength is on the order of the Schwinger field have been discussed recently in Refs. <cit.>. The dominant quantum effects in laser-plasma interactions are hard photon emission by an electron and the decay of a photon into an electron-positron pair. These effects couple the fermionic degrees of freedom to the strong background field and to (fluctuating) high-frequency field modes (photons), and hence lie beyond the mean field approximation <cit.>. Our strategy is therefore as follows: We take the leading order of quantum transport theory following from the Wigner operator formalism to derive equations for the classical Vlasov-type transport of the particle number density and a spin density. Quantum effects will be included by constructing quantum collision operators with the spin-resolved photon emission (nonlinear Compton) rates in the locally constant field approximation in their integral kernels. A rigorous derivation of the collisional quantum transport equations from first principles, but without special emphasis placed on the particle polarization, can be found for instance in a recent article by <cit.>.
§.§ Classical Advection
Here, we derive the equations describing the classical transport of the on-shell particle density f and the spin density a^μ in a strong background field from projections of the Wigner operator in the mean field approximation. Hence, we assume that the BBGKY hierarchy truncates at the one-body level, i.e. we ignore particle-particle correlations (lepton-lepton/lepton-ion collisions etc.). This is valid for relatively dilute, high-energy particle distributions. According to <cit.>, the transport equations for the off-shell distributions ℱ(x^ν,p^ν) and 𝒜^μ(x^ν,p^ν), which are the scalar and axial-vector Fierz components of the Wigner function, read p·𝒟ℱ = 0 , p·𝒟𝒜^μ = - F^μν𝒜_ν , to lowest order in ħ, complemented by the subsidiary condition p_μ𝒜^μ = 0, where p^ν is an off-shell momentum, p^2 ≠ 1. At the same order, the operator 𝒟_μ is given by 𝒟_μ = ∂/∂ x^μ + F_μν∂/∂ p_ν , where F_μν is the electromagnetic mean field. From quantum constraints at lowest order in ħ it follows that the distribution functions must be on-shell <cit.>, which we implement by writing ℱ(x^ν , p^ν ) = 2δ(p^2 - 1) θ(p^0) f(t,x⃗, p⃗) , 𝒜^μ(x^ν , p^ν ) = 2δ(p^2 - 1) θ(p^0) a^μ(t,x⃗, p⃗) , which singles out the positive-energy (electron) mass shell.
The transport equations for the on-shell distributions follow straightforwardly by integrating Eqns. (<ref>) and (<ref>) over p^0. They read 𝒯 f(t,x⃗,p⃗) = 0 , 𝒯 a^λ(t,x⃗,p⃗) = q F^λν a_ν . From here on, the momentum p will always be on-shell with energy ϵ_p⃗=√(1+p⃗^2) and p^2=1. We also explicitly introduced the sign of the particle charge q, i.e. q=-1 for electrons. The corresponding transport equations for positrons can be obtained by setting q=+1. The on-shell Vlasov-type transport operators on the left-hand side of Eqns. (<ref>) and (<ref>) are 𝒯 = p.∂_x - q p.F.∂_p = ϵ_p⃗∂_t + p⃗·∇_x⃗ + q L⃗·∇_p⃗ , with the partial coordinate derivative (∂_x)_μ = (∂_t,∇_x), and the on-shell momentum partial derivative ∂_p = (0,∇_p⃗), and with L⃗ = ϵ_p⃗E⃗ + p⃗×B⃗. Because of the subsidiariy condition p_μ𝒜^μ=0, the time-component of the axial vector is not independent but rather a function of the spatial components. Therefore, similar to the treatment of the spin vector in Section <ref>, we can introduce an axial-vector (spin-)density a⃗ in the rest-frame of the particle according to a^μ = (p⃗·a⃗ , a⃗ + p⃗ (p⃗·a⃗)/1+ϵ_p⃗) , which fulfills the subsidiary condition p.a=0 automatically. Working here with a⃗ instead of the covariant a^μ will simplify the discussion spin-dependent radiation effects due to the RR term in the covariant classical BMT equation (<ref>). The rest-frame spin-density a⃗ obeys the transport equation 𝒯a⃗ = q a⃗×[ B⃗ - p⃗×E⃗/1+ϵ_p] . We can recognize the term in the square brackets on the right-hand side of the equation as Ω⃗, Eqn. (<ref>), but without the anomalous magnetic moment, a_e=0. Thus, so far the equations only describe the normal spin precession. So far the transport equations are equivalent to purely classical descriptions, and at this level do not mix f and a⃗. A mixture comes about through quantum effects. The quantum effects in the interaction of the electrons with the strong background fields have been neglected by going to the leading order in ħ in the quantum transport. The corresponding 𝒪(ħ^1) have been calculated, e.g. in Ref. <cit.>, but, as we argued above that they are negligible for the typical conditions we are interested in (E/E_S≪ϵ_p). What is not negligible, however, are the quantum effects due to a coupling of the charged particle dynamics in background fields to high-frequency photon modes. These interactions are the root cause of quantum radiation reaction and must be taken into account for a consistent description of high-intensity laser-plasma interactions. This will be done by adding to the right hand side of Eqns. (<ref>) and (<ref>) appropriate collision operators. This approach is similar to what has been done previously in the literature <cit.>, where the kinetic equations were formulated for all particle polarization effects neglected. The details will be given below in Section <ref>. As has been discussed recently in the literature<cit.>, for a consistent treatment of strong-field QED effects one needs to include also the effect of electron self-energy loop corrections at the same order in α. As will be discussed in more detail in Section <ref>, in the quantum kinetic approach the only effect of the loop is to provide the anomalous spin-precession. The full transport equations for a polarized plasma will therefore be of the form 𝒯 f = ( ∂ f/∂τ)_rad + ( ∂ f /∂τ)_loop , 𝒯a⃗ - q a⃗×Ω⃗_0 = ( ∂a⃗/∂τ)_rad + ( ∂a⃗/∂τ)_loop , where Ω⃗_0 denotes Eqn. (<ref>) with a_e=0. 
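The parametrization of a^μ in terms of the rest-frame density a⃗ builds the subsidiary condition p·a = 0 in identically, which a short numerical check makes explicit (an illustrative sketch with arbitrary numbers, not part of the derivation):

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

p3 = np.array([0.3, -1.2, 4.0])                  # on-shell spatial momentum (m = 1 units)
eps = np.sqrt(1.0 + p3 @ p3)
p = np.array([eps, *p3])

a_rest = np.array([0.2, -0.5, 0.7])              # rest-frame spin density of this phase-space element
a = np.array([p3 @ a_rest, *(a_rest + p3 * (p3 @ a_rest) / (1.0 + eps))])

print(p @ eta @ a)   # ~0: subsidiary condition p.a = 0 holds by construction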
§.§ Radiation Emission Contributions: Quantum Collision Operators
In this section we discuss how to include the effect of radiation emission in Eqns. (<ref>) and (<ref>). While the transport operators 𝒯 describe the interaction of the electrons with the strong background fields in the mean field approximation, the radiation emission involves a coupling to high-energy photons, i.e. fluctuating field modes. To describe the photon emission process we will employ the Furry expansion of strong-field QED at O(α) <cit.>. For ultrarelativistic particles, ϵ_p≫1, interacting with a strong field, the formation length of the photons is short compared to the scale lengths of the strong background field, and the photon emission rate can be calculated in the locally constant crossed field approximation (LCFA). The requirements are that the normalized scalar (E^2-B^2) and pseudoscalar (E⃗·B⃗) field invariants are small compared to both unity and the quantum nonlinearity parameter χ∼ϵ_p E. The formation length of high-energy photons becomes short if the parameter ξ = E ℓ≫ 1, with the field scale length ℓ. For instance, in a plane wave we might identify ℓ = ƛ_L with the reduced laser wavelength ƛ_L=1/ω_L, such that ξ becomes the usual classical laser-intensity parameter. In the quantum regime, χ≳ 1, it was found that one additionally has to require ξ^3/χ≫ 1 <cit.>. (See also Refs. <cit.> for further discussions of the limitations of the LCFA for soft photon emission.) Under these conditions, photon emission is a local process (a short-range interaction) and can be described by local collision operators, 𝒞_f = ( ∂ f /∂τ)_rad and 𝒞_a⃗ = ( ∂a⃗/∂τ)_rad, where each of the collision operators consists of the usual gain and loss terms, e.g. 𝒞_f = ḟ_gain - ḟ_loss. For ultrarelativistic particles, the photon is emitted into a narrow cone around the instantaneous electron velocity. We can therefore employ a collinear emission approximation, where the emitted photon momentum is assumed to be parallel to the electron momentum at the moment of emission. This assumption breaks energy conservation during emission, but the energy deficit is of order 1/ϵ_p^2, i.e. negligibly small for ϵ_p≫1 <cit.>. The photon emission rates for polarized electrons can be represented using the Stokes vectors n⃗_i,f of the incident/final electrons, which describe the state and degree of polarization of a particle. For instance, n⃗_i^2=1 means the incident particles are perfectly polarized, and n⃗_f^2=0 means the final particles are completely unpolarized. In our kinetic approach, the role of the Stokes vectors is taken by the local polarization degree s⃗(t,x⃗,p⃗) ≡a⃗(t,x⃗,p⃗) / f(t,x⃗,p⃗) of a plasma element in phase space. The Stokes vectors, together with the LCFA building blocks for the differential rates w_A, where the label A runs over the various polarization contributions, completely describe the photon emission for polarized particles. The LCFA building blocks and the explicit formulae for the definition of the collinear differential photon emission rates w_A(p⃗, k⃗) and the total photon emission rates W_A(p⃗) are given in Appendix <ref>.
§.§.§ The gain term
The `gain' of the distribution functions at momentum p⃗ due to radiation emission must come from photon emission processes with final electron momentum p⃗, integrated over all possible initial states that lead to said final state.
If a photon with momentum k⃗ is emitted, the initial electron momentum was p⃗+k⃗ by means of the collinear emission approximation, integrated over all photon momenta k⃗. The integral kernels thus must be a linear combination of the distribution functions f, a⃗ at p⃗+k⃗ and the differential photon emission rates w_A(p⃗+k⃗,k⃗). The probability for photon emission with the initial and final electron polarization taken into account can be compactly expressed with help of the electron Stokes vectors and Müller matrices<cit.> as R = 1/2 N_i M N_f^T, where the Müller matrix collates the LCFA building blocks and N_i,f=(1,n⃗_i,f), see Appendix <ref>. Calculating just the expression N_i M, with the identification of n⃗_i∼s⃗, tell us the correct way how the LCFA building blocks have to be combined with the densities f and a⃗: f N_iM = f(1,s⃗) M ∼ f ( w_0 + s⃗·w⃗_i w⃗_f + s⃗·w_if) ∼ḟ_gainȧ⃗̇_gain Focusing for now on the gain term for f, ḟ_gain(p⃗) = ∫^3k⃗/ω_k𝒩 f(p⃗+k⃗) [w_0(p⃗+k⃗,k⃗) + s⃗(p⃗+k⃗) ·w⃗_i(p⃗+k⃗,k⃗) ] , where ω_k is the photon frequency and the normalization factor 𝒩 = ϵ_p⃗/ϵ_p⃗+k⃗ has to be included because the rates are expressed as probability per unit proper time for momentum p⃗+k⃗, while the final change on the left-hand-side of the equation (<ref>) is with regard to momentum p⃗. The integral for ȧ⃗̇_gain is constructed analogously. §.§.§ The loss term The loss terms describe the `loss' of particles from the momentum `mode' p⃗ due to radiation emission. Since any radiation emission alters the electrons' momentum state, the final electron polarization does not matter for the loss term and should be summed over. In order to find the form of the loss terms it is useful to assume the case of electrons circulating a constant B-field. In this case the direction is parallel to B⃗ in the lab frame, i.e. stationary. The particles will be polarized along this axis, and the polarization vector is non-precessing. Let us first introduce the fractions of particles f^σ with spin up (σ=+1) or down (σ=-1) as f^σ = 1/2(f + σ ·a⃗) = 1/2(f + σ a) . In this representation, the loss terms for the f^σ(p⃗ ) due to photon emission, hence the electron losing momentum, can be straightforwardly written as ḟ_loss^σ(p⃗) = f^σ(p⃗) ∑_σ'=±1 W^σσ'(p⃗) , i.e. with the total emission rates summed over all possible spin-flip and non-flip transitions. Here, W^σσ' = 1/2 ( W_0 + σ W_i + σ' W_f + σσ' W_1 ), thus ∑_σ' W^σσ' = ( W_0 + σ W_i ). With Eqn. (<ref>) it is now straightforward to read off the corresponding expressions for f and a as ḟ_loss = f W_0 + a W_i , ȧ_loss = a W_0 + f W_i . This can now be generalized to arbitrary directions of a⃗ as ḟ_loss = f W_0 + a⃗·W⃗_i , ȧ⃗̇_loss = a⃗ W_0 + f W⃗_i . Combining everything from the preceeding subsection we obtain as results for the collision operators 𝒞_f(p⃗) = ∫^3k⃗/ω_kϵ_p⃗/ϵ_p⃗+k⃗[ f(p⃗+k⃗) w_0 (p⃗+k⃗, k⃗) + a⃗(p⃗+k⃗) ·w⃗_i(p⃗+k⃗,k⃗) ] - f(p⃗) W_0(p⃗) - a⃗(p⃗) ·W⃗_i(p⃗ ) , 𝒞_a⃗(p⃗) = ∫^3k⃗/ω_kϵ_p⃗/ϵ_p⃗+k⃗[ f(p⃗+k⃗) w⃗_f (p⃗+k⃗, k⃗) + a⃗(p⃗+k⃗) ·w_if(p⃗+k⃗,k⃗) ] - f(p⃗) W⃗_i(p⃗) - a⃗(p⃗) W_0(p⃗ ) . §.§ Electron self-energy loop contributions and anomalous precession As has been discussed recently in the literature, the inclusion of loop contributions at the same order in α as the emission processes is necessary to maintain unitarity <cit.>. For high-energy photon emission at order α the corresponding loop contribution originates from the interference of the electron self-energy loop with the 𝒪(α^0) part of electron propagation. As has been argued, in e.g. 
<cit.>, a convenient way of implementing this can be achieved by taking the combined Müller matrices for emission, M, and the loop M^L, M +M^L especially in the resummation of quantum radiation reaction effects <cit.>. We have to adapt this to our kinetic description. The basic strategy for constructing the loop contribution to the kinetic equations will be as follows: Calculate for the loop contribution, e.g. (∂ f/∂τ)_loop, the same combinations of the LCFA building blocks as for the emission operator, just with the corresponding expressions replaced by the loop expressions R_A → R_A^L (see Eqns. (<ref>)–(<ref>)). The loop insertion does not change the particle momentum, and the expressions R_A^L given in Appendix <ref> are already integrated over all photon momenta running around the loop. Thus, in the “gain”-parts of (∂ f/∂τ)_loop and (∂a⃗/∂τ)_loop we actually do not have any expressions which have to be integrated over photon momenta. For the scalar density f this means that the loop contribution must be zero, and indeed, ( ∂ f /∂τ)_loop = f R_0^L + a⃗·R⃗_i^L - f R_0^L - a⃗·R⃗_i^L = 0 . For the axial-vector density we obtain the nonzero result ( ∂a⃗/∂τ)_loop = f R⃗_f^L + a⃗·R_if^L - f R⃗_i^L - a⃗ R_0^L = α∫_0^1 λλGi(z)/√(z)ã⃗̃· ( -) , where the , and are unit vectors relating to the instantaneous rest frame field components and form an approximately orthogonal basis; see Appendix <ref>. This is nothing but the anomalous spin-precession in disguise. To see this clearly we first note that the integral over the Scorer function Gi yields the one-loop field-dependent value of the anomalous magnetic moment a_e(χ) = α/χ∫_0^1 λλGi(z)/√(z) , which was derived by <cit.>. For weak fields, χ→0, it approaches Schwinger's 1-loop value a_e(χ→0)=α/2π, and a_e(χ) decreases monotonically to zero as χ increases. We thus have for the loop contribution to the collision operator for the axial vector the following result: ( ∂a⃗/∂τ)_loop = χ a_e(χ) a⃗· ( -) = - χ a_e(χ) a⃗× = - a⃗×Ω⃗_anomalous(a_e(χ)) , ignoring a term in (·), which should be negligible under the LCFA approximation and with the anomalous part of the spin-precession vector, Eqn. (<ref>), Ω⃗_anomalous = Ω⃗- Ω⃗_0 = a_e χ. In summary, we have shown here that in the kinetic approach the loop contribution gives exactly the anomalous term of spin-precession, with the field-dependent anomalous moment a_e(χ). Combining this with the normal precession, which was obtained from the classical transport, it becomes clear that the axial-vector density a⃗(t, x⃗, p⃗) at each point in phase space precesses around a local value of Ω⃗ as given in Eq. (<ref>), but with the field-dependent anomalous moment a_e→ a_e(χ). § MOMENT HIERARCHY AND RELATIVISTIC FLUID EQUATIONS An infinite chain of equations describing the transport of bulk plasma properties, including particle number densities, energy density, pressure, heat, etc., can be obtained by calculating momentum moments of the kinetic equations <cit.> and truncated by some choice of closure to yield fluid equations for the electron and positron species in the plasma. Here, we derive two-fluid equations for the electron and positron components of a relativistic plasma <cit.> with the particle spin-polarization taken into account up to second order. Our aim will be, in particular, to work out the effects of the spin-dependence in the collision operators and how they affect radiation reaction. The moment equations for a nonrelativistic electron gas with spin effects were discussed in Ref. <cit.>. 
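As a cross-check of the field-dependent anomalous moment introduced above, Eq. (<ref>) can be evaluated by direct numerical integration. The sketch below is illustrative only; it assumes mpmath for the Scorer function Gi, and reproduces the quoted limits:

import mpmath as mp

alpha = mp.mpf(1) / mp.mpf("137.035999")

def a_e(chi):
    # a_e(chi) = (alpha/chi) * Integral_0^1 dlam lam Gi(z)/sqrt(z),  z = (lam/(chi(1-lam)))^(2/3)
    chi = mp.mpf(chi)
    def integrand(lam):
        z = (lam / (chi * (1 - lam))) ** (mp.mpf(2) / 3)
        return lam * mp.scorergi(z) / mp.sqrt(z)
    return alpha / chi * mp.quad(integrand, [0, 1])

print(alpha / (2 * mp.pi))   # Schwinger's one-loop value, ~1.16e-3
print(a_e(0.01))             # approaches the Schwinger value as chi -> 0
print(a_e(1), a_e(10))       # decreases monotonically with increasing chi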
A relativistic transport equation for particles with spin was given in <cit.> using an extended phase space, see also Refs. <cit.>, and <cit.> for the equivalence of the extended phase space with our method. In order to calculate the relativistic moment equations we have to integrate the two kinetic equations for f and a⃗ for a generic moment of the kernel Ψ = Ψ(p^α,p^β, ...) and with the Lorentz invariant measure ∫^3 p⃗ /ϵ_p. Introducing the short-hand notation for the momentum space integrals ⟨ Y ⟩ = ∫^3 p⃗/ϵ_p f Y with f as the weight we find ∂/∂ x^μ⟨Ψ p^μ⟩ = - q F_μ^ ν⟨ p^μ∂Ψ/∂ p^ν⟩ + ∫^3 p⃗/ϵ_pΨ𝒞_f , ∂/∂ x^μ⟨Ψ p^μs⃗⟩ = - q F_μ^ ν⟨s⃗ p^μ∂Ψ/∂ p^ν⟩ + q ⟨Ψs⃗×Ω⃗⟩ + ∫^3 p⃗/ϵ_pΨ𝒞_a⃗ , where in terms involving the electromagnetic fields we have integrated by parts and neglected the surface contributions. We recall the definition of polarization degree s⃗≡a⃗ / f. §.§.§ Moments of the scalar distribution f Zero order moment For the zero order moment, i.e. with the kernel function Ψ=1, of the scalar transport equation for f, we find number density current conservation ∂/∂ x^μ J^μ = ∫^3 p⃗/ϵ_pΨ𝒞_f = 0 , where the current density is J^μ = ∫^3 p⃗/ϵ_p p^μ f = ⟨ p^μ⟩ . Clearly, the derivative in the field advection term in Eqn. (<ref>) vanishes since Ψ is constant. The integral over the collision operator 𝒞_f vanishes because the gain and loss terms must exactly cancel when integrated over all electron momenta p⃗ to conserve the electron number during photon emission. To see this explicitly one may conveniently substitute the integration variable in the gain term p⃗ = q⃗ - k⃗, with ^3 p⃗=^3 q⃗ to arrive at ∫^3 p⃗/ϵ_p𝒞^gain_f(p⃗) = ∫^3q⃗/ϵ_q⃗∫^3k⃗/ω_k[ f(q⃗) w_0 (q⃗, k⃗) + a⃗(q⃗) ·w⃗_i(q⃗,k⃗) ] = ∫^3q⃗/ϵ_q f(q⃗) ∫^3k⃗/ω_kw_0 (q⃗, k⃗) + ∫^3q⃗/ϵ_qa⃗(q⃗)·∫^3k⃗/ω_kw⃗_i(q⃗,k⃗) = ∫^3 q⃗/ϵ_q𝒞^loss_f(q⃗) First order moment Next, we discuss the first order moment of f, with Ψ=p^μ. The left-hand-side of the equation can be rewritten as the divergence of the energy-momentum tensor of the plasma, T^μν = ⟨ p^μ p^ν⟩. Evaluating the momentum derivatives in the field-advection term it takes the form of the Lorentz-force. Combining with the collision terms, we obtain ∂_μ T^μν = q F^νμJ_μ -⟨ p^ν (I_0 +s⃗·I⃗_i) ⟩ , where I_A = ∫λλ R_A/λ are the first moments of the photon spectrum, i.e. the normalized radiated power. We now need to show that the integral over the collision operator 𝒞_f with Ψ=p^μ yields the last term in (<ref>). To see this, we first make the same substitution p⃗ = q⃗ - k⃗ as in (<ref>). Considering energy conservation, we write ϵ_p⃗ = ϵ_q⃗ - ω_k + δ, where δ is the energy error introduced by not including the momentum contribution from the background fields <cit.>. For ultrarelativistic particles and in the collinear emission approximation<cit.> we have δ = 1/2|q⃗| - 1/2|p⃗|≈λ/2ϵ_p (1-λ). The corresponding term of the integrated collision operator is by a factor of 1/ϵ_p^2≪1 smaller than the leading order term and can therefore be neglected. The leading order term reads ∫^3 p⃗/ϵ_p p^μ𝒞_f(p⃗) = - ∫^3 p⃗/ϵ_p∫^3k⃗/ω_k k^μ[ f(p⃗) w_0 (p⃗, k⃗) . . + a⃗(p⃗) ·w⃗_i(p⃗,k⃗) ] . For the integral over the photon momentum we make use of the definition of the rate w_A, which is given in the Appendix, Eqn. (<ref>). 
By means of the delta function, the photon four-momentum k^μ = (|k⃗|, k⃗) →λ (|p⃗|,p⃗)≃λ p^μ, and thus ∫^3 p⃗/ϵ_p p^μ𝒞_f(p⃗) = - ∫^3 p⃗/ϵ_p p^μ[ f(p⃗) I_0 + a⃗(p⃗) ·I⃗_i ] = - ⟨ p^μ ( I_0 + s⃗(p⃗) ·I⃗_i ) ⟩ . This represents the (momentum averaged) effect of quantum radiation reaction due to hard photon emission. In particular, the term s⃗(p⃗) ·I⃗_i ∼ s_B I_i describes the spin-polarization dependence of the radiative energy loss.
Second-order moment For the second-order moment with weight Ψ = p^ν p^λ we are led to the definition of the stress-flow tensor <cit.> U^μνλ = ⟨ p^μ p^ν p^λ⟩, the trace of which equals the current, U^μν_μ = J^ν, and which obeys the transport equation ∂_μ U^μνλ = q F^νμ T_μ^λ + q F^λμ T_μ^ν + ⟨ p^ν p^λ (K_0 +s⃗·K⃗_i) ⟩ -2 ⟨ p^ν p^λ (I_0 +s⃗·I⃗_i) ⟩ . The quantities K_A = ∫_0^1 λλ^2 R_A/λ are the second moments of the photon emission spectrum, and describe the electron momentum spreading due to the stochasticity of quantized photon emission. The term in the second line of Eqn. (<ref>) is the spin-dependent radiative cooling effect.
§.§.§ Moments of the axial-vector equation a⃗
Zero-order moment Working out the zeroth moment, Ψ=1, of the axial-vector equation we derive an equation for the spin current J⃗_s^μ = ∫^3 p⃗/ϵ_p p^μa⃗(p⃗) =⟨ p^μs⃗⟩ . As for the scalar current, the field advection term vanishes. However, the spin current is not conserved. It obeys the transport equation ∂_μJ⃗_s^μ = q ⟨s⃗×Ω⃗⟩ + ⟨W⃗_f- W⃗_i + s⃗· W_if - s⃗ W_0 ⟩ . The first term on the right-hand side follows straightforwardly from the spin-precession term in the transport equation Eqn. (<ref>). It causes a depolarization of the plasma if s⃗(t,x⃗,p⃗) precesses at different frequencies at different points in space (i.e. different field strengths) or for different momenta <cit.>. The second term stems from the integral over the collision operator 𝒞_a⃗, where the calculation proceeds similarly to the scalar case 𝒞_f above. This second term is responsible for the radiative polarization of the electrons, including the Sokolov-Ternov <cit.> or Baier-Katkov-Strakhovenko <cit.> radiative polarization effects. With help of the expressions given in Appendix <ref>, the combination of rates in the radiative polarization term can be reformulated as W⃗_f - W⃗_i + s⃗·W_if - s⃗ W_0 = ( W_f - W_i ) ê_B + (s_B ê_B + s_E ê_E ) W_3 + s_K ê_K W_4 = α∫_0^1 λ λ^2/1-λ[ s⃗Ai'(z)/z - ê_B Ai(z)/√(z) - s_K ê_K ( Ai_1(z) + Ai'(z)/z) ] , where we have expanded the spin polarization s⃗ in the three principal directions along the electric (magnetic) field ê_E (ê_B) in the particle rest frame, with ê_K = ê_E ×ê_B, according to s⃗ = s_B ê_B + s_E ê_E + s_K ê_K.
Higher moments With the weight functions Ψ = p^ν and Ψ = p^ν p^λ we define the spin energy-momentum tensor T⃗_s^μν = ⟨ p^μ p^νs⃗⟩ and the spin stress-flow tensor U⃗^μνλ_s⃗ = ⟨ p^μ p^ν p^λs⃗⟩, respectively. They obey the transport equations ∂_μT⃗_s^μν = q F^νμ (J⃗_⃗s⃗)_μ + q ⟨ p^νs⃗×Ω⃗⟩ + ⟨ p^ν ( W⃗_f- W⃗_i + s⃗·W_if - s⃗ W_0 ) ⟩ -⟨ p^ν ( I⃗_f +s⃗·I_if) ⟩ , ∂_μU⃗_s^μνλ = q F^νμ (T⃗_⃗s⃗)_μ^λ + q F^λμ (T⃗_⃗s⃗)_μ^ν + q ⟨ p^ν p^λs⃗×Ω⃗⟩ + ⟨ p^ν p^λ ( W⃗_f- W⃗_i + s⃗·W_if - s⃗ W_0 ) ⟩ -2 ⟨ p^ν p^λ ( I⃗_f +s⃗·I_if) ⟩ + ⟨ p^ν p^λ ( K⃗_f +s⃗·K_if) ⟩ .
§ SINGLE PARTICLE EQUATIONS OF MOTION
From the moment hierarchy we can now derive effective semiclassical single-particle equations of motion for a radiating lepton including the effects of its spin polarization.
Note that the spin polarization is a classical quantity representing the expectation value of the four components of the particle spin vector, and so the single-particle equations should be interpreted as the trajectory of an “average” lepton and not of an individual lepton. To find those, we first note that the current density for a point particle may be written as <cit.> J^μ(x) = ∫τ u^μ(τ) δ^(4) (x - x(τ)) . By analogy with Eq. (<ref>), using the definition in Eq. (<ref>) we deduce that the appropriate form of the spin current density for a point particle should be J_s^μ(x) = ∫τ u^μ(τ)s⃗(τ) δ^(4) (x - x(τ)) . Hence, the scalar and axial-vector distribution functions for a single point particle are f(t,x⃗,p⃗) = ∫τϵ_p⃗δ^(3) ( p⃗ - u⃗(τ)) δ^(4) (x-x(τ)) , a⃗(t,x⃗,p⃗) = ∫τs⃗(τ) ϵ_p⃗δ^(3) ( p⃗ - u⃗(τ)) δ^(4) (x-x(τ)) . From these definitions of the distribution functions, the energy-momentum tensor for a point particle is T^μν = ∫τ u^μ(τ) u^ν(τ) δ^4(x-x(τ)) . From this, we can derive the momentum conservation equation, starting with ∂_μ T^μν = ∫τ u^ν(τ) u^μ(τ) ∂_μδ^4(x-x(τ)) = - ∫τ u^ν(τ) d/dτδ^4(x-x(τ)) = ∫τ u^ν(τ)/τδ^4(x-x(τ)) . Thus, from Eq. (<ref>) we find u^μ/τ = q F^μν(x(τ)) u_ν(τ) - u^μ(τ) (I_0(χ) + s⃗(τ) ·I⃗_i(χ)) , where χ = χ(x(τ),u(τ)) is the local value of the quantum parameter at the particle location. This quasi-classical equation for the particle acceleration contains the Lorentz force and, in addition, a spin-dependent radiation reaction term as the second term on the right. In the limit of unpolarized particles, s⃗=0, our result agrees with the literature, e.g. Refs. <cit.>. The quasi-classical equation for the polarization vector evolution is obtained from Eqn. (<ref>) as s⃗/τ = q s⃗(τ) ×Ω⃗(τ) + W⃗_f - W⃗_i + s⃗· (W_if -W_0 1⃗) , where again the four-divergence is transformed into a proper-time derivative. Eqn. (<ref>) describes the spin precession, here with the anomalous magnetic moment being the field-dependent value a_e(χ), Eq. (<ref>). The additional terms describe the radiative polarization of the electrons, and hence are the generalization of the Sokolov-Ternov effect. In the limit of weak fields, i.e. to lowest order in χ, this equation is known as the Baier-Katkov-Strakhovenko equation <cit.>. Both these effects have been recently employed for simulating spin effects in laser-electron interactions, see e.g. Refs. <cit.>. While a derivation of the precession part had been given recently in <cit.>, no formal derivation of the full equation is known to us.
§ DISCUSSION
§.§ Comparison of the kinetic equation results with classical RR and the spin-dependent Gaunt factor
By comparing the kinetic equation results for the radiative energy loss with the classical RR equations we can subsume all quantum effects into a spin-dependent Gaunt factor 𝔤_s⃗: ⟨ p^μ (I_0 + s⃗·I⃗_i) ⟩ = ⟨ p^μ𝔤_s⃗(χ) I_class⟩ , where I_class = 2/3α m χ^2 is the lowest-order term in the asymptotic expansion of I_0 as χ→0. Thus, 𝔤_s⃗(χ) = -3/2χ^2∫_0^1 λ λ( Ai_1(z) + 2g Ai'(z)/z + λAi(z)/√(z) (s⃗·ê_B) ) . By using the tables in Appendix <ref>, we can write down the series expansion of the spin-dependent Gaunt factor for small χ≪1 as 𝔤_s⃗(χ) ≃ 1 - ( 55√(3)/16 + 3/2 s_B )χ + (48 + 105 √(3)/8 s_B ) χ^2 - ( 8855√(3)/32 + 300 s_B ) χ^3 . As can be seen from this equation, only the degree of polarization along the rest-frame magnetic field, s_B = s⃗·ê_B, affects the Gaunt factor. Positive values of s_B reduce the Gaunt factor and hence the radiative losses, while negative s_B increase 𝔤_s⃗.
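For orientation, the small-χ series can be evaluated directly; the following sketch is illustrative only, uses the coefficients quoted above, and is meaningful only in the asymptotic regime χ ≪ 1 where the series applies (larger χ requires the full integral):

import numpy as np

def gaunt_small_chi(chi, s_B):
    # asymptotic small-chi series of the spin-dependent Gaunt factor (coefficients from the text)
    c1 = 55 * np.sqrt(3) / 16 + 1.5 * s_B
    c2 = 48 + 105 * np.sqrt(3) / 8 * s_B
    c3 = 8855 * np.sqrt(3) / 32 + 300 * s_B
    return 1 - c1 * chi + c2 * chi**2 - c3 * chi**3

for chi in (0.005, 0.01, 0.02):
    print(chi, gaunt_small_chi(chi, +1), gaunt_small_chi(chi, 0), gaunt_small_chi(chi, -1))
# spin up along the rest-frame B field (s_B = +1) radiates less than spin down (s_B = -1)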
Thus, in an arbitrary electromagnetic field, electrons which have some degree of polarization in the direction of the rest frame magnetic field experience less radiation reaction than unpolarized electrons. The polarization orthogonal to does not affect the strength of radiative losses. The function 𝔤_s is plotted in the upper panel of Fig. <ref> By inspecting the effective single-particle equation, Eq. (<ref>), it is straightforward to find that (<ref>) violates the relativistic constraint u̇.u=0<cit.>. This behavior is a consequence of the ultra-relativistic and collinear emission approximation made in the collision operators, under which we kept only the leading order terms in inverse power of γ. It can be seen that in the limit 𝔤_s⃗→ 1 Eq. (<ref>) agrees with the leading order ultrarelativistic approximation of the classical RR force in the Herrera form. This immediately indicates how to restore u̇.u=0 for Eqn. (<ref>): u̇^μ = q F^μν u_ν + τ_R 𝔤_s⃗Δ^μ_ν F^νλ F_λκ u^κ . It would be interesting to investigate whether one automatically gets the `correct' classical limit if the angular distribution of photon emission is taken into account correctly in the collision operators instead of relying on the collinear emission approximation <cit.>. By analogy to 𝔤_s we can also define a `Gaunt factor' 𝔥_s for the spreading effect as the ratio of K_0+s⃗·K⃗_i and its leading asymptotic term for χ→0, K_0∼α m χ^3 55/24√(3) as 𝔥_s⃗(χ) = -24√(3)/55χ^3∫_0^1 λ λ^2 ( _1(z) + 2g'(z)/z. . + λ(z)/√(z) (s⃗·) ) ≃ 1 - ( 448 √(3)/55 + 63/22 s_B ) χ + (777/4 + 480 √(3)/11 s_B) χ ^2 . The asymptotic expansion for small χ shows that the spreading is reduced if the electron have positive s_B, and increased if s_B is negative. The function 𝔥_s and its asymptotic approximation for small χ is plotted in the lower panel of Fig. <ref>. §.§ Evolution of average energy and average energy spread, and energy-polarization-correlation To define an average of a quantity such as the average energy of a plasma particle we need to divide the corresponding phase space integral by a suitable number density. This choice is not unique and several definitions exist in the literature. To facilitate comparison of our spin-dependent results with spin-independent ones from the literature<cit.>, we will choose here to divide by the zero-component of the current four-vector n=J^0, corresponding to the laboratory frame density. This choice means that the equations lose their Lorentz covariance, but are more convenient to work with than other choices. We define a laboratory frame average of a quantity X as follows: X̅≡⟨ϵ_p X ⟩/n = ⟨ϵ_p X ⟩/⟨ϵ_p ⟩ . Thus, the average fluid velocity is v̅^μ = J^μ /n = (1, v̅⃗̅), where v̅⃗̅ = ∫^3p⃗ v⃗ f / ∫^3p⃗ f with v⃗ = p⃗ / ϵ_p. We should note, however, that v̅^μ is not a valid definition of a fluid four-velocity as it does not square to unity; v̅_μv̅^μ = 1 - v̅⃗̅^2 < 1. But we can define a Lorentz factor, γ̅ = (1 - v̅⃗̅^2)^-1/2 such that u^μ≡γ̅v̅^μ agrees with the commonly used Eckart frame fluid four-velocity <cit.> and n = γ̅√(J.J), see also Appendix <ref>. We can now define an average four-momentum as p̅^μ = T^0μ/n, average energy ϵ̅_p = T^00/n (equivalent to an “effective temperature” for the species) and average spin polarization as s̅⃗̅ = J⃗^0_s⃗/n = ⟨ϵ_p s⃗⟩ /⟨ϵ_p⟩. Moreover, the average squared energy is ϵ_p^2 = U^000/n. With this, we first note that the current conservation yields the auxiliary equation d_t n + n ∇·v̅⃗̅ = 0 , having introduced the convective derivative d_t =∂_t +v̅⃗̅·∇. From Eqn. 
(<ref>) we can derive equations for the average energy and momentum. We'll focus here on the average energy, i.e. ν=0 component: n d_t ϵ̅_p =-∇·Q⃗ + q J⃗·E⃗ -⟨ϵ_p ( I_0 + s⃗·I⃗_i ) ⟩ with δϵ = ϵ_p - ϵ̅_p and Q⃗ = n v⃗δϵ =⟨p⃗δϵ⟩ is the spatial part of the intrinsic heat flux four-vector Q^μ, which is related to the total heat flux four-vector Q_T^μ = ⟨ϵ_p p^μ⟩ through Q^μ = Q_T^μ - ϵ̅_p J^μ. The first term on the right-hand-side represents the energy diffusion (i.e. divergence of the heat flux), the second term represents Ohmic energy gain, and the third term represents the radiative energy loss, including the spin-dependence. Next we want to derive an equation for the evolution of variance of the energy, i.e. the second central moment σ^2_ϵ = ϵ_p^2 - ϵ̅_p^2. From Eqn. (<ref>) we first derive the equation for ϵ_p^2 as nd_t ϵ_p^2 = - ∇·⟨p⃗ (ϵ_p^2 - ϵ_p^2) ⟩ + q Q⃗·E⃗ + ⟨ϵ_p^2( K_0 + s⃗·K⃗_i ) ⟩ - 2 ⟨ϵ_p^2( I_0 + s⃗·I⃗_i ) ⟩ . With d_tσ^2_ϵ = d_t ϵ_p^2 - 2 ϵ̅_p d_t ϵ̅_p, and employing Eqn. (<ref>) we find for the energy variance nd_t σ^2_ϵ = - ∇·⟨p⃗ ( δϵ^2-σ_ϵ^2)⟩ +2 ( q E⃗ - ∇ϵ̅_p ) ·Q⃗ + ⟨ϵ_p^2 (K_0 + s⃗·K⃗_i)⟩ - 2 ⟨ϵ_pδϵ ( I_0 + s⃗·I⃗_i) ⟩ . The first term on the right hand side represents some measure of the transport of energy spread. With the interpretation of the average energy ϵ̅_p as an effective temperature of the electrons, the term ∇ϵ̅_p can be interpreted as thermoelectric effective force. Analogous to the electric field acting on the current leading to energy gain/loss in Eqn. (<ref>), the electric field and thermoelectric term act on the heat flux to increase the energy spread. The last two terms govern how the energy spreading of the plasma particles changes due to radiation effects including the particle polarization. According to Ref. <cit.>, where the corresponding terms were discussed for unpolarized electrons, the second-to-last term is the radiative heating of the beam due to the stochasticy of photon emission. The last term is a radiative beam cooling due to a phase space contraction because radiation losses are larger for higher-energy particles. For the evolution of average spin and the spin-energy-moment we find n d_t s̅⃗̅ = - ∇·Σ + q ⟨s⃗×Ω⃗⟩ + ⟨W⃗_f -W⃗_i + s⃗·W_if - s⃗ W_0 ⟩ , nd_t ϵ_p s⃗ = - ∇·⟨p⃗ (ϵ_ps⃗ - ϵ_p s⃗) ⟩ + q J⃗_s·E⃗ + q ⟨ϵ_p s⃗×Ω⃗⟩ + ⟨ϵ_p(W⃗_f -W⃗_i + s⃗·W_if - s⃗ W_0 ) ⟩ - ⟨ϵ_p ( I⃗_f + s⃗·I_if ) ⟩ , where we have defined the spin diffusion tensor Σ = nv⃗δs⃗ = ⟨p⃗δs⃗⟩, with δs⃗ = s⃗ - s̅⃗̅. This yields for the covariance between average spin-polarization and average energy, cov(ϵ_p, s⃗) = ϵ_p s⃗ - ϵ̅_p s̅⃗̅, the following evolution equation: n d_t cov(ϵ_p, s⃗) = - ∇·⟨p⃗ (δϵδs⃗ - cov(ϵ_p,s⃗)) ⟩ + ( q E⃗ - ∇ϵ̅_p ) ·Σ - Q⃗·∇s̅⃗̅ + q ⟨s⃗×Ω⃗δϵ⟩ + ⟨ (W⃗_f -W⃗_i + s⃗·W_if - s⃗ W_0 ) δϵ⟩ - ⟨ϵ_p( I⃗_f + s⃗·I_if ) ⟩ + s̅⃗̅ ⟨ϵ_p (I_0 + s⃗·I⃗_i )⟩ . In addition to a thermoelectric force driving the spin diffusion Σ, this equation also contains a force due to spin gradients ∇s̅⃗̅ coupling to the heat flux. The interpretation of the remaining terms is as follows: The terms in the third line stem from spin-precession and radiative polarization, respectively, and affect the correlation between spin and particle energy in a significant way if the particle energy spread δϵ is large. The two terms in the last line are coming from spin-dependent radiation reaction. 
If we denote the contributions from this last line by nδ_I cov(ϵ_p, s⃗) and expand the spin-vector along the principal directions, s⃗ = s_B +s_E + s_k, we obtain δ_I cov(ϵ_p, s_B) = - I_f - δ s_B I_0 + s̅_B s_B I_i - s_B I_3 , δ_I cov(ϵ_p, s_E) = - δ s_E I_0 + s̅_E s_B I_i - s_E I_3 , δ_I cov(ϵ_p, s_K) = - δ s_K I_0 + s̅_K s_B I_i - s_K I_4 , which demonstrates the coupling of the different spin-polarization components; s_B affects the correlations of s_E and s_K, but not the other way around. In particular, for an initially unpolarized distribution only Eqn. (<ref>) is nonzero. §.§ Sokolov-Ternov effect To discuss the Sokolov-Ternov effect from our kinetic equations, let us assume we have a globally constant direction for all electrons. That could be for instance a uniform and homogeneous magnetic field B⃗ = B_0 ẑ, with electrons circulating in a steady state with constant energy on cyclotron orbits, and with p_z=0. Sokolov and Ternov <cit.> introduced the fractions n^σ of electrons with spin-up (σ=+1) and spin-down (σ=-1) and formulated rate equations for those quantities involving the spin-flip rates. Here, we can then define the number density of electrons with spin up/down, n^σ, analogous to Eqn. (<ref>) as n^σ = 1/2∫^3 p⃗ (f + σ a_z) = J^0 + σ·J⃗_s⃗^0/2 . Since the integrated collision integral for f vanishes, i.e. the current J^μ is conserved, as was shown above, the evolution equation for the n^σ are essentially given by the momentum integral over 𝒞_a as in Eqn. (<ref>), ( ∂ n^σ/∂ t)_rad = σ/2∫^3p⃗/ϵ_p f(p⃗) [ W_f - W_i + s_z W_3 ] . This is not exactly identical to the textbook treatment of the Sokolov-Ternov effect, since on the right-hand-side of the equation it is not straightforward to identify the densities n^σ. Instead, the distributions f and polarization degree s_z are weighted with the photon emission rates. In a homogeneous magnetic field we have χ= |p⃗| B_0 and thus different parts of the distribution function have different photon emission rates. That means, different parts of the distribution spin-polarize at different rate. If we are looking for a stationary state of the polarization degree, then we should require that the integrand of on the right-hand-side of Eqn. (<ref>) must vanish for each value of p⃗ separately. This then leads to the Sokolov-Ternov equilibrium for each fraction of particles with parameter χ. Using the asymptotic series of the rates for small χ in Appendix <ref> we find s_z^eq(χ) ∼W_i-W_f/W_3≃ - 8/5√(3) + 13/50χ - 92 √(3)/125χ^2 . The leading term is the textbook Sokolov-Ternov result <cit.>. The χ-dependent corrections predict that the equilibrium polarization degree decreases with increasing χ. This phenomenon has been observed numerically for the radiative polarization of electrons in a rotating electric field Refs. <cit.>. Of course, the exact dynamics of spin polarization is more involved due to the temporal change of χ due to radiation reaction. § SUMMARY In this paper we have derived the kinetic equations for a spin-polarized relativistic plasma with the leading quantum effects due to the interaction of the leptons with a strong field taken into account via Boltzmann-type collision operators, 𝒯 f = 𝒞_f , 𝒯a⃗ = q a⃗×Ω⃗+ 𝒞_a⃗ , for the scalar density f and the spin (axial-vector) density a⃗. The transport operator 𝒯 is given in Eqn. (<ref>), the collision operators by Eqns. (<ref>)–(<ref>), and the precession vector Ω⃗ by Eqn. (<ref>), with the field-dependent anomalous moment a_e=a_e(χ). 
The local quantum collision operators were derived using the locally constant field approximation of the photon emission rates and electron self-energy expressions. The loop contribution provides exactly the anomalous precession. Starting from a classical description of radiation reaction we discussed the relevance of radiation reaction for the covariant BMT equation and found a RR correction term to spin-precession. While this term is crucial for the evolution of the covariant spin-vector S^μ, under typical conditions of high-intensity laser-plasma interactions this correction term is not relevant for the dynamics of the spin-vector s⃗ in the electron rest frame. From the kinetic equations we have derived effective single-particle equations of motion for a spinning particle, taking into account the spin-dependent radiation reaction and radiative polarization. We derived the moment hierarchy and obtained equations for the mean energy, energy spreading, and the energy-spin-correlation. In this paper we focused on the interplay between the spin polarization and radiation reaction effects. Hence, the discussion was restricted to a polarized electron plasma without pair production effects taken into account. A generalization of our results to achieve a consistent description of a QED plasma of electrons, positrons and photons should be straightforward. AGRT acknowledges support from the US NSF award #2108075 and NSF/GACF award #2206059. § POLARIZATION RESOLVED PHOTON EMISSION RATES Here, we summarize all the polarization resolved photon emission rates appearing in the collision operators. Following Ref. <cit.> we define the differential rate for the collinear emission of a photon with momentum k⃗ by an electron with momentum p⃗ as w_A(p⃗,k⃗) = ∫_0^1 λ ω_k δ^(3)(k⃗-λp⃗) R_A/λ , where λ is the fractional momentum transfer from the electron to the photon, and the label A denotes the various polarization dependent contributions. The corresponding total rates are given by W_A(p⃗ ) = ∫^3 k⃗/ω_k w_A(p⃗,k⃗) = ∫_0^1 λ R_A/λ = R_A , where ^3 k⃗/ω_k is the Lorentz invariant phase space element of the emitted photon, with ω_k=|k⃗|. The R_A/λ are the angularly-integrated LCFA rate building blocks for the polarization dependent photon emission rate (see e.g. Ref. <cit.>), which are given here as a probability per unit proper time as R_0/λ = - α[ _1(z) + 2g '(z)/z] , R⃗_i/λ = - αλ(z)/√(z) = R_i/λ , R⃗_f/λ = - αλ/1-λ(z)/√(z) = R_f/λ , R_if/λ = R_1/λ ( + ) + R_2/λ = R_0/λ1⃗ + R_3/λ ( + ) + R_4/λ , R_1/λ = - α[ _1(z)+2'(z)/z] , R_2/λ = - α[ (2g-1)_1(z) +2g '(z)/z] , R_3/λ = + αλ^2/1-λAi'(z)/z , R_4/λ = - αλ^2/1-λAi_1(z) . The label A=0 stands for the contribution of unpolarized particles, A=i(f) is for initially(finally) polarized electrons. The different polarization correlation contributions (A=if) are be denoted by A=1,2,3,4. Ai is the Airy function with argument z=(λ/χ (1-λ))^2/3, Ai' its derivative, and Ai_1 the integral Ai_1(z) = ∫_0^∞ x Ai(x+z), g=1+λ^2/2(1-λ), and χ is the quantum parameter χ = √(p.F^2.p) = √( (ϵ_p E⃗(t,x⃗) + p⃗×B⃗(t,x⃗) )^2 - (p⃗·E⃗(t,x⃗))^2) . The principal directions , appearing in the above rates are unit vectors in the direction of the magnetic field and the electric field in the rest frame of the electron, and = ×. 
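For reference, the quantum parameter defined above is straightforward to evaluate numerically. The following sketch uses illustrative values only (roughly a 1 GeV electron counter-propagating a laser field with E/E_S ∼ 10^-4, i.e. a_0 of a few tens); the numbers are assumptions, not taken from the text:

import numpy as np

def chi_param(p3, E, B):
    # chi = sqrt((eps_p*E + p x B)^2 - (p.E)^2); p3 in units of m c, E and B normalized to E_S
    eps = np.sqrt(1.0 + p3 @ p3)
    vec = eps * E + np.cross(p3, B)
    return np.sqrt(vec @ vec - (p3 @ E) ** 2)

p3 = np.array([0.0, 0.0, -2000.0])     # electron moving in -z
E = np.array([1e-4, 0.0, 0.0])         # crossed laser fields propagating in +z
B = np.array([0.0, 1e-4, 0.0])
print(chi_param(p3, E, B))             # ~0.4, consistent with chi ~ 2*gamma*E/E_S for head-on geometry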
In the ultrarelativistic approximation, ϵ_p ≫ 1, and when the angle between p⃗ and the field is not small we can write = -qE⃗ + p̂⃗̂×B⃗ - p̂⃗̂(p̂⃗̂·E⃗)/√((p̂⃗̂×E⃗)^2 + (p̂⃗̂×B⃗)^2 - 2p̂⃗̂· (E⃗×B⃗)) , = -qB⃗ - p̂⃗̂×E⃗ - p̂⃗̂(p̂⃗̂·B⃗)/√((p̂⃗̂×E⃗)^2 + (p̂⃗̂×B⃗)^2 - 2p̂⃗̂· (E⃗×B⃗)) , where p̂⃗̂ = p⃗ / |p⃗|. For positrons, with q=+1, the principal directions point opposite to the fields in the particle rest frame. Under this approximation, in the rest frame of the particle the electromagnetic field appears to be a crossed field, thus ⊥, and , , form an orthonormal basis. Moreover, because of the collinear emission approximation these unit vectors are the same for incident and final particles. The photon emission probability for polarized electrons can be compactly expressed employing the Müller matrix M and the Stokes vectors n⃗_i,f of the incident and final electrons as <cit.> R/λ = 1/2 N_i M N_f^T , where N_i,f = (1,n⃗_i,f). It should be noted that the N_i,f are not Minkowski four-vectors, despite having four components. The Müller matrix itself is constructed by collecting the LCFA building blocks above into a 4×4 dimensional matrix emission probability rate M = ( R_0/λ 𝐑_f/λ 𝐑_i/λ R_if/λ) . Equivalent representations for the polarization resolved nonlinear Compton rates have been recently given in <cit.>, and a subset of these rates for electrons spin-polarized along the B-field direction was also given in <cit.>. The rates can be also represented in the form of a density matrix <cit.>. In addition to the photon emission contribution at O(α) we also need the O(α) contribution from the one-loop electron self-energy. More precisely, these contributions are coming from the interference of the one-loop self-energy at 𝒪(α) with the `free' propagation of the electron. The expression are given by <cit.> R_0^L = - R_0 , R⃗_i^L = R⃗_f^L = α∫λλAi(z)/√(z) , R^L_if = R_0^L 1_3 + α∫λλGi(z)/√(z) ( -) , where Gi is the Scorer function <cit.>. Airy and Scorer functions are the real and imaginary parts of the integral Ai(z) + i Gi(z) = 1/π∫_0^∞ t e^ i(zt+ t^3/3) , The Müller matrix M^L for the loop is constructed analogously to Eqn. (<ref>) as <cit.> M^L = ( R_0^L 𝐑_f^L 𝐑_i^L R_if^L ) . § ASYMPTOTICS OF THE PHOTON EMISSION SPECTRUM MOMENTS The moments of the photon emission spectrum are defined as follows W_A = ∫_0^1 λ R_A/λ , I_A = ∫_0^1 λλ R_A/λ , K_A = ∫_0^1 λλ^2 R_A/λ , where the subscript A runs over the different building blocks of the spin-dependent photon emission rate. For for small χ→0 the asymptotic expansions of the Q ∈{ W_A,I_A,K_A } can be written as Q = α∑_n=0^∞β_n^{Q}χ^n , where the coefficients β_n^{Q} are presented in Table <ref>. § A NOTE ON THE FLUID VELOCITY AND RELATIVISTIC HYDRODYNAMICS Here, we briefly discuss how one could properly define a fluid velocity. According to the literature, e.g. Ref. <cit.>, there are two commonly used options for defining the fluid velocity u^μ: (i) The Eckart frame via the vanishing of particle number/mass diffusion (i.e. via the particle number current), and (ii) the Landau frame via the vanishing of energy diffusion as the normalized eigenvector of T^μν. The Eckart approach defines a fluid four-velocity through J^μ = N u^μ, with the fluid velocity u^μ (u^2=1) and N is the invariant number density, i.e. the number density measured in the fluid rest frame. Now, the particle momentum p^μ may be split into contributions parallel and perpendicular to u^μ as p^μ = κ_p u^μ + Δ^μν p_ν . with Δ^μν≡η^μν - u^μ u^ν and κ_p ≡ u_μ p^μ. 
With this we have ⟨ p^μ⟩ = ⟨κ_p ⟩ u^μ + ⟨ p^μ - κ_p u^μ⟩ i.e. the vanishing of the second term means there is no particle diffusion current in the Eckart frame. The number density is defined as N=⟨κ_p ⟩ = J^μ u_μ = √(J^μ J_μ). By the primary definition we can also use u^μ = J^μ / √(J.J) to define the fluid velocity. Particle number current conservation ∂.J=0 yields the following equation: Ṅ + Nθ = 0, where θ = ∂_μ u^μ is the expansion scalar (four-divergence of the fluid velocity) and Ṅ = (u^μ∂_μ)N is the comoving or convective derivative. The energy momentum tensor can be expanded into its irreducible tensor components T^μν = ⟨ p^μ p^ν⟩ = e u^μ u^ν + u^μ H^ν + u^ν H^μ - pΔ^μν +π^μν , with energy density e=⟨κ_p^2⟩, energy diffusion current (equivalent to heat flow in the Eckart description) H^μ = ⟨κ_p Δ^μ_νp^ν⟩, hydrostatic pressure p = - 1/3Δ_μν⟨ p^μ p^ν⟩, and shear-stress (or viscous pressure) tensor π^μν = Δ^μν_αβ⟨ p^α p^β⟩, with the double-symmetric, traceless rank-4 projection operator orthogonal to u^μ, Δ^μν_αβ = 1/2( Δ^μ_αΔ^ν_β + Δ^μ_βΔ^ν_α) - Δ^μνΔ_αβ/Δ^λ_λ . Moreover, π^μ_μ = 0, u_μπ^μν=0 and u_μ H^μ = 0. Thus, the trace of the energy momentum tensor T^μ_μ = e-3p = ⟨ p^μ p_μ⟩ = ⟨ 1⟩ = ∫^3 p⃗/ϵ_p f. § REFERENCES 87 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Ivanov, Kotkin, and Serbo(2004)]Ivanov_QED_spin author author D. Y. Ivanov, author G. L. Kotkin, and author V. G. Serbo, title title Complete description of polarization effects in emission of a photon by an electron in the field of a strong laser wave, https://doi.org/10.1140/epjc/s2004-01861-x journal journal The European Physical Journal C - Particles and Fields volume 36, pages 127–145 (year 2004)NoStop [Ivanov, Kotkin, and Serbo(2005)]Ivanov_QED_spin2 author author D. Y. Ivanov, author G. L. Kotkin, and author V. G. Serbo, title title Complete description of polarization effects in e+ e- pair productionby a photon in the field of a strong laser wave, https://doi.org/10.1140/epjc/s2005-02125-1 journal journal The European Physical Journal C - Particles and Fields volume 40, pages 27–40 (year 2005)NoStop [King, Elkina, and Ruhl(2013)]King_PRA_2013 author author B. King, author N. Elkina, and author H. Ruhl, title title Photon polarization in electron-seeded pair-creation cascades, https://doi.org/10.1103/PhysRevA.87.042117 journal journal Phys. Rev. A volume 87, pages 042117 (year 2013)NoStop [Del Sorbo et al.(2017)Del Sorbo, Seipt, Blackburn, Thomas, Murphy, Kirk, and Ridgers]Sorbo_PRA author author D. Del Sorbo, author D. Seipt, author T. G. Blackburn, author A. G. R. Thomas, author C. D. Murphy, author J. G. Kirk, and author C. P. Ridgers, title title Spin polarization of electrons by ultraintense lasers, https://doi.org/10.1103/PhysRevA.96.043407 journal journal Phys. Rev. A volume 96, pages 043407 (year 2017)NoStop [Seipt et al.(2018)Seipt, Del Sorbo, Ridgers, and Thomas]Seipt_PRA_2018 author author D. Seipt, author D. Del Sorbo, author C. P. Ridgers, and author A. G. R. Thomas, title title Theory of radiative electron polarization in strong laser fields, https://doi.org/10.1103/PhysRevA.98.023417 journal journal Phys. Rev. A volume 98, pages 023417 (year 2018)NoStop [Seipt et al.(2019)Seipt, Del Sorbo, Ridgers, and Thomas]Seipt_PRA_2019 author author D. Seipt, author D. Del Sorbo, author C. P. 
Tracking Tensor Ring Decompositions of Streaming Tensors

Yajie Yu (College of Mathematics and Statistics, Chongqing University, Chongqing, 401331, P.R. China; [email protected]) and Hanyu Li (College of Mathematics and Statistics, Chongqing University, Chongqing, 401331, P.R. China; [email protected] or [email protected]). The work is supported by the National Natural Science Foundation of China (No. 11671060) and the Natural Science Foundation of Chongqing, China (No. cstc2019jcyj-msxmX0267).

Tensor ring (TR) decomposition is an efficient approach to discover the hidden low-rank patterns of higher-order tensors, and streaming tensors are becoming highly prevalent in real-world applications. In this paper, we investigate how to track TR decompositions of streaming tensors. An efficient algorithm is first proposed. Then, based on this algorithm and randomized techniques, we present a randomized streaming TR decomposition. The proposed algorithms make full use of the structure of TR decomposition, and the randomized version can allow any sketching type. Theoretical results on sketch size are provided. In addition, complexity analyses for the obtained algorithms are also given. We compare our proposals with the existing batch methods using both real and synthetic data. Numerical results show that they have better performance in computing time while maintaining similar accuracy.

Mathematics Subject Classification: 15A69, 68W20

§ INTRODUCTION Tensor ring (TR) decomposition <cit.> is an important tool for higher-order data analysis. It decomposes an Nth-order tensor into a cyclic interconnection of N 3rd-order tensors, and hence has the advantage of circular dimensional permutation invariance. Specifically, for a tensor X∈R^I_1 × I_2 ×⋯× I_N, it has the TR format X(i_1, ⋯, i_N) = tr(G_1(i_1) G_2(i_2) ⋯G_N(i_N)) = tr(∏_n=1^N G_n(i_n)), where G_n(i_n) = G_n(:,i_n,:) ∈R^R_n × R_n+1 is the i_n-th lateral slice of the core tensor (TR-core) G_n ∈R^R_n × I_n × R_n+1. Note that a slice is a 2nd-order section, i.e., a matrix, of a tensor obtained by fixing all the tensor indices but two. The sizes of the TR-cores, i.e., R_k with k=1,⋯,N and R_N+1=R_1, are called TR-ranks. Additionally, we use the notation TR( {G_n}_n=1^N ) to denote the TR decomposition of a tensor. In contrast to the two most popular tensor decompositions, i.e., CANDECOMP-PARAFAC (CP) and Tucker decompositions <cit.>, TR decomposition avoids the NP-hard problem of computing the CP-rank and the curse of dimensionality due to the core tensor in Tucker decomposition. These advantages stem from the algorithms for finding TR-ranks being stable and from the number of parameters of TR decomposition scaling linearly with the tensor order N. Furthermore, it is also feasible and convenient to directly implement some algebra operations in TR format <cit.>, e.g., addition, dot product, norm, matrix-by-vector, etc., which is conducive to significantly enhancing the computational efficiency. Therefore, TR decomposition has already been utilized effectively in scientific computing to tackle intractable higher-dimensional and higher-order problems. This makes the problem of fitting TR( {G_n}_n=1^N ) to a tensor X increasingly important.
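As an illustration of the TR format just defined, the following Python/NumPy sketch (ours; the dimensions and TR-ranks are arbitrary toy values) reconstructs a full tensor from given TR-cores by evaluating the trace of the product of lateral slices entry by entry.

import numpy as np

def tr_full(cores):
    # cores[n] has shape (R_n, I_n, R_{n+1}) with R_{N+1} = R_1;
    # X(i_1,...,i_N) = trace(G_1(i_1) G_2(i_2) ... G_N(i_N))
    dims = [G.shape[1] for G in cores]
    X = np.empty(dims)
    for idx in np.ndindex(*dims):
        M = np.eye(cores[0].shape[0])
        for G, i in zip(cores, idx):
            M = M @ G[:, i, :]
        X[idx] = np.trace(M)
    return X

rng = np.random.default_rng(0)
ranks, dims = [2, 3, 4, 2], [5, 6, 7]              # R_1, R_2, R_3, R_4 = R_1 and I_1, I_2, I_3
cores = [rng.standard_normal((ranks[n], dims[n], ranks[n + 1])) for n in range(3)]
X = tr_full(cores)
print(X.shape)                                     # (5, 6, 7)

The entry-wise loop is only meant to mirror the definition; it is far from the most efficient way to form the full tensor.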
The above fitting problem can be written as the following minimization problem: min_G_1, ⋯, G_N‖TR( {G_n}_n=1^N ) - X‖_F, where ‖·‖_F denotes the Frobenius norm of a matrix or tensor. One standard computational method for this problem is to prescribe fixed TR-ranks first, and then to determine the decomposition via alternating least squares (ALS). It is usually referred to as TR-ALS. Another classical method is to prescribe a fixed target accuracy first, and then to compute the decomposition via the singular value decomposition. See <cit.> for the details on these two methods. Moreover, with the rapid emergence of large-scale problems, the above two methods have been extended to randomized versions <cit.>. In addition, many other algorithms have also been proposed and developed for TR decomposition; see, e.g., <cit.>. However, all of these works are for the case where the whole tensor data X is available and static. As we know, in many practical applications, only fragmentary data sets are initially available, with new data sets becoming available at the next time step or appearing continuously over time. Live video broadcasts, surveillance videos, network flows, and social media data are examples. Such tensors are called streaming tensors or incremental/online tensors <cit.>. Developing streaming algorithms for TR decomposition, i.e., tracking TR decompositions, of such streaming tensors is both fascinating and necessary. This is because, when the initial decomposition is already known, it is more expedient to update the streaming decomposition than to recalculate the entire decomposition. In previous research, some streaming methods have been successively presented for other tensor decompositions; see, e.g., <cit.> for CP decomposition, <cit.> for Tucker decomposition, and <cit.> for tensor train (TT) decomposition <cit.>. A more comprehensive and detailed overview can be found in <cit.>. However, streaming algorithms related to (instead of aiming at) TR decomposition have only been studied in several papers <cit.>. Specifically, He and Atia <cit.> developed a patch-tracking-based streaming TR completion framework for visual data recovery and devised a streaming algorithm that can update the latent TR-cores and complete the missing entries of patch tensors. Yu et al. <cit.> proposed an online TR subspace learning and imputation model by formulating exponentially weighted least squares with Frobenius-norm regularization of the TR-cores; alternating recursive least squares and stochastic gradient algorithms were employed to solve the proposed model. Huang et al. <cit.> provided a multi-aspect streaming TR completion method. However, none of these works fully exploits the special structure of TR decomposition. It is shown in <cit.> that exploiting this structure can significantly improve the efficiency of the related algorithms. Therefore, in this paper, we focus on developing efficient ALS-based streaming algorithms for TR decomposition by making full use of its structure. Specifically, inspired by the work on CP decomposition in <cit.>, we first propose an efficient streaming algorithm that can incrementally track TR decompositions of streaming tensors of any order. Then, motivated by the ideas in <cit.>, we derive a randomized streaming TR decomposition.
Three randomized strategies, i.e., uniform sampling, leverage-based sampling, and the Kronecker sub-sampled randomized Fourier transform (KSRFT), are used to reduce the dimension of the coefficient and unfolding matrices in the ALS subproblems, which greatly reduces the computing time and memory usage. Moreover, these strategies can also avoid forming the full coefficient and sketching matrices and avoid matrix multiplications between large matrices. The rest of this paper is organized as follows. <Ref> first gives some tensor notations and basic operations, and then briefly reviews the algorithms for TR decomposition. In <Ref>, we present our streaming TR decomposition and its randomized variant as well as three sketching techniques. The evaluation of the computational performance of the proposed algorithms is reported in <Ref>. Finally, <Ref> makes a conclusion and outlines some further directions. The omitted proofs and the specific algorithms based on different sketches are given in <Ref> and <Ref>, respectively. § PRELIMINARIES AND RELATED WORKS For convenience in the following presentation, we denote [I] ≔ { 1, ⋯, I } for a positive integer I, and set i_1 i_2 ⋯ i_N ≔ 1 + ∑_n=1^N(i_n-1)∏_j=1^n-1I_j for the indices i_1 ∈ [I_1], ⋯, i_N ∈ [I_N]. Three unfolding matrices of a tensor X∈R^I_1 × I_2 ⋯× I_N are defined element-wise: Classical Mode-n Unfolding: X_(n)(i_n, i_1 ⋯ i_n-1 i_n+1⋯ i_N)=X(i_1, ⋯, i_N); Mode-n Unfolding: X_[n](i_n, i_n+1⋯ i_N i_1 ⋯ i_n-1)=X(i_1, ⋯, i_N); n-Unfolding: X_<n>(i_1 ⋯ i_n, i_n+1⋯ i_N)=X(i_1, ⋯, i_N). They are of size I_n ×∏_j≠n I_j, I_n ×∏_j≠n I_j, and ∏_j=1^n I_j ×∏_j=n+1^N I_j, respectively. The tensor-times-matrix (TTM) multiplication of a tensor X∈R^I_1 × I_2 ⋯× I_N and a matrix U∈R^J × I_n is a tensor of size I_1 ×⋯× I_n-1× J × I_n+1×⋯× I_N denoted by X×_n U and defined element-wise via (X×_n U)(i_1, ⋯, i_n-1, j, i_n+1, ⋯, i_N) = ∑_i_n = 1^I_nX(i_1, ⋯, i_n, ⋯, i_N) U(j, i_n). Multiplying an Nth-order tensor by multiple matrices on distinct modes is known as Multi-TTM. In particular, multiplying an Nth-order tensor by the matrices U_j with j=1,⋯, N in each mode gives Y = X×_1 U_1 ×_2 U_2 ⋯×_N U_N, whose mode-n unfolding can be written as Y_[n] = U_n X_[n]( U_n-1⊗⋯⊗U_1⊗U_N⊗⋯⊗U_n+1)^⊺. We now detail the TR-ALS method mentioned in <Ref>, which is a popular algorithm for TR decomposition. To achieve this, we need the following definition. Let X = TR( {G_n}_n=1^N ) ∈R^I_1 × I_2 ⋯× I_N. The subchain tensor G^≠n∈R^R_n+1×∏_j≠n I_j × R_n is the merging of all TR-cores except the n-th one and can be written slice-wise via G^≠n(i_n+1⋯ i_N i_1 ⋯ i_n-1)=∏_j=n+1^NG_j(i_j) ∏_j=1^n-1G_j(i_j). Thus, according to Theorem 3.5 in <cit.>, the objective in (<ref>) can be rewritten as the following N subproblems: G_n(2) = argmin_G_n(2) (1/2)‖G_[2]^≠nG_n(2)^⊺-X_[n]^⊺‖_F, n=1,⋯, N. The so-called TR-ALS is the method that keeps all TR-cores fixed except the n-th one and finds the solution to the LS problem (<ref>) with respect to it. We summarize the method in <Ref>. However, TR-ALS does not fully utilize the structure of the coefficient matrix G_[2]^≠n. Yu and Li <cit.> fixed this issue recently and proposed a more efficient algorithm called TR-ALS-NE, which is the basis of our first algorithm in the present paper. We first list the required definitions and a property before detailing the algorithm in <Ref>.
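Before turning to those definitions, the unfoldings and the ALS subproblem introduced above can be made concrete with a small sketch (ours, in Python/NumPy, reusing the tr_full helper from the earlier snippet; all sizes are toy values, and forming (G^≠n)_[2] explicitly by looping over multi-indices is done only for clarity).

import numpy as np

def mode_n_unfold(X, n):
    # Mode-n unfolding X_[n]: rows indexed by i_n, columns by (i_{n+1},...,i_N,i_1,...,i_{n-1}),
    # with the first listed index varying fastest (the little-endian convention used above).
    N = X.ndim
    perm = [n] + list(range(n + 1, N)) + list(range(0, n))
    return np.transpose(X, perm).reshape(X.shape[n], -1, order='F')

def subchain_unf(cores, n):
    # Rows of (G^{!=n})_[2]: for each multi-index (i_{n+1},...,i_N,i_1,...,i_{n-1}), first index
    # fastest, store prod_{j>n} G_j(i_j) prod_{j<n} G_j(i_j) flattened so that the column index
    # matches that of the classical mode-2 unfolding G_n(2).
    N = len(cores)
    order = list(range(n + 1, N)) + list(range(0, n))
    Rnp1 = cores[n].shape[2]
    dims = [cores[j].shape[1] for j in order]
    rows = []
    for idx_rev in np.ndindex(*dims[::-1]):        # np.ndindex varies the last index fastest ...
        idx = idx_rev[::-1]                        # ... so reverse it to make the first mode fastest
        M = np.eye(Rnp1)
        for j, i in zip(order, idx):
            M = M @ cores[j][:, i, :]
        rows.append(M.reshape(-1))
    return np.array(rows)

# One ALS update of core n: solve min || (G^{!=n})_[2] G_n(2)^T - X_[n]^T ||_F by least squares.
rng = np.random.default_rng(0)
ranks, dims = [2, 3, 2, 2], [4, 5, 6]
cores = [rng.standard_normal((ranks[j], dims[j], ranks[j + 1])) for j in range(3)]
X = tr_full(cores)                                 # exact TR tensor
n = 1
cores[n] = rng.standard_normal(cores[n].shape)     # forget core n, then recover it
A, B = subchain_unf(cores, n), mode_n_unfold(X, n).T
Gn2 = np.linalg.lstsq(A, B, rcond=None)[0].T
cores[n] = Gn2.reshape(dims[n], ranks[n + 1], ranks[n]).transpose(2, 0, 1)
print(np.linalg.norm(tr_full(cores) - X) / np.linalg.norm(X))   # ~ 1e-15: exact recovery

TR-ALS cycles such an update over n = 1, ⋯, N; TR-ALS-NE avoids forming (G^≠n)_[2] explicitly by assembling the normal equations through the contractions in the proposition below.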
The outer product of two tensors A∈R^I_1 ×⋯× I_N and B∈R^J_1 ×⋯× J_M is a tensor of size I_1 ×⋯× I_N × J_1 ×⋯× J_M denoted by A∘B and defined element-wise via (A∘B)(i_1, ⋯, i_N, j_1, ⋯, j_M) = A(i_1, ⋯, i_N) B(j_1, ⋯, j_M). The general contracted tensor product of two tensors A∈R^I_1 × J × R_1 × K and B∈R^J × I_2 × K × R_2 is a tensor of size I_1 × I_2 × R_1 × R_2 denoted by A×_2,4^1,3B and defined element-wise via (A×_2,4^1,3B)(i_1, i_2, r_1, r_2) = ∑_j,kA(i_1, j, r_1, k) B(j, i_2, k, r_2). The mode-2 subchain product of two tensors A∈R^I_1 × J_1 × K and B∈R^K × J_2 × I_2 is a tensor of size I_1 × J_1 J_2 × I_2 denoted by A⊠_2 B and defined as (A⊠_2 B)(j_1 j_2) = A(j_1)B(j_2). <cit.> Let A∈R^I_1 × J × K_1, B∈R^K_1 × R × L_1, C∈R^I_2 × J × K_2 and D∈R^K_2 × R × L_2 be 3rd-order tensors. Then (A⊠_2 B)_[2]^⊺ (C⊠_2 D)_[2] = ( (∑_r=1^RB(r)^⊺∘D(r)^⊺) ×_2,4^1,3 (∑_j=1^JA(j)^⊺∘C(j)^⊺) )_<2>. As mentioned in <Ref>, randomized methods have been proposed for TR-ALS <cit.>. Among them, the most relevant algorithms to this paper are TR-ALS-Sampled <cit.> and TR-KSRFT-ALS <cit.>. The sampling techniques of these two algorithms will be detailed in <Ref> after introducing an additional definition. The mode-2 slices-Hadamard product of two tensors A and B is a tensor of size I_1 × J × I_2 denoted by A⊛_2 B and defined as (A⊛_2 B)(j) = A(j)B(j). § PROPOSED METHODS We first propose a streaming algorithm for tracking TR decomposition, and then present its randomized variant. After that, three different sketching techniques, based on uniform sampling, leverage-based sampling, and KSRFT, are discussed. §.§ Streaming TR Decomposition Let X^old∈R^I_1 ×⋯× I_N-1× t^old with the N-th mode being time, and let its TR decomposition be TR( {G_n^old}_n=1^N ). Now assume that, at the time step τ, a temporal slice X^new∈R^I_1 ×⋯× I_N-1× t^new is added to X^old to form a tensor X∈R^I_1 ×⋯× I_N-1× (t^old + t^new), where t^old≫ t^new. We are interested in finding the TR decomposition TR( {G_n}_n=1^N ) of X with the help of TR( {G_n^old}_n=1^N ) and the existing intermediate information. In the following, we give the detailed updating formulations. Update Temporal Mode. We first consider the update for the TR-core of the temporal mode, i.e., G_N, by fixing the other TR-cores. Specifically, by (<ref>), we have G_N(2) ←argmin_G_N(2) (1/2)‖X_[N] - G_N(2) (G_[2]^≠N )^⊺‖_F = argmin_G_N(2) (1/2)‖[ X_[N]^old; X_[N]^new ] - [ G_N(2)^(1); G_N(2)^(2) ] (G_[2]^≠N)^⊺‖_F. With <Ref> and the fact from <cit.> that G^≠n = G_n+1⊠_2 ⋯⊠_2 G_N⊠_2 G_1⊠_2 ⋯⊠_2 G_n-1, it is clear that G_N(2)←[ X_[N]^oldG_[2]^≠N( (G_[2]^≠N)^⊺G_[2]^≠N)^†; X_[N]^newG_[2]^≠N( H^≠N_<2>)^† ] = [ G_N(2)^old; X_[N]^newG_[2]^≠N( H^≠N_<2>)^† ] = [ G_N(2)^old; G_N(2)^new ], where H^≠N = Z_N-1×_2,4^1,3⋯×_2,4^1,3Z_1 with Z_j = ∑_i_j=1^I_jG_j(i_j)^⊺∘G_j(i_j)^⊺. Thus, G_N(2)^new←X_[N]^newG_[2]^≠N( H^≠N_<2>)^†, G_N(2)←[ G_N(2)^old; G_N(2)^new ]. Update Non-temporal Modes. For each non-temporal mode n ∈ [N-1], we now consider the update of G_n by fixing the remaining TR-cores; a short code sketch of the resulting streaming updates (together with the temporal update above) is given below.
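A minimal sketch of one full STR time step follows (ours, in Python/NumPy, reusing mode_n_unfold and subchain_unf from the previous snippet). It implements the temporal update just derived together with the accumulator-based update of the non-temporal cores that is derived in the next paragraphs, with the accumulators denoted P_n and Q_n there; for clarity the sketch forms (G^≠n)_[2] explicitly and uses pseudoinverses, whereas the actual algorithm exploits the structured contractions above to avoid this.

import numpy as np

def init_accumulators(cores, X_old):
    # P_n = X_[n]^old (G^{!=n}_old)_[2] and Q_n = (G^{!=n}_old)_[2]^T (G^{!=n}_old)_[2], n = 1..N-1
    N = len(cores)
    P, Q = [], []
    for n in range(N - 1):
        A = subchain_unf(cores, n)
        P.append(mode_n_unfold(X_old, n) @ A)
        Q.append(A.T @ A)
    return P, Q

def str_step(cores, P, Q, X_new):
    # One streaming step: append t_new temporal slices X_new and update all TR-cores.
    N = len(cores)
    R1, RN = cores[0].shape[0], cores[N - 1].shape[0]
    t_new = X_new.shape[-1]
    # Temporal mode: G_N(2)^new = X_[N]^new G_[2]^{!=N} (H^{!=N}_<2>)^+,
    # with the Gram matrix H^{!=N}_<2> formed here as an explicit product.
    A = subchain_unf(cores, N - 1)
    G_new2 = mode_n_unfold(X_new, N - 1) @ A @ np.linalg.pinv(A.T @ A)
    G_N_new = G_new2.reshape(t_new, R1, RN).transpose(2, 0, 1)
    cores[N - 1] = np.concatenate([cores[N - 1], G_N_new], axis=1)
    # Non-temporal modes: P_n += X_[n]^new (G^{!=n}_new)_[2], Q_n += (G^{!=n}_new)_[2]^T (...),
    # where the "new" subchain uses only the newly computed temporal slices.
    cores_new = list(cores)
    cores_new[N - 1] = G_N_new
    for n in range(N - 1):
        A_new = subchain_unf(cores_new, n)
        P[n] = P[n] + mode_n_unfold(X_new, n) @ A_new
        Q[n] = Q[n] + A_new.T @ A_new
        Gn2 = P[n] @ np.linalg.pinv(Q[n])
        Rn, In, Rnp1 = cores[n].shape
        cores[n] = Gn2.reshape(In, Rnp1, Rn).transpose(2, 0, 1)
        cores_new[n] = cores[n]               # later subchains use the freshly updated core
    return cores, P, Q

# Typical usage: P, Q = init_accumulators(cores, X_old); then, at every time step,
# cores, P, Q = str_step(cores, P, Q, X_new).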
For the non-temporal update, according to (<ref>), we have the following normal equation: 0 = X_[n]( G^≠n)_[2] - G_n(2)( G^≠n)_[2]^⊺( G^≠n)_[2] = P_n - G_n(2) Q_n, where P_n ≔X_[n]( G^≠n)_[2] and Q_n ≔( G^≠n)_[2]^⊺( G^≠n)_[2]. Splitting the temporal mode into its old and new parts gives 0 = [ X_[n]^old X_[n]^new ][ (G^≠n_old)_[2]; (G^≠n_new)_[2] ] - G_n(2)[ (G^≠n_old)_[2]^⊺ (G^≠n_new)_[2]^⊺ ][ (G^≠n_old)_[2]; (G^≠n_new)_[2] ] = (P_n^old + X_[n]^new(G^≠n_new)_[2]) - G_n(2)( Q_n^old + ( H^≠n_new)_<2>), where G^≠n_old = (G_n+1⊠_2⋯⊠_2G_N-1) ⊠_2 G_N^old⊠_2 (G_1⊠_2⋯⊠_2G_n-1), G^≠n_new = (G_n+1⊠_2⋯⊠_2G_N-1) ⊠_2 G_N^new⊠_2 (G_1⊠_2⋯⊠_2G_n-1), and H^≠n_new = ( Z_n-1×_2,4^1,3⋯×_2,4^1,3Z_1) ×_2,4^1,3Z_N^new×_2,4^1,3( Z_N-1×_2,4^1,3⋯×_2,4^1,3Z_n+1) with Z_N^new =∑_i_N=1^I_NG^new_N(i_N)^⊺∘G^new_N(i_N)^⊺. Note that, to derive (<ref>), we used <Ref>, (<ref>), and the permutation matrix Π_n defined by (X_[n]Π_n)(:, i_1 ⋯ i_n-1 i_n+1⋯ i_N) = X_[n](:, i_n+1⋯ i_N i_1 ⋯ i_n-1), which satisfies X_[n]Π_nΠ_n^⊺( G^≠n)_[2] = [ X_[n]^old X_[n]^new ][ (G^≠n_old)_[2]; (G^≠n_new)_[2] ] and ( G^≠n)_[2]^⊺Π_nΠ_n^⊺( G^≠n)_[2] = [ (G^≠n_old)_[2]^⊺ (G^≠n_new)_[2]^⊺ ][ (G^≠n_old)_[2]; (G^≠n_new)_[2] ]. Thus, we achieve the update for G_n as follows: P_n←P_n^old + X_[n]^new (G^≠n_new)_[2], Q_n←Q_n^old + ( H^≠n_new)_<2> , G_n(2)←P_nQ_n^†. The whole process for streaming TR decomposition is summarized in <Ref>, from which we find that the information of the previous decomposition can be stored in the complementary matrices P_n and Q_n, and hence the expensive computation can be avoided and the TR-cores can be efficiently updated in an incremental way. With regard to the initialization of streaming TR decomposition, i.e., <Ref> in <Ref>, we can choose any feasible technique. Inspired by the experimental results in <cit.>, we recommend running the corresponding offline version of <Ref> to find the initial values. However, in the specific numerical experiments later in this paper, we use the same initial values for the various algorithms for convenience; see the detailed description of the experiments in <Ref>. The above explanation also applies to <Ref> below. From the derivation of <Ref>, it can be seen that the structure of the coefficient matrices in the subproblems is well used; that is, <Ref> is employed to reduce the computational cost. Hence, the algorithm is more efficient than applying TR-ALS directly. More descriptions of, and comparisons regarding, the advantages of using <Ref> can be found in <cit.>. As we know, TR decomposition generalizes the famous TT decomposition by relaxing some constraints <cit.>. So, with a slight change, <Ref> is also applicable to TT decomposition. It is worth emphasizing that the corresponding method is very different from the ones in <cit.> mentioned in <Ref>; the main difference still lies in the fact that we make full use of the structure introduced before. §.§ Randomized Streaming TR Decomposition We now employ randomized sketching techniques to improve the efficiency of streaming TR decomposition. That is, we consider the following sketched subproblems for the streaming tensor, with sketching matrices Ψ_n∈R^m ×∏_j≠n I_j: min_G_n(2)‖Ψ_n G_[2]^≠nG_n(2)^⊺-Ψ_n X_[n]^⊺‖_F, n=1,⋯, N. A randomized streaming TR decomposition will be proposed. In the following, we give the specific updating rules. Update Temporal Mode. By dividing the corresponding terms into two parts, from (<ref>), we have G_N(2) ←argmin_G_N(2) (1/2)‖[ X_[N]^old (Ψ_N^old)^⊺; X_[N]^new (Ψ_N^new)^⊺ ] - [ G_N(2)^(1)( Ψ_N^oldG_[2]^≠N)^⊺; G_N(2)^(2)( Ψ_N^newG_[2]^≠N)^⊺ ]‖_F, where Ψ_N^old∈R^m ×∏_j≠N I_j and Ψ_N^new∈R^m ×∏_j≠N I_j.
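To build some intuition for the effect of the sketching matrices Ψ_n on such least-squares subproblems, the following self-contained toy comparison (ours; random data, with uniform row sampling as one concrete choice of Ψ_n and the rescaling omitted) solves a tall LS problem both exactly and from a small sample of its rows.

import numpy as np

rng = np.random.default_rng(1)
J, r, In, m = 20000, 12, 30, 1000         # number of rows, R_n*R_{n+1}, I_n, sketch size (toy values)
A = rng.standard_normal((J, r))            # stands in for G_[2]^{!=n}
B = A @ rng.standard_normal((r, In)) + 0.01 * rng.standard_normal((J, In))   # stands in for X_[n]^T
G_full = np.linalg.lstsq(A, B, rcond=None)[0]             # exact LS solution (= G_n(2)^T)
idx = rng.integers(0, J, size=m)           # uniform sampling with replacement
G_skch = np.linalg.lstsq(A[idx], B[idx], rcond=None)[0]   # sketched LS solution
res_full = np.linalg.norm(A @ G_full - B)
res_skch = np.linalg.norm(A @ G_skch - B)
print(res_skch / res_full)                 # close to 1 once m is large enough

For the structured coefficient matrices G_[2]^≠n, the point of the techniques discussed below is that such sampled rows can be formed directly from sampled slices of the TR-cores, without ever building the full subchain.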
Returning to the sketched temporal subproblem, it is clear that G_N(2) ←[ X_[N]^old (Ψ_N^old)^⊺( ( Ψ_N^oldG_[2]^≠N)^⊺)^†; X_[N]^new (Ψ_N^new)^⊺( ( Ψ_N^newG_[2]^≠N)^⊺)^† ] = [ G_N(2)^old; X_[N]^new (Ψ_N^new)^⊺( ( Ψ_N^newG_[2]^≠N)^⊺)^† ] = [ G_N(2)^old; G_N(2)^new ]. Thus, G_N(2)^new←X_[N]^new (Ψ_N^new)^⊺( ( Ψ_N^newG_[2]^≠N)^⊺)^†, G_N(2)←[ G_N(2)^old; G_N(2)^new ]. Usually, the sketching matrix Ψ_N^new is not the same as Ψ_N^old, which implies that X_[N]^new will be sketched at each time step. Update Non-temporal Modes. As done for streaming TR decomposition in <Ref> and similarly to the above deduction, we have P_n←P_n^old + X_[n]^new (Ψ_n^new)^⊺Ψ_n^new(G^≠n_new)_[2], Q_n←Q_n^old + (G^≠n_new)_[2]^⊺ (Ψ_n^new)^⊺Ψ_n^new(G^≠n_new)_[2], G_n(2)←P_nQ_n^†, where Ψ_n^old∈R^m ×∏_j≠n I_j and Ψ_n^new∈R^m ×∏_j≠n I_j are sketching matrices and the other notations are the same as those in (<ref>), (<ref>), and (<ref>). Unlike the case of streaming TR decomposition in <Ref>, here the calculation of the Gram matrix (G^≠n_new)_[2]^⊺ (Ψ_n^new)^⊺Ψ_n^new(G^≠n_new)_[2] is quite cheap. So, we do not consider its structure anymore, though it still exists. Instead, we mainly focus on how to compute Ψ_n^new(G^≠n_new)_[2] fast by using the structure of (G^≠n_new)_[2] and choosing a suitable Ψ_n^new; see <Ref> below. The whole process for randomized streaming TR decomposition is summarized in <Ref>, which shows that, as carried out by <Ref> or <Ref>, different sketching techniques can be used to compute the sketched subchain and input tensors. Moreover, when forming the aforementioned sketched tensors, the un-updated TR-cores and fibers do not need to be sketched again. The corresponding detailed algorithms are presented in <Ref>. Note that, in this case, the theoretical guarantees given in <Ref> still apply. §.§ Different Sketching Techniques We mainly consider three practical sketching techniques: uniform sampling, leverage-based sampling, and KSRFT. Uniform Sampling. In this case, Ψ_n = D_nS_n, where S_n∈R^m × J_n, with J_n = I_1 I_2⋯ I_N-1 if n = N and J_n = I_1 I_2⋯ I_n-1 I_n+1⋯ I_N-1 t^new if n ≠ N, is a sampling matrix, i.e., (S_n)_ij = 1 if the j-th row is chosen in the i-th independent random trial (each row being chosen with probability 1/J_n) and (S_n)_ij = 0 otherwise, and D_n∈R^m × m is a diagonal rescaling matrix with i-th diagonal entry (D_n)_ii = √(J_n/m). In practice, the rescaling matrix can be ignored without affecting the performance of the algorithms. Furthermore, the sampling can be carried out on the TR-cores as done in <Ref>. The detailed algorithm is summarized in <Ref> in <Ref>, and the theoretical guarantee is as follows. Let Ψ_n be a uniform sampling matrix as defined above and G̃_n(2) ≔argmin_G_n(2)‖X_[n]Ψ_n^⊺ - G_n(2)(Ψ_nG_[2]^≠n)^⊺‖_F. If m ≥( 2 γ R_n R_n+1/ε) max[ 48/εln( 96 γ R_n R_n+1/(ε^2 √(δ))), 1/δ] with ε∈ (0,1), δ∈ (0,1), and γ>1, then the following inequality holds with a probability of at least 1-δ: ‖X_[n] - G̃_n(2)(G_[2]^≠n)^⊺‖_F ≤ (1+ε) min_G_n(2)‖X_[n] - G_n(2)(G_[2]^≠n)^⊺‖_F. The γ in <Ref> bounds the row norms of the matrix of left singular vectors of the coefficient matrix G_[2]^≠n (see <Ref>). It can be seen that the more inhomogeneous the coefficient matrix is, the larger γ is, which makes uniform sampling less effective. In this case, importance sampling is a more reasonable choice. Leverage-based Sampling. Two definitions are first introduced. Let A∈R^m× n with m > n, and let Q∈R^m × n be any orthogonal basis for the column space of A. The leverage score of the i-th row of A is given by ℓ_i(A) = ‖Q(i,:)‖_2^2. Let A∈R^m× n with m > n.
We say a probability distribution q = [q_1, ⋯, q_m ]^⊺ is a leverage-based probability distribution for A on [m] if q_i ≥β p_i with p_i = ℓ_i(A)/n, 0<β≤ 1, and ∀ i ∈ [m]. Computing the leverage scores of G^≠n_[2]∈R^J_n × R_nR_n+1 directly is expensive. Fortunately, by <cit.>, we can estimate them using the leverage scores related to the TR-cores G_1, ⋯, G_n-1, G_n+1, ⋯, G_N. For each n ∈ [N], let p_n ∈R^I_n be the probability distribution on [I_n] defined element-wise via p_n(i_n) = ℓ_i_n(G_n(2))/rank(G_n(2)), let p^≠n be the probability distribution on [J_n] defined element-wise via p^≠n(i) = ℓ_i(G^≠n_[2])/rank(G^≠n_[2]), let q^≠n be the vector defined element-wise via q^≠n(i_n+1⋯ i_N i_1⋯ i_n-1) = ∏_m=1, m≠n^N p_m(i_m), and let β_n be a constant as in <Ref> defined as β_n = ( R_nR_n+1∏_m=1, m∉{n, n+1}^N R_m^2 )^-1. Then for each n ∈ [N], q^≠n(i) ≥β_n p^≠n(i) for all i = i_n+1⋯ i_N i_1⋯ i_n-1∈[J_n], and hence q^≠n is a leverage-based probability distribution for G^≠n_[2] on [J_n]. With this lemma, we can define the sampling matrix in (<ref>) as follows: (S_n)_ij = 1 if the j-th row is chosen in the i-th independent random trial with probability q^≠n(j), and (S_n)_ij = 0 otherwise; the i-th diagonal entry of the diagonal rescaling matrix D_n in (<ref>) is now (D_n)_ii = 1/√(m q^≠n(j)), where j is the row selected in the i-th trial. As above, the rescaling matrix can be ignored and the sampling can be carried out on the TR-cores as done in <Ref>. The detailed algorithm is summarized in <Ref> in <Ref> and the theoretical guarantee is given in the following. Let Ψ_n be a leverage-based sampling matrix as defined above and G̃_n(2) ≔argmin_G_n(2)‖X_[n]Ψ_n^⊺ - G_n(2)(Ψ_nG_[2]^≠n)^⊺‖_F. If m > ( ∏_j=1^N R_j^2 ) max[ 16/(3(√(2)-1)^2) ln( 4R_n R_n+1/δ), 4/(εδ) ] with ε∈ (0,1) and δ∈ (0,1), then the following inequality holds with a probability of at least 1-δ: ‖X_[n] - G̃_n(2)(G_[2]^≠n)^⊺‖_F ≤ (1+ε) min_G_n(2)‖X_[n] - G_n(2)(G_[2]^≠n)^⊺‖_F. KSRFT. The KSRFT is defined as Ψ = √(∏_j=1^N I_j/m) S(⊗_j=1^N (F_j D_j)), where * S∈R^m ×∏_j=1^N I_j: m rows, drawn uniformly with replacement, of the ∏_j=1^N I_j ×∏_j=1^N I_j identity matrix, i.e., a uniform sampling matrix; * F_j ∈C^I_j × I_j: the (unitary) discrete Fourier transform (DFT) of dimension I_j; * D_j ∈R^I_j × I_j: a diagonal matrix with independent random diagonal entries drawn uniformly from {+1,-1} (also called a random sign-flip operator). <Ref> shows the method for calculating the sketched subchain and input tensors based on KSRFT, which is summarized from <cit.>. Considering that the KSRFT transforms the original TR-ALS subproblems into complex ones, the update of the TR-cores needs the following slight change: G_N(2)^new←Re( X̂_S[N]^new ( ( Ĝ_S[2]^≠N)^H)^†), G_N(2)←[ G_N(2)^old; G_N(2)^new ], P_n←P_n^old + Re( X̂_S[n]^new conj((Ĝ^≠n_new)_S[2]) ), Q_n←Q_n^old + Re( (Ĝ^≠n_new)_S[2]^H (Ĝ^≠n_new)_S[2] ), G_n(2)←P_n Q_n^†, where X̂ = X×_1 (F_1 D_1) ×_2 (F_2 D_2) ⋯×_N (F_N D_N), Ĝ_n = G_n×_2 (F_n D_n) for n=1,⋯,N, Re(·) and conj(·) denote the real part and the complex conjugate of the entries of a matrix, respectively, and ^H denotes the conjugate transpose. The detailed algorithm is summarized in <Ref> in <Ref> and the theoretical guarantee is given in <Ref>. Let Ψ_n be a KSRFT as defined in <Ref> and G̃_n(2) ≔argmin_G_n(2)‖X_[n]Ψ_n^⊺ - G_n(2)(Ψ_nG_[2]^≠n)^⊺‖_F.
If m ≥ xy with x = ε^-1 (R_nR_n+1) log^2N-3( (R_nR_n+1/ε)^R_nR_n+1/2 (N-1)/η), y = log^4( (R_nR_n+1/ε)^1/2log^N-1( (R_nR_n+1/ε)^R_nR_n+1/2 (N-1)/η) ) log∏_j≠n I_j, where ε∈ (0,1) is such that ∏_j≠n I_j ≲ 1/ε^R_nR_n+1 with R_nR_n+1≥ 2, δ∈ (0,1), and η∈ (0,1), then the following inequality holds with a probability of at least 1-η-2^-Ω(log∏_j≠n I_j): ‖X_[n] - G̃_n(2)(G_[2]^≠n)^⊺‖_F ≤ (1+ε) min_G_n(2)‖X_[n] - G_n(2)(G_[2]^≠n)^⊺‖_F. Furthermore, if the assumption (N-1)/η≤ (R_nR_n+1/ε)^R_nR_n+1/2 also holds, the bound on the sketch size can be changed to m ≥ε^-1 (R_nR_n+1)^2(N-1)log^2N-3(R_nR_n+1/ε) log^4( (R_nR_n+1/ε) log(R_nR_n+1/ε)) log∏_j≠n I_j. § NUMERICAL EXPERIMENTS In this section, we consider the numerical performance of our STR and rSTR. (We use rSTR-U, rSTR-L, and rSTR-K to denote rSTR with uniform sampling, leverage-based sampling, and KSRFT, respectively.) Specifically, we first examine their effectiveness and efficiency on two real-world datasets. Then, based on an investigation with synthetic tensors, we show their performance from various perspectives in greater detail. Six baselines have been chosen as competitors to evaluate the performance in our experiments: * TR-ALS (Cold) <cit.>: an implementation of TR-ALS without special initialization. * TR-ALS (Hot): the same as above, but the TR decomposition from the last time step is used as the initialization for decomposing the current tensor. * TR-ALS-NE <cit.>: a practical implementation of TR-ALS. * TR-ALS-Sampled-U: a sampling-based algorithm with uniform sampling. * TR-ALS-Sampled <cit.>: the same as above but with leverage-based sampling. * TR-KSRFT-ALS <cit.>: a practical implementation of the KSRFT-based algorithm. The computational complexities of the above methods as well as ours are listed in <Ref>. In addition, our STR and rSTR occupy a memory space of I^N-1t^new + (2(N-1)I+t^old)R^2 + (N-1)R^4, which is much smaller than I^N-1(t^old+t^new), the memory space of the other methods. The experimental protocol is the same for all the experiments. Specifically, for a given dataset of size I_1×⋯× I_N, a subtensor of size I_1×⋯× I_N-1× (20% I_N) is first decomposed by TR-ALS and the resulting TR decomposition is used to initialize all the algorithms. After that, a section of size I_1×⋯× I_N-1× t^new of the remaining data is appended to the existing tensor at each time step, immediately following which all the methods record their processing time for this step, as well as calculate the relative errors of their current decompositions by ‖X - X̂‖_F/‖X‖_F = ‖X - TR( {Ĝ_n}_n=1^N )‖_F/‖X‖_F, where the TR-cores {Ĝ_n}_n=1^N are computed by the various algorithms. Thus, continuing the process, we can report and compare the processing time and relative errors for all the time steps. The same experiment is replicated 10 times for all datasets using Matlab R2022a on a computer with an Intel Xeon W-2255 3.7 GHz CPU and 256 GB RAM, and the final results are averaged over these runs. Additionally, we also use the Matlab Tensor Toolbox <cit.>. For the initialization stage, there are some parameter settings that need to be clarified. Firstly, since we only care about the relative performance of the different algorithms, it is not necessary to pursue the best-rank decomposition for each dataset. Hence, unless otherwise stated, the target rank R is always fixed to 5 for all the datasets.
Secondly, to find a good initial TR decomposition, the tolerance ϵ (the change in relative error between two adjacent iterations) is set to 1e-8 and the maximum number of iterations IT is set to 100. Note that the performance of online algorithms depends on the quality of the initial decomposition <cit.>. However, exploring the impact of initialization is not our main purpose, so we use the same initialization for both STR and rSTR. In practice, though, it is better to validate the goodness of the initialization to obtain the best subsequent effectiveness. In addition, in terms of method-specific parameters, for the six batch algorithms (i.e., the six baselines) the default settings ϵ = 1e-10 and IT = 50 are used, and we adopt the same sketch size m = 1000 in all the randomized algorithms, which has little effect on the experiments except for <Ref>, since the rank is changing there. Besides, unless otherwise stated, we always set t^new = 5 for all the datasets. §.§ Effectiveness and efficiency The experiments are conducted on two real-world datasets of varying characteristics and higher-order structure. Specifically, we extract 360 gray-scale images from the popular image dataset Columbia Object Image Library (COIL-20) [<https://cave.cs.columbia.edu/repository/COIL-20>] to form a tensor of size 416× 448× 360, and 300 frames of a popular video sequence from Hall [<https://github.com/qbzhao/BRTF/tree/master/videos>] to form a tensor of size 144× 176× 3× 300. For these two tensors, the relative errors and processing times for each time step of the various algorithms are reported in <Ref>, from which we can see that the batch methods, i.e., TR-ALS, TR-ALS-NE, TR-ALS-Sampled-U, TR-ALS-Sampled and TR-KSRFT-ALS, behave as expected from <cit.>. That is, TR-ALS-NE is identical to TR-ALS in terms of accuracy, but takes much less time due to the structure being used in the algorithm; the three randomized algorithms can accelerate the deterministic methods, but lose some accuracy. In addition, TR-ALS (Cold) is slightly less accurate than TR-ALS (Hot). The main reason is that using previous results as initialization provides a warm starting point for the ALS algorithm, while TR-ALS (Cold) discards this useful information completely. Our proposed algorithms, i.e., STR and rSTR, show very promising results in terms of accuracy and speed. Specifically, STR is fairly consistent and very similar to the batch methods in accuracy; rSTR performs slightly worse, but the differences are not remarkable. However, both of them are much faster than all the batch methods, including the randomized ones. Comparing STR and rSTR, the sampling-based rSTR, i.e., rSTR-U and rSTR-L, shows an advantage in computing time, whereas the projection-based rSTR, i.e., rSTR-K, is not very competitive in this respect. The main reason is that an expensive step, i.e., the mixing tensor step, needs to be performed at each time step; see <cit.> for more details. However, when the tensor order increases, the advantage of rSTR-K in running time gradually emerges; see the experiments in <Ref> below. §.§ More comparisons In this subsection, using synthetic data in TR format whose TR-cores are random Gaussian tensors with entries drawn independently from a standard normal distribution, we show the impact on performance of the various parameters appearing in the computational complexities of the algorithms; see <Ref> for the specific complexities. (A small sketch of how such synthetic tensors can be generated is given below.)
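For reference, synthetic tensors of the kind described above can be generated as in the following sketch (ours; it reuses the tr_full helper from the first snippet, and the sizes are tiny toy values rather than those used in the experiments), which also evaluates the relative error used throughout this section.

import numpy as np

rng = np.random.default_rng(42)
N, I, R, IN = 4, 8, 3, 20                          # toy order, dimension, TR-rank, temporal length
ranks = [R] * (N + 1)                              # R_1 = ... = R_N = R and R_{N+1} = R_1
dims = [I] * (N - 1) + [IN]
cores_true = [rng.standard_normal((ranks[n], dims[n], ranks[n + 1])) for n in range(N)]
X = tr_full(cores_true)                            # synthetic streaming tensor (time = last mode)
cores_est = [G + 0.01 * rng.standard_normal(G.shape) for G in cores_true]   # e.g. a fitted model
print(np.linalg.norm(X - tr_full(cores_est)) / np.linalg.norm(X))           # relative error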
More specifically, we compare each algorithm by varying five parameters: the order (N), the dimension (I), the dimension of the temporal mode (I_N), the temporal slice size (t^new), and the rank (R_true = R). We first vary N and I. Numerical results for four tensors with different orders and dimensions are given in <Ref>, which shows results similar to those of the previous experiments in <Ref>. This further validates the effectiveness and efficiency of our algorithms. Note that TR-ALS (Cold) may fluctuate wildly; as explained above, reinitialization at each time step can make the method unstable. Next, we vary I_N. Specifically, we set X: 30 × 30 × 30 × 30 × I_N with I_N = 120,140,160,180,200. The final relative errors and the total running time for each tensor are measured and displayed in <Ref>. While the errors are almost unchanged, the running time of all the algorithms increases, as expected, as the length of the processed data grows. Thirdly, we consider the tensor X: 30 × 30 × 30 × 30 × 200 with t^new = 2,4,6,8,10 and record the final relative errors and the total running time once the whole tensor is decomposed. As can be seen from <Ref>, the temporal slice size has little effect on the quality of the decomposition, but the larger the temporal slice size, the shorter the total running time. This is because, for a tensor with a fixed time dimension, a larger temporal slice size means fewer time steps and hence less running time. Finally, we vary the rank R_true=R for tensors X: 30 × 30 × 30 × 30 × 200. The final relative errors and the total running time for R = 3,4,5,6 are reported in <Ref>, from which it is seen that the relative errors of STR and the three offline deterministic algorithms, i.e., TR-ALS (Cold), TR-ALS (Hot), and TR-ALS-NE, do not change as R varies. In contrast, for the randomized algorithms the errors increase as the rank grows. This is because, by <Ref>, the required sketch size of the randomized algorithms depends on the rank, while the sketch size is fixed in our experiments. In comparison, the variation for our rSTR is smaller, mainly because it reuses the good decomposition results from the previous time step. As for the running time, that of the randomized algorithms hardly varies as the rank increases, while that of several deterministic methods increases slightly. This is not well reflected in <Ref> because the assumptions there are no longer satisfied as the rank increases, which changes the overall leading-order complexity.
§ CONCLUDING REMARKS
This paper discusses the problem of tracking TR decompositions of streaming tensors. A streaming algorithm, STR, is first proposed that can efficiently update the decomposition by employing complementary TR-cores to temporarily store the valuable information from the previous time step. Then, we provide a randomized variant of STR, rSTR, which can incorporate various randomization techniques conveniently and cheaply by exploiting the structure of the coefficient matrices in TR-ALS. Numerical results on both real-world and synthetic datasets demonstrate that our algorithms are comparable to the batch methods in accuracy and outperform them considerably in terms of computational cost. There is some room for methodological improvement.
By incorporating numerous popular regularizers and constraints, such as nonnegativity, we can further increase the adaptability of our methods, making them more suited for applications such as computer vision. Moreover, our algorithms presume that the rank of TR decomposition remains constant throughout the streaming process. Increasing TR-ranks in streaming TR decompositions is a viable option. In addition, it is also valuable to extend our methods to accommodate streaming tensors that can be modified in any mode, i.e., multi-aspect streaming tensors. § DECLARATIONS §.§ Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request. §.§ Competing Interests The authors declare that they have no conflict of interest. spmpsci § APPENDIX § PROOFS We first state some preliminaries that will be used in the proofs, where <Ref> is a variant of <cit.> for multiple right hand sides, <Ref> is a part of <cit.>, and <Ref> is from <cit.>. Let min_XAX - Y_F with A∈R^I × R and I > R, let U∈R^I ×(A) contain the left singular vectors of A, let U^⊥ be an orthogonal matrix whose columns span the space perpendicular to (U) and define Y^⊥U^⊥ (U^⊥)^⊺Y. If Ψ∈R^m × I satisfies σ^2_min (ΨU) ≥1/√(2), U^⊺Ψ^⊺ΨY^⊥_F^2 ≤ε/2^2, for some ε∈ (0,1), then AX̃ - Y_F ≤ (1+ε), where X̃min_XΨAX - ΨY_F. Let A and B be matrices with I rows, and let q∈R^I be a probability distribution satisfying q(i) ≥βA(i, :)^2_2/A_F^2  for all   i ∈ [I]   and some  β∈ (0, 1]. If Ψ∈R^m × I is a sampling matrix with the probability distribution q, then EA^⊺B - A^⊺Ψ^⊺ΨB_F^2 ≤1/β mA_F^2 B_F^2. Let A∈R^I × R with A_2 ≤ 1, and let q∈R^I be a probability distribution satisfying q(i) ≥βA(i, :)^2_2/A_F^2  for all   i ∈ [I]   and some  β∈ (0, 1]. If Ψ∈R^m × I is a sampling matrix with the probability distribution q, ε∈ (0,1) is an accuracy parameter, A_F^2 ≥1/24, and m ≥96 A_F^2/βε^2ln( 96 A_F^2/βε^2 √(δ)), then, with a probability of at least 1-δ, A^⊺A - A^⊺Ψ^⊺ΨA_F^2 ≤ε. §.§ Proof of <Ref> We first state a theorem similar to <cit.>, i.e., the theoretical guarantee of uniform sampling for TR-ALS. Let Ψ_n be a uniform sampling matrix defined as in (<ref>), and G̃_n(2)min_G_n(2)X_[n]Ψ_n^⊺ - G_n(2)(Ψ_nG_[2]^ n)^⊺_F. If m ≥( 2 γ R_n R_n+1/ε) max[ 48/εln( 96 γ R_n R_n+1/ε^2 √(δ)), 1/δ], with ε∈ (0,1), δ∈ (0,1), and γ>1, then the following inequality holds with a probability of at least 1-δ: X_[n] - G̃_n(2)(G_[2]^ n)^⊺_F ≤ (1+ε) min_G_n(2)X_[n] - G_n(2)(G_[2]^ n)^⊺_F. Let U∈R^J_n ×(G_[2]^ n) contain the left singular vectors of G_[2]^ n and (G_[2]^ n) = R_n R_n+1. Then, there is a γ >1 such that U(i,:)_2^2 ≤γ R_n R_n+1/J_n  for all   i ∈[J_n ]. Note that U_F = √(R_n R_n+1). Thus, setting β = 1/γ, we have 1/J_n≥βU(i,:)_2^2/U_F^2. That is, the uniform probability distribution q on [J_n] satisfies (<ref>). Moreover, it is easy to see that U_2 = 1≤ 1, U_F^2 = R_n R_n+1>1/24, and m ≥96R_n R_n+1/βε^2ln( 96 R_n R_n+1/βε^2 √(δ_1)). Thus, noting that Ψ_n is a sampling matrix with the probability distribution q, applying <Ref> implies that U^⊺U - U^⊺Ψ_n^⊺Ψ_nU_2 ≤ε. On the other hand, note that for all i ∈ [R_n R_n+1], |1-σ^2_i(Ψ_nU)| = |σ_i(U^⊺U) - σ_i(U^⊺Ψ_n^⊺Ψ_nU)| ≤U^⊺U - U^⊺Ψ_n^⊺Ψ_nU_2. Thus, choosing ε = 1-1/√(2) gives that σ^2_min (Ψ_nU) ≥1/√(2), therefore (<ref>) is satisfied. Next, we check (<ref>). Recall that (X_[n]^⊺)^⊥U^⊥ (U^⊥)^⊺X_[n]^⊺. Hence, U^⊺ (X_[n]^⊺)^⊥ =0 and (Ψ_nU)^⊺Ψ_n (X_[n]^⊺)^⊥_2^2 = U^⊺Ψ_n^⊺Ψ_n (X_[n]^⊺)^⊥ - U^⊺ (X_[n]^⊺)^⊥_2^2. 
Thus, noting (<ref>) and (<ref>), applying <Ref>, we get E[ (Ψ_nU)^⊺Ψ_n (X_[n]^⊺)^⊥_2^2 ] ≤1/β mU_F^2 (X_[n]^⊺)^⊥_2^2 = R_n R_n+1^2/β m, where = min_G_n(2)G_[2]^ nG_n(2)^⊺ - X_[n]^⊺_F. Markov’s inequality now implies that with probability at least 1-δ_2 (Ψ_nU)^⊺Ψ_n (X_[n]^⊺)^⊥_2^2 ≤ R_n R_n+1^2/δ_2 β m. Setting m ≥2R_n R_n+1/δ_2 βε and using the value of β specified above, we have that (<ref>) is indeed satisfied. Finally, using <Ref> concludes the proof of the theorem. Proof of <Ref> For the temporal mode N, if m̃_1 ≥( 2 γ R_N R_1/ε̃_1) max[ 48/ε̃_1ln( 96 γ R_N R_1/ε̃_1^2 √(δ)), 1/δ], m̃_2 ≥( 2 γ R_N R_1/ε̃_2) max[ 48/ε̃_2ln( 96 γ R_N R_1/ε̃_2^2 √(δ)), 1/δ], ⋯ according to <Ref>, at each time step we can obtain a corresponding upper error bound between the new coming tensor and its decomposition as follows The 1st time step: X_[N]^old - G̃_N(2)^old(G_[2]^ N)^⊺_F ≤ (1+ε̃_1) min_G_N(2)^oldX_[N]^old - G_N(2)^old(G_[2]^ N)^⊺_F, The 2nd time step: X_[N]^new - G̃_N(2)^new(G_[2]^ N)^⊺_F ≤ (1+ε̃_2) min_G_N(2)^newX_[N]^new - G_N(2)^new(G_[2]^ N)^⊺_F, ⋯ To obtain an upper bound on the error for all current time steps, let ε̃ = min{ε̃_1, ε̃_2, ⋯}, then the following holds with a probability of at least 1-δ: X_[N] - G̃_N(2)(G_[2]^ N)^⊺_F ≤ (1+ε̃) min_G_N(2)X_[N] - G_N(2)(G_[2]^ N)^⊺_F, for m̃≥( 2 γ R_N R_1/ε̃) max[ 48/ε̃ln( 96 γ R_N R_1/ε̃^2 √(δ)), 1/δ]. For the non-temporal mode n, if m'_n ≥( 2 γ R_n R_n+1/ε'_n) max[ 48/ε'_nln( 96 γ R_n R_n+1/ε'_n^2 √(δ)), 1/δ], we have X_[n] - G̃_n(2)(G_[2]^ n)^⊺_F ≤ (1+ε'_n) min_G_n(2)X_[n] - G_n(2)(G_[2]^ n)^⊺_F. Thus, setting ε = min{ε̃, ε'_1, ε'_2, ⋯, ε'_N-1}, the proof can be completed. Along the same line, the proofs of <Ref> can be completed by using <cit.> and <cit.>, repectively. § SPECIFIC ALGORITHMS BASED ON DIFFERENT SKETCHES Randomized streaming TR decomposition with uniform sampling (rSTR-U) Input: Initial tensor X^init, TR-ranks R_1, ⋯, R_N, new data tensor X^new and sampling size m Output: TR-cores {G_n ∈R^R_n × I_n × R_n+1}_n=1^N Randomized streaming TR decomposition with leverage-based sampling (rSTR-L) Input: Initial tensor X^init, TR-ranks R_1, ⋯, R_N, new data tensor X^new and sampling size m Output: TR-cores {G_n ∈R^R_n × I_n × R_n+1}_n=1^N Randomized streaming TR decomposition with KSRFT (rSTR-K) Input: Initial tensor X^init, TR-ranks R_1, ⋯, R_N, new data tensor X^new and sketch size m Output: TR-cores {G_n ∈R^R_n × I_n × R_n+1}_n=1^N
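The listings above give only the inputs and outputs of the three randomized variants; the step they all share is replacing the exact least-squares update of each TR-core with a sketched one. The following NumPy fragment is a minimal sketch of that shared step for the sampling-based variants, assuming the subchain coefficient matrix (denoted G_[2]^ n above) and the transposed mode-n unfolding are available as dense arrays; the streaming bookkeeping of the complementary TR-cores is omitted, and all names are ours rather than the authors'.

```python
import numpy as np

def sampled_core_update(A, B, m, leverage=False, rng=None):
    """Sketched solve of min_G ||A @ G.T - B||_F by row sampling.

    A : (J, R_n*R_{n+1}) subchain coefficient matrix
    B : (J, I_n) transposed mode-n unfolding of the data tensor
    m : sketch size (number of sampled rows)
    Returns the updated classical unfolding G_{n(2)} of shape (I_n, R_n*R_{n+1}).
    """
    rng = np.random.default_rng() if rng is None else rng
    J = A.shape[0]
    if leverage:
        # approximate leverage scores from a thin QR factorization (rSTR-L-style)
        Q, _ = np.linalg.qr(A)
        p = (Q ** 2).sum(axis=1)
        p /= p.sum()
    else:
        p = np.full(J, 1.0 / J)              # uniform sampling (rSTR-U-style)
    idx = rng.choice(J, size=m, replace=True, p=p)
    scale = 1.0 / np.sqrt(m * p[idx])        # importance-sampling rescaling
    SA = A[idx] * scale[:, None]
    SB = B[idx] * scale[:, None]
    G, *_ = np.linalg.lstsq(SA, SB, rcond=None)   # small sketched least-squares problem
    return G.T
```

In the full algorithms this solve is wrapped in the alternating sweep over modes and combined with the stored TR-cores from the previous time step, so only the temporal slice of the data needs to be touched.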
http://arxiv.org/abs/2307.01241v1
20230703172545
Data-driven decoding of quantum error correcting codes using graph neural networks
[ "Moritz Lange", "Pontus Havström", "Basudha Srivastava", "Valdemar Bergentall", "Karl Hammar", "Olivia Heuts", "Evert van Nieuwenburg", "Mats Granath" ]
quant-ph
[ "quant-ph" ]
Department of Physics, University of Gothenburg, SE-41296 Gothenburg, Sweden Department of Physics, University of Gothenburg, SE-41296 Gothenburg, Sweden Department of Physics, University of Gothenburg, SE-41296 Gothenburg, Sweden Department of Physics, University of Gothenburg, SE-41296 Gothenburg, Sweden Department of Physics, University of Gothenburg, SE-41296 Gothenburg, Sweden Department of Physics, University of Gothenburg, SE-41296 Gothenburg, Sweden [][email protected] Leiden Inst. of Advanced Computer Science, Leiden University, Leiden, Netherlands [][email protected] Department of Physics, University of Gothenburg, SE-41296 Gothenburg, Sweden [subfigure]labelformat=empty To leverage the full potential of quantum error-correcting stabilizer codes it is crucial to have an efficient and accurate decoder. Accurate, maximum likelihood, decoders are computationally very expensive whereas decoders based on more efficient algorithms give sub-optimal performance. In addition, the accuracy will depend on the quality of models and estimates of error rates for idling qubits, gates, measurements, and resets, and will typically assume symmetric error channels. In this work, instead, we explore a model-free, data-driven, approach to decoding, using a graph neural network (GNN). The decoding problem is formulated as a graph classification task in which a set of stabilizer measurements is mapped to an annotated detector graph for which the neural network predicts the most likely logical error class. We show that the GNN-based decoder can outperform a matching decoder for circuit level noise on the surface code given only simulated experimental data, even if the matching decoder is given full information of the underlying error model. Although training is computationally demanding, inference is fast and scales approximately linearly with the space-time volume of the code. We also find that we can use large, but more limited, datasets of real experimental data [Google Quantum AI, Nature 614, 676 (2023)] for the repetition code, giving decoding accuracies that are on par with minimum weight perfect matching. The results show that a purely data-driven approach to decoding may be a viable future option for practical quantum error correction, which is competitive in terms of speed, accuracy, and versatility. Data-driven decoding of quantum error correcting codes using graph neural networks Mats Granath ================================================================================== § INTRODUCTION Quantum Error Correction (QEC) is foreseen to be a vital component in the development of practical quantum computing <cit.>. The need for QEC arises due to the susceptibility of quantum information to noise, which can rapidly accumulate and corrupt the final output. Unlike noise mitigation schemes where errors are reduced by classical post-processing <cit.>, QEC methods encode quantum information in a way that allows for the detection and correction of errors without destroying the information itself. A prominent framework for this are topological stabilizer codes, such as the surface code, for which the logical failure rates can be systematically suppressed by increasing the size of the code if the intrinsic error rates are below some threshold value <cit.>. Stabilizer codes are based on a set of commutative, typically local, measurements that project an n-qubit state to a lower dimensional code space representing one or more logical qubits. 
Errors take the state out of the code space and are then indicated by a syndrome, corresponding to stabilizer violations. The syndrome needs to be interpreted in order to gauge whether a logical bit or phase flip may have been incurred on the logical qubit. Interpreting the syndrome, to predict the most likely logical error, requires both a decoder algorithm and, traditionally, a model of the qubit error channels. The fact that measurements may themselves be noisy, makes this interpretation additionally challenging <cit.>. Efforts are under way to realize stabilizer codes experimentally using various qubit architectures <cit.>. In <cit.>, code distance 3 and 5 surface codes were implemented, using 17 and 49 superconducting qubits, respectively. After initialization of the qubits, repeated stabilizer measurements are performed over a given number of cycles capped by a final round of single qubit measurements. The results are then compared with the initial state to determine whether errors have caused a logical bit- (or phase-) error. The decoder analyses the collected sets of syndrome measurements in post-processing, where the fraction of correct predictions gives a measure of the logical accuracy. The better the decoder, the higher the coherence time of the logical qubit, and in <cit.> a computationally costly tensor network based decoder was used to maximize the logical fidelity of the distance 5 code compared to the distance 3 code. However, with the objective of moving from running and benchmarking a quantum memory to using it for universal quantum computation, it will be necessary to do error correction both with high accuracy and in real time. In the present work, we explore the viability of using a purely data-driven approach to decoding, based on the potential of generating large amounts of experimental data. We use a graph neural network (GNN) which is well suited for addressing this type of data. Namely, a single data point, as in <cit.>, consists of a set of “detectors”, i.e., changes in stabilizer measurements from one cycle to the next, together with a label indicating the measured logical bit- or phase-flip error. This can be represented as a labeled graph with nodes that are annotated by the information on the type of stabilizer and the space-time position of the detector, as shown in Figure <ref>. The maximum degree of the graph can be capped based on removing edges between distant detectors, keeping only a fixed maximum number of neighboring nodes. The latter ensures that each network layer in the GNN (see Figure <ref>) performs a number of matrix multiplications that scales linearly with the number of nodes, i.e., linearly with the number of stabilizer measurements and the overall error rate. We have trained this decoder on simulated experimental data for the surface code using Stim <cit.> as well as real experimental data on the repetition code <cit.>. For both of these, the decoder is on par with, or outperforms, the state-of-the-art matching decoder <cit.>, suggesting that with sufficient data and a suitable neural network architecture, model-free machine learning based decoders trained on experimental data can be competitive for future implementations of quantum error-correcting stabilizer codes. § STABILIZER CODES AND DECODING A stabilizer code is defined through a set of commuting operators constructed from products of Pauli operators acting on a Hilbert space of n data qubits <cit.>. 
With n_S independent stabilizers the Hilbert space is split into sectors of dimension 2^n-n_S, specified by the parity under each stabilizer. For concreteness we will consider the case n_S=n-1, such that each of the sectors represent a single qubit degree of freedom. Each syndrome measurement is performed with the help of an ancilla qubit following a small entangling circuit with the relevant data qubits. The measured state of the ancilla qubits provide a syndrome S={s_i, i=1,...,n_S | ∈ 0,1 }, and projects the density matrix of the n qubit state into a single 2-dimensional block, a Pauli frame <cit.>. Given uncertainties in the measurements, a number of rounds are typically performed before the information is interpreted by means of a decoder. Defining a pair of anticommuting operators Z_L and X_L that commute with the stabilizer group, provides the logical computational space through Z_L|0⟩_L=|0⟩_L and |1⟩_L=X_L|0⟩_L. Assuming a fixed pair of logical operators for a given code defines the corresponding logical states in each Pauli frame. Thus, a number of subsequent rounds of stabilizer measurements, during which the code is affected by decoherence, transforms the density matrix from the initial state ρ=∑_i,j∈{0,1}ρ_ij|i⟩_L⟨ j|_L to the final state ρ'=∑_i,j∈{0,1}ρ'_ij|i⟩'_L⟨ j|'_L, where |0/1⟩_L (|0/1⟩'_L) are the logical qubit states in the initial (final) Pauli frame. The logical error channels are approximated by ρ →ρ' = ϵ_L(ρ_L) = (1-P)ρ+P_X X_Lρ X_L+P_Z Z_Lρ Z_L+P_Y Y_Lρ Y_L , with Y_L=-iZ_LX_L and P=∑_i=X,Y,ZP_i. In general there may be additional non-symmetric channels (see for example <cit.>), but we will assume that the data (as in <cit.>) does not resolve such channels. The probabilities of logical error, P_i, will be quantified by the complete set of syndrome measurements and depend on single and multi-qubit error channels as well as measurement and reset errors. It is the task of the decoder to quantify these in order to maximize the effectiveness of the error correction. Traditionally this is done through computational algorithms that use a specific error model. The framework that most decoders are based on uses independent and identically distributed symmetric noise acting on individual qubits, possibly, for circuit-level noise, complemented by two-qubit gate errors, faulty measurements and ancilla qubit reset errors. Maximum-likelihood decoders <cit.> aim to explicitly account for all possible error configurations that are consistent with the measured syndromes, with their respective probabilities given by the assumed error model. The full set of error configurations fall in four different cosets that map to each other by the logical operators of the code, thus directly providing an estimate of the probabilities P_i that is limited only by the approximations involved in the calculation and the error model. Even though such decoders may be useful for benchmarking and optimizing the theoretical performance of stabilizer codes <cit.>, they are computationally too demanding for real time operation, even for small codes. The more standard decoders instead are based on the minimum weight perfect matching (MWPM) algorithm <cit.>. Such a decoder aims to find the single, most likely, configuration of single qubit errors consistent with the set of measured stabilizers. Detectors are mapped to nodes of a graph with edges that are weighted by the probability of the pair of nodes. 
For codes where nodes appear in pairs (such as the repetition or surface code), the most likely error corresponds to pairwise matching such that the total weight of the edges is minimized. This algorithm is fast, in practice scaling approximately linearly with the size of the graph (see Figure <ref>). Nevertheless, it has several short-comings that limits accuracy and applicability: 1) Approximate handling of crossing edges (such as coinciding X and Z errors) means that the effective error model is oversimplified. 2) Except at very low error rates, degeneracies of less likely error configurations are ignored. 3) For models where a single error may give rise to more than two detector events, more sophisticated algorithms are needed <cit.>. These shortcomings can be partially addressed by more sophisticated approaches such as counting multiplicity or using belief propagation <cit.>, but often at the cost of added computational complexity. Other examples of decoder algorithms are based on decoding from small to large scale, such as cellular-automata <cit.>, renormalization group <cit.>, or union-find <cit.>. The latter, in particular, is very efficient, but at the cost of sub-optimal performance. §.§ Related work A number of different deep learning based decoder algorithms have also been formulated, based on supervised learning, reinforcement learning, and genetic neural algorithms <cit.>. Focusing on the works on the surface code and based on supervised learning, these can roughly be separated according to whether they primarily consider perfect stabilizers <cit.>, or include measurement noise or circuit-level noise <cit.>, and whether they are purely data-driven <cit.> or involve some auxiliary, model-informed, algorithm or multi-step reduction of decoding <cit.>. The present work is in the category, realistic (circuit-level) noise, and purely data-driven. It is distinguished primarily in that we 1) Use graph neural networks and graph structured data, and 2) Train and test the neural network decoder on real experimental data. In addition, as in several of the earlier works <cit.>, we emphasize the use of a model-free, purely data-driven, approach. By using experimental stabilizer data, the approximations of traditional model-based decoder algorithms can be avoided. The fact that the real error channels at the qubit level may be asymmetric, due to amplitude damping, have long-range correlations, or involve leakage outside the computational space, is intrinsic to the data. This is also in contrast to other data-driven approaches <cit.> that use stabilizer data to learn the detailed Pauli channels, optimize a decoder algorithm through the edge weights of a matching decoder, or the individual qubit and measure error rates of a tensor network based decoder, as these are all constrained by a specific error model. §.§ Repetition code and surface code The decoder formalism that we present in this work can be applied to any stabilizer code, requiring only a dataset of measured (or simulated) stabilizers, together with the logical outcomes. Nevertheless, to keep to the core issues of training and performance we consider only two standard scalable stabilizer codes: the repetition code and the surface code. The repetition code is defined on a one-dimensional grid of qubits with neighboring pair-wise Z_i⊗ Z_i+1 stabilizers. In the Pauli frame with all +1 stabilizers, the code words are |0⟩_L=|0⟩^⊗ n and |1⟩_L=|1⟩^⊗ n. 
Consider a logical qubit state |ψ⟩=α |0⟩_L + β |1⟩_L, with complex amplitudes |α|^2+|β|^2=1. The logical bit-flip operator is given by X_L=⊗_iX_i, which sets the code distance d_X=n. Assuming perfect stabilizer measurements and independent and identically distributed single qubit bit-flip error probabilities, decoding the repetition code is trivial. For any set of stabilizer violations, i.e., odd parity outcomes, there are only two consistent configurations of errors that map to each other by acting with X_L. A decoder (maximum-likelihood in the case of this simple error model) would suggest the one with fewer errors. The repetition code, set up to detect bit-flip errors, is insensitive to phase flip errors, as is clear from the fact that a phase-flip (Z) error on a single qubit also gives a phase-flip error (β→ -β) on the logical qubit, corresponding to a code distance d_Z=1. To detect and correct both bit- and phase-flip errors we need a more potent code, the most promising of which may be the surface code. We consider the qubit-efficient “rotated” surface code <cit.> (see Figure <ref>), constructed from weight-4, Z^⊗ 4 and X^⊗ 4, stabilizers (formally stabilizer generators), with complementary weight-2 stabilizers on the boundary. On a square grid of d× d data qubits, the d^2-1 stabilizers give one logical qubit. We define the logical operator X_L as a string of X's on the southwest edge, and a string of Z's on the northwest edge, as shown in Figure <ref>. These are the two (unique up to products of stabilizers) lowest weight operators that commute with the stabilizer group, without being part of said group. Stabilizer measurements are performed by means of entangling circuits between the data qubits and an ancilla qubit. Assuming hardware with one ancilla qubit per stabilizer, and the appropriate gate schedule, these can all be measured simultaneously, corresponding to one round of stabilizer measurements. §.§ Memory experiments on the surface code To train and test our decoder we consider a real or simulated experiment, illustrated schematically in Figure <ref>, to benchmark a surface code as a quantum memory. The following procedure can be used for any stabilizer code: * Initialize the individual qubits: Data qubits in a fixed or random configuration in the computational basis |0⟩ and |1⟩. Ancilla qubits in |0⟩. The initial data qubit configuration is viewed as a 0'th round of measurements that initialize the Z-stabilizers. This also corresponds to an effective measurement ⟨ Z_L⟩_t=0=∏_i∈ Z_LZ_i=± 1. (Northwest row of qubits in Figure <ref>.) * A first round, t=1, of actual stabilizer measurements is performed. The Z-stabilizers are determined, up to bit-flip errors, by the 0'th round. Hence, the difference between the two provides the first round of Z-detectors. The X-stabilizers have randomized outcome, projecting to an even or odd parity state over the four (or two) qubits in the Hadamard (|+⟩, |-⟩) basis. The value of these stabilizers form the reference for subsequent error detecting measurements of the X-stabilizers. Ancilla qubits are reset to 0 after this and subsequent rounds. * Subsequent rounds t=2,...,d_t of Z and X stabilizer measurements provide the input for corresponding detectors based on changes from the previous round. * Finally, data qubits are measured individually in the Z-basis, which provides a final measurement, ⟨ Z_L⟩_t=d_t+1. 
These also provide a final round of Z-stabilizers, which, since they are provided by the actual qubit outcomes rather than by measuring an ancilla, are perfect stabilizers by definition. The outlined experiment provides a single data point D=({V_Z},{V_X},λ_Z) consisting of set of Z-detectors {V_Z}, over d_t+1 cycles and a set of X-detectors {V_X} over d_t-1 cycles. In addition to the stabilizer type, each detector is tagged with its space-time coordinate, (x,y,t), with 0≤ x,y≤ d and 1≤ t≤ d_t± 1 for Z and X detectors respectively. The logical label is given by λ_Z=1/2|⟨ Z_L⟩_t=0-⟨ Z_L⟩_t=d_t+1|∈{0,1} . The probability of λ_Z=1 is, according to Eqn. <ref>, given by P_X+P_Y, and the probability of λ_Z=0 by P_I+P_Z, corresponding to a logical bit-flip or not. What has been described is a “memory-Z” experiment <cit.>, i.e., one in which we detect logical bit-flips. Qubits are initialized in the computational basis |0⟩ and |1⟩. A “memory-X” experiment prepares the qubits in the Hadamard basis, with the role of X- and Z-stabilizers reversed. Physically, in the lab, one cannot do both experiments in the same run, as Z_L and X_L do not commute. This also implies that each data point only has one of the two binary labels, λ_Z or λ_X, even though there is information in the detectors about both labels. The neural network will be constructed to predict both labels for a given set of detectors, which implies that the learning framework is effectively that of semi-supervised learning, with partially labeled data. Thus, in contrast to a matching based decoder, which breaks the surface code detectors into two independent sets with a corresponding graph for each, the GNN decoder can make use of the complete information. This, in addition to the fact that it is not constrained by the limitations of the matching algorithm itself, provides a possible advantage in terms of prediction accuracy. Additionally, some fraction of the data is incorrectly labeled. This follows from the fact that measured labels will not always be the most likely. In fact, the fraction of incorrectly labeled data corresponds to the logical failure rate that an optimal decoder would provide. For the data that we use, this fraction ranges from marginal (see Figure <ref>) to quite substantial (see Figure <ref>), depending in particular on the number of cycles that are decoded, as the logical failure rate grows exponentially with the number of cycles. We have also assumed that there is no post-processing to remove leakage. Assuming there is some mechanism of relaxation back to the computational qubit subspace, including the last round of measurements, leakage events will be be handled automatically by the neural network decoder, based on the signature they leave in the detector data. § GRAPH NEURAL NETWORK DECODER A graph neural network (GNN) can be viewed as a trainable message passing algorithm, where information is passed between the nodes through the edges of the graph and processed through a neural network <cit.>. The input is data in the form of a graph G=(V,E), with a set of nodes V={i | i=1,..,N} and edges E={(i,j) | i≠ j∈ V}, which is annotated by n-dimensional node feature vectors X⃗_i and edge weights (or vectors) e_ij. The basic building blocks are the message passing graph convolutional layers, which take a graph as input and outputs an isomorphic graph with transformed feature vectors. 
Specifically, in this work we have used a standard graph convolution <cit.> where for each node the d_in-dimensional feature vector X⃗_i is transformed to new feature vector X⃗'_i with dimension d_out according to X⃗'_i=σ ( W_1X⃗_i+∑_j e_ij W_2X⃗_i ) , where non-existent edges are indicated by e_ij=0. Here W_1 and W_2 are d_out× d_in dimensional trainable weight matrices and σ is an element-wise non-linear activation function which includes a d_out-dimensional trainable bias vector. For the task at hand, which is graph classification, graph convolutions are followed by a pooling layer that contracts the information to a single vector, a graph embedding, which is independent of the dimension of the graph. We use a simple mean-pooling layer X⃗'=N^-1∑_iX⃗_i. For the classification we use two structurally identical, but independent, standard multi-layer feedforward networks that each end with a single node with sigmoid activation that acts as a binary classifier. The weights and biases of the complete network are trained using stochastic gradient descent with a loss function which is a sum of the binary cross entropy loss of the network output with respect to the binary labels. Since the experimental data, or simulated experimental data, only has one of the two binary labels (λ_Z, λ_X) for each complete detector graph, gradients are only calculated for the provided label. To avoid overfitting to the training data we employ two different approaches depending on the amount of available data. In using experimental data from <cit.>, we use a two-way split into a training set and a test set. To avoid diminishing the training data further, we do not use a validation set, and instead train for a fixed number of epochs. We observe (see Figure <ref>) that the test accuracy does not change significantly over a large number of epochs, even though the network continues to overfits. For the case with simulated experimental data (Figure <ref>), we avoid overfitting by replacing a substantial fraction (25%) of the data with new data, generated within the training cycle, after each epoch of training. This effectively emulates a much larger dataset, while keeping with the memory limits set by the available hardware. Since the training set is effectively unbounded, the number of unique detector graphs scales as 2^d_td^2 and the network cannot overfit. Also here, a fixed test set is used to gauge the performance. The GNN training and testing is implemented in PyTorch Geometric <cit.>, simulated data is generated using Stim <cit.>, and the MWPM decoding results use Pymatching <cit.>. The Adam optimizer is used for stochastic gradient descent, using manual learning rate decrements when the training accuracy has leveled out. Details on the training procedure can be found in Appendix <ref>. Several other graph layers were experimented with, including graph attention for both convolutions <cit.> and pooling <cit.>, as well as top_k pooling <cit.>. These were found not to improve results. The width and depth of the final network was arrived at after several rounds of iterations. Naturally, we expect that larger code distances, i.e., larger graphs, will require scaling up the network size. (See also Sec. <ref>) §.§ Data structure As discussed previously the data is in a form D=({V_Z},{V_X},λ_Z/X), consisting of a set of detectors V_Z/X, specified by a space-time coordinate, together with a binary label. Based on this we construct a single graph. 
Each node corresponds to a detector event, and is annotated by a 5-vector (for the surface code with circuit-level noise) X⃗=(b_1, b_2, x, y, t) containing the space-time coordinate (x,y,t) and two exclusive binary (one-hot encoded) labels with b⃗=(1,0) for an X-stabilizer and b⃗=(0,1) for a Z-stabilizer. (The encoding of the type of stabilizer may be superfluous, as it can be deduced from the coordinate.) We initially consider a complete graph, with edge weights given by the inverse square supremum norm between the vertices, e_ij=(max{|x_i - x_j|, |y_i - y_j|, |t_i - t_j|})^-2. This form of the edge weights is motivated by a naive picture of the minimum number of single data qubit measurement errors that would cause a pair of detectors. The main purpose of the weights is to give some measure of locality, in order to prune the graph. Smaller weight edges are removed, leaving only a fixed maximal node degree, which for the results presented in this work was capped at six. § RESULTS The GNN based decoder has been implemented, trained, and tested on the surface code and the repetition code. The main focus is on using simulated or real experimental data, presented in <ref> and <ref>, respectively. We also present some results on the surface code with perfect stabilizers, <ref>, where we are able to train the network for larger code distances. §.§ Surface code with circuit-level noise We use Stim to generate data with circuit-level noise. Simulated circuits use standard settings for the surface code, containing Hadamard single qubit gates, controlled-Z (CZ) entangling gates, and measure and reset operations. All of these operations, and the idling, contain Pauli noise, scaled by an overall error rate p. (See Appendix <ref>.) Datasets of several million graphs are generated, with partial replacement after each epoch of training to avoid overfitting. Figure <ref> shows test results evaluated at p=1.0· 10^-3 for decoders trained with data using an even mix of error rates p={1.0,2.0,3.0,4.0,5.0}· 10^-3 and memory-Z experiments. The logical failure rate is thus approximately 50% of the true failure rate (up to correlations between failures in X_L and Z_L), but consistent with the type of data that would be experimentally accessible. (We have also tried training and testing with a mix of memory-Z and memory-X experiments, which works as well but takes longer to train to the same accuracy.) The MWPM decoder uses the information provided by the simulated error model to optimize edge weights on the decoding graph, whereas the GNN decoder uses only the data provided by the simulated measurements. Despite this, we find that with sufficient training the GNN decoder outperforms the matching decoder. A different network is trained for each code distance d and for each number of rounds of stabilizer measurements d_t. Figure <ref> shows a representative plot of the training and validation accuracy, evaluated on the mixed error rate dataset. With an active (in-memory) dataset containing 5· 10^6 and given that 25% is replaced in each epoch, 1000 epochs corresponds to a total of 1.25 10^9 data points . §.§ Repetition code using experimental data Having trained GNN based decoders on simulated experimental data in the previous section, we now turn to real experimental data. We use the public data provided together with <cit.>. This contains data on both the d=3 and d=5 surface codes as well as the d=25 bit-error correcting repetition code. 
All datasets are of the form described in <ref>, thus readily transferred to the annotated and labeled graphs used to train the GNN, as described in <ref>. The datasets contain approximately 10^6 data points for the different codes, code distances, and varying number of stabilizer rounds. Our attempts to train a GNN on the data provided for the various implementations of surface code were generally unsuccessful. While it gave good results on the training data, the logical failure rate on the test set was poor. Given the fact that on the order of 10^9 data points were used for the simulated circuit-level noise on the surface code (<ref>), it is not surprising that the significantly smaller dataset turned out to be insufficient. The network cannot achieve high accuracy without overfitting to the training data given the relatively small dataset. For the repetition code, the data which is provided is of a single type, for a d=25 code measured over d_t=50 rounds. Each round thus contains the measurement of 24 ancilla qubits for the ZZ stabilizers of the two neighboring data qubits along a one-dimensional path. As done in <cit.> this data can be split up into data for smaller repetition codes, by simply restricting to stabilizers over a subset of d subsequent data qubits. In this way the dataset can be increased by a factor 25-(d-1), and used to train a single GNN for each code distance. It should be noted that this is suboptimal, compared to generating the same amount of data on single distance d device, as variations in the performance of the constituent qubits and gates will be averaged out in the dataset. Nevertheless, using this scheme we successfully trained GNN decoders for short distance repetition codes, with test accuracies shown in Figure <ref>. Results for (what we refer to as) “Device-optimized MWPM” is taken from <cit.>. The GNN decoder performs almost on par with this sophisticated matching decoder for d=3. As expected, the relative performance deteriorates with increased code distance. We expect that we would need more training data for larger code distance, but instead we have access to less. As the comparison with the matching decoder that uses a device specific error model may be biased compared to using training data from different devices, as mentioned above, we also give results for an “uniformed” matching decoder with edge weights based on the 1-norm distance between space-time coordinates. It may also be noted that using MWPM corresponds to a near optimal decoder for the repetition code, at least for the case of phenomenological measurement noise where it is equivalent to bit-flip error surface code. This is in contrast to the surface code, for which MWPM is suboptimal, even in the case of perfect stabilizers. Thus, outperforming MWPM for the repetition code may be more challenging than for the surface code. §.§ Surface code with perfect stabilizers To complement the results on simulated and real experimental data we have also trained the GNN decoder on the surface code with perfect stabilizers under depolarizing noise. The same network (see Appendix <ref>) is used as for circuit-level noise, but trained at p=[0.01, 0.05, 0.1, 0.15]. Results up to code distance d=15 are shown in Figure <ref> and found to significantly outperform MWPM. We also compare to a tensor network based <cit.> maximum likelihood decoder (MLD), showing that for code distance d=5 the GNN decoder has converged to the level of being an approximate MLD. 
We do not attempt to derive any threshold for the GNN decoder. Given a sufficiently expressive network we expect that the decoder would eventually converge to a maximum likelihood decoder, but in practice the accuracy is limited by the training time. It gets progressively more difficult to converge the training for larger code distances, which means that any threshold estimate will be a function of the training time versus code distance. In fact, in principle, since the threshold is a d→∞ quantity, we would not expect that a supervised learning algorithm can give a proper threshold if trained separately for each code distance. Nevertheless, through GNN's it is quite natural to use the same network to decode any distance code, as the data objects (detector graphs) have the same structure. We have investigated training the same network for different code distances and different number of rounds. This shows some promise, but so far does not achieve accuracy levels that can match MWPM. §.§ Scalability We are limited to relatively small codes in this work. For the repetition code using experimental data, it is quite clear that main limitation to scaling up the code distance is the size of the available dataset. For the surface code using simulated data it is challenging to increase the code distance while still surpassing MWPM. As the logical failure rates decrease exponentially with code distance, the test accuracy of the supervised training needs to follow. One way to counter this is to increase the number of stabilizer cycles, d_t, but this also increases the graph size, making the training more challenging from the perspective of increased memory requirements as well as the increased complexity of the data. Nevertheless, it is interesting to explore the intrinsic scalability of the algorithm, by quantifying how the decoding time using a fixed size GNN scales with the code size. Here we present results on the decoding time per syndrome for the surface code, as a function of code volume d^2d_t, at fixed error rate. The network architecture is the same as used for all the results in this paper, as described in Appendix <ref>. In line with expectations, the GNN inference scales approximately linearly with the code volume, i.e. average graph size, T∼ d^2d_t (with d_t=1 for perfect stabilizers). The number of matrix operations per graph convolutional layer, following Equation <ref>, is proportional to the number of nodes times the number of edges. The number of layers is fixed, multiplying this by a constant factor. The feature vector pooling is proportional to the number of nodes, whereas the subsequent dense network classifiers are independent of the graph size. We find that inference scales slightly better than the highly optimized matching decoder. However, several caveats are in order. 1) The size of the GNN is fixed. Larger code sizes may eventually require scaled up networks, unless the error rate is scaled down accordingly 2) The network has not been trained on code distances larger than d=15 (2D). It is only a test of the decoding time, not the accuracy. 3) For GNN inference, the data is batched in order to eliminate a fixed loading time to the GPU. Treating batched data doesn't seem viable for real time decoding. Similarly our graph construction algorithm is slower, scaling quadratically with code volume, and this time has also been removed to get decoding time per graph. 
These are both issues that most likely can be improved significantly with more hardware efficient algorithms, and, in the longer term, special purpose hardware. § CONCLUSION AND OUTLOOK In this paper we develop a model-free, data-driven, approach to decoding quantum error correcting stabilizer codes, using graph neural networks. A real or simulated memory experiment is represented as a single detector graph, with annotated nodes corresponding to the type of stabilizer and its space-time coordinate, and labeled by the measured logical operation on the encoded qubit. The maximal node degree is capped by cropping edges between distant nodes. The data is used to train a convolutional GNN for graph classification, with classes corresponding to logical Pauli operations, and used for decoding. We show that we can use real and simulated experimental data,for the repetition code and surface code respectively, to train a decoder with logical failure rates on par with minimum weight perfect matching, despite the latter having detailed information about the underlying real or simulated error channels. The use of a graph structure provides an efficient way to store and process the syndrome data. To train the GNN requires significant amounts of training data, but as shown in the case of simulated experiments, data can be produced in parallel with training. Network inference, i.e., using the network as a decoder, is fast, scaling approximately linearly with the space-time dimension of the code. As an extension of this work there are several interesting possibilities to explore. One example is to use a GNN for edge weight generation within a hybrid algorithm with a matching decoder (similarly to <cit.>). This would depart from the pure data-driven approach pursued in this paper, with peak performance limited by the matching decoder, but with the potential advantage of requiring less data to train. An alternative to this, to potentially improve performance and lower data requirements, is to use device specific input into edge weights, or encode soft information on measurement fidelities into edge or node features. Going beyond the most standard codes considered in this paper, we expect that any error correcting code for which labeled detector data can be generated can also be decoded with a GNN. This includes Clifford-deformed stabilizer codes <cit.>, color codes <cit.> or hexagonal stabilizer codes <cit.>, where syndrome defects are not created in pairs, but potentially also Floquet type codes <cit.>. In addition, heterogeneous and correlated noise models <cit.> would also be interesting to explore, where in particular the latter is difficult to handle with most standard decoders. The software code for the project can be found at <cit.>. We acknowledge financial support from the Knut and Alice Wallenberg Foundation through the Wallenberg Centre for Quantum Technology (WACQT). Computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at Chalmers Centre for Computational Science and Engineering (C3SE), partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973. We thank Viktor Rehnberg and Hampus Linander for technical support. § GNN ARCHITECTURE AND TRAINING Figure <ref> displays the architecture of the GNN decoder. The node features are sent through 7 subsequent graph convolutional layers (Equation <ref>). 
The node features are passed through a rectified linear unit (ReLU) activation function (which corresponds to chopping negative values) after each layer. After the graph convolutional layers, the node features from all nodes are pooled into one high-dimensional vector by computing the mean across all nodes. This vector is then cloned and sent to two identical fully connected neural networks. Both heads consist of 4 dense layers which map the pooled node feature vector down to one real-valued number which is output in the range 0 to 1 through a sigmoid function. The input and output dimension d_in and d_out of the graph convolutional and dense layers can be found in Table <ref>. Networks are trained on NVIDIA Tesla A100 HGX GPU's using the python multiprocessing module to generate data in parallel on a CPU. For gradient descent, samples are batched in batches of size 10^3. The learning rate is set to 10^-4 and decreased manually to 10^-5, whenever the validation accuracy reached a plateau. An example of a training history for d=5 and varying number of surface code cycles d_t is shown in Figure <ref>. For this example, with d_t=5, 100 epochs of training takes approximately 10 hours. The code is available at <cit.>. § STABILIZER CIRCUITS AND ERROR MODEL FOR CIRCUIT-LEVEL NOISE Quantum circuits for weight-four Z- (X-) stabilizers of the surface code are displayed in Figure <ref> (<ref>). The gate set used for the stabilizer measurements consists of the Hadamard gate (H), and the CNOT gate. Under circuit-level noise, single-qubit depolarizing noise gate D_p (which applies gate σ_i, i ∈{X, Y, Z} where any of the gates is applied with probability p/3, and I with probability 1-p) acts on the data qubits before each stabilizer measurement cycle and on each target qubit after single-qubit gates. Two-qubit depolarizing noise gates (which apply gate σ_i σ_j, i,j ∈{I, X, Y, Z}, where II is acted on with probability 1-p, and the rest with probability p/15) act on the two qubits involved after every two-qubit gate. Furthermore, each qubit suffers from reset- and measurement-error with probability p, displayed by operators X_p when measuring and resetting in the computational basis.
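To make the architecture described above concrete, here is a minimal PyTorch Geometric sketch of the decoder: seven GraphConv layers (the convolution of Equation <ref>) with ReLU activations, mean pooling over nodes, and two identical four-layer dense heads ending in a sigmoid. The hidden widths below are placeholders rather than the actual dimensions of Table <ref>, and data loading, label masking, and the training loop are omitted.

```python
import torch
from torch import nn
from torch_geometric.nn import GraphConv, global_mean_pool


class GNNDecoder(nn.Module):
    """Detector-graph classifier: outputs P(logical Z flip), P(logical X flip)."""

    def __init__(self, in_dim=5, hidden=128, n_conv=7):
        super().__init__()
        dims = [in_dim] + [hidden] * n_conv
        self.convs = nn.ModuleList(
            GraphConv(dims[i], dims[i + 1]) for i in range(n_conv)
        )

        def head():
            return nn.Sequential(
                nn.Linear(hidden, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU(),
                nn.Linear(32, 16), nn.ReLU(),
                nn.Linear(16, 1), nn.Sigmoid(),
            )

        self.head_z, self.head_x = head(), head()

    def forward(self, x, edge_index, edge_weight, batch):
        for conv in self.convs:
            # X'_i = ReLU(W1 X_i + sum_j e_ij W2 X_j), cf. the graph convolution above
            x = torch.relu(conv(x, edge_index, edge_weight))
        g = global_mean_pool(x, batch)   # mean-pool node features into a graph embedding
        return self.head_z(g), self.head_x(g)
```

Training would then minimize the binary cross-entropy of whichever head corresponds to the label actually measured for each graph (λ_Z or λ_X), as described in the main text.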
http://arxiv.org/abs/2307.03309v1
20230706214218
Thermal intermodulation backaction in a high-cooperativity optomechanical system
[ "Christian M. Pluchar", "Aman R. Agrawal", "Dalziel J. Wilson" ]
quant-ph
[ "quant-ph", "physics.optics" ]
APS/123-QED Wyant College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA The pursuit of room temperature quantum optomechanics with tethered nanomechanical resonators faces stringent challenges owing to extraneous mechanical degrees of freedom. An important example is thermal intermodulation noise (TIN), a form of excess optical noise produced by mixing of thermal noise peaks. While TIN can be decoupled from the phase of the optical field, it remains indirectly coupled via radiation pressure, implying a hidden source of backaction that might overwhelm shot noise. Here we report observation of TIN backaction in a high-cooperativity, room temperature cavity optomechanical system consisting of an acoustic-frequency Si_3N_4 trampoline coupled to a Fabry-Pérot cavity. The backaction we observe exceeds thermal noise by 20 dB and radiation pressure shot noise by 40 dB, despite the thermal motion being 10 times smaller than the cavity linewidth. Our results suggest that mitigating TIN may be critical to reaching the quantum regime from room temperature in a variety of contemporary optomechanical systems. Thermal intermodulation backaction in a high-cooperativity optomechanical system Dalziel J. Wilson August 1, 2023 ================================================================================ Room temperature quantum experiments are a longstanding pursuit of cavity optomechanics <cit.>, spurred by the promise of fieldable quantum technologies <cit.> and simple platforms for fundamental physics tests <cit.>. Recently, ground state cooling has been achieved from room temperature using levitated nanoparticles <cit.>. Ponderomotive squeezing has also been achieved at room temperature, using both levitated nanoparticles <cit.> and an optically stiffened cantilever <cit.>. Despite this progress, however, including the recent development of ultracoherent nanomechanical resonators <cit.>, room temperature quantum optomechanics with rigidly tethered nanomechanical resonators (e.g. strings and membranes) remains elusive, limited to signatures of weak optomechanical quantum correlations <cit.> and cooling to occupations of greater than 10 <cit.>. Overcoming this hurdle is important because tethered nanomechanical resonators are readily functionalized and integrated with chip-scale electronics <cit.>, features that form the basis for optomechanical quantum technologies <cit.>. A key obstacle to room temperature quantum optomechanics is thermal intermodulation noise (TIN) <cit.>, a form of excess optical noise produced in cavity optomechanical systems (COMS) due to the mixing of thermomechanical noise peaks. TIN is especially pronounced in high-cooperativity tethered COMS <cit.>, which commonly employ nanomechanical resonators with free spectral ranges orders of magnitude smaller than the cavity linewidth <cit.>. In conjunction with the cavity's transduction nonlinearity <cit.>, this high mode density can give rise to spectrally broadband TIN orders of magnitude in excess of shot noise <cit.>—a severe impediment to protocols that rely on the observability of quantum backaction, such as ground state cooling <cit.> and squeezing <cit.>. Previous reports of TIN focus on its distortion of cavity-based measurement and methods to reduce it <cit.>. These include reducing optomechanical coupling and cavity finesse, operating at a “magic detuning" where the leading (quadratic) transduction nonlinearity vanishes <cit.>, multi-mode cooling <cit.>, and enhancing mechanical Q. 
Proposals for cavity-free quantum optomechanics with ultracoherent nanomechanical resonators represent an extreme solution <cit.>. A promising compromise exploits wide-bandgap phononic crystal nanoresonators in conjunction with broadband dynamical back-action cooling <cit.>. Another key insight is that the phase of a resonant cavity probe is insensitive to TIN, allowing efficient feedback cooling even in the presence of thermal nonlinearity <cit.>. With this letter, we wish to highlight a complementary form of TIN—stochastic radiation pressure backaction (TINBA)—that is resilient to some of the above methods and poses an additional obstacle to room temperature quantum optomechanics. While previous studies have observed the effect of thermal nonlinearity on dynamical backaction—as an inhomogeneous broadening of the optical spring and damping effects <cit.>—TINBA, a type of classical intensity noise heating, is subtler, as it appears only indirectly in cavity-based measurements, and can dominate quantum backaction (QBA) even when the thermal nonlinearity is small. As a demonstration, we study TIN in a high cooperativity, room temperature COMS based on an acoustic-frequency Si_3N_4 trampoline coupled to Fabry-Pérot cavity (a popularly proposed system for room temperature quantum experiments <cit.>). Despite the thermal motion of the trampoline being 10 times smaller than the cavity linewidth, we observe TINBA as high as 20 dB in excess of thermal noise and an estimated 40 dB in excess of QBA. We show that this noise can be precisely modeled, despite its apparent complexity, and explore tradeoffs to mitigating it via its strong dependence on temperature, § THEORY: QBA VERSUS TINBA As illustrated in Fig. <ref>, TINBA arises due to a combination of transduction nonlinearity and radiation pressure in cavity-based readout of a multimode mechanical resonator. We here consider the observability of QBA in the presence of TINBA, focusing on a single mechanical mode with displacement coordinate x and frequency ω_m. As a figure of merit, we take the quantum cooperativity C_q = S^QBA_x/S_x^tot - S^QBA_x≈S^QBA_x/S^th_x + S^TIN_x where S_x^QBA[ω] is the single-sided power spectral density of displacement (x) produced by QBA, S_x^tot[ω] is the total displacement spectral density—including thermal motion S_x^th[ω] and TINBA S_x^TIN[ω], defined below—and S_x[ω_m]≡ S_x denotes the spectral density on resonance. To build a model for C_q, we first consider a single-mode COMS characterized by an optomechanical coupling ω_c(x) = ω_c,0+G x. where ω_c is the cavity resonance frequency and G is the optomechanical coupling strength. In the small displacement limit, the coupled Langevin equations describing this system are <cit.> mẍ+mΓẋ + mω^2 x = F_th+ħ G|a^2| ȧ = (-i (ω_0-ω_c(x)) +κ)a + √(κ_in)s_in where Eq. <ref>a describes the displacement of a mechanical oscillator with mass m, resonance frequency ω_m, and damping rate Γ_m, driven by a thermal force F_th and a radiation pressure backaction force F_BA = ħ G |a^2|; and Eq. <ref>b describes the complex amplitude a of the cavity field with energy decay rate κ, driven at rate κ_in by an input field with amplitude s_in and frequency ω_0, and normalized so that |a|^2 = n_c is the intracavity photon number and |s_in|^2 is the input photon flux. Linearizing Eq. 
<ref> about small fluctuations in a yields S_x[ω] = |χ_eff(ω)|^2 (S_F^th[ω]+S_F^QBA[ω]) where χ_eff (ω) ≈ (1/m)/(ω^2 - ω_eff^2 - i Γ_effω) is the effective mechanical susceptibility accounting for optical stiffening k_opt = mω_eff^2-mω_m^2 and damping Γ_opt = Γ_eff-Γ_m <cit.>, S^th_F[ω] ≈ 4 k_B T m Γ_m, is the thermal force power spectral density <cit.> assuming a bath temperature T≫ħω_m/k_B, and S_F^QBA [ω]= (ħ G n_c)^2 S_RIN^shot[ω] = (ħ G n_c)^28/(n_cκ)/1+4(ω+Δ/κ)^2 is the QBA force produced by shot fluctuations of the intracavity photon number S_n_c^shot[ω] <cit.>, here expressed as a relative intensity noise S_RIN^shot[ω]= S_n_c^shot[ω]/n_c^2, and Δ = ω_0 - ω_c,0 is the laser-cavity detuning. We hereafter specialize to the fast cavity limit ω_m≪κ, in which most room temperature quantum optomechanics experiments with tethered mechanical resonators operate, including ours. In this case C_q > 1 requires S_F^QBA/S_F^th = C_0 n_c/n_th1/1+4Δ^2/κ^2>1 where n_th = k_B T/ħω_0 is the thermal bath occupation and C_0 = 2G^2 ħ/mω_mΓ_mκ = 4 g_0^2/κΓ_m is the vacuum optomechanical cooperativity, expressed on the right-hand side in terms of the vacuum optomechanical coupling rate g_0 = Gx_ZP, where x_ZP= √(ħ/(2mω_m)) is the oscillator's zero-point motion. Turning to TIN, we first re-emphasize that Eq. <ref> considers only a single mechanical mode whose displacement is perturbatively small, G x ≪κ. In practice, however, operating in the fast cavity limit using tethered nanomechanical resonators usually implies that many mechanical modes are simultaneously coupled to the cavity: ω_c = ω_c,0+∑_i G_i x_i. Moreover, for a single mode designed for high cooperativity (Eq. <ref>), low stiffness mω_m^2 and high temperature can readily lead to a root-mean-squared thermal displacement x_th exceeding the nonlinear transduction threshold x_th = √(k_B T/mω_m^2)≳κ/G∼λ/ℱ where ℱ is the cavity finesse. As explored in <cit.>, the combination of these features—multiple mechanical modes exhibiting thermal nonlinearity—can lead to broadband TIN S_n_c^TIN due to the mixing of thermal noise peaks. This in turn gives rise to a TINBA force S_F^TIN[ω]=(ħ G n_c)^2 S_RIN^TIN[ω] which, like classical laser intensity noise <cit.>, can drive the mechanical resonator in excess of QBA. To analyze TINBA in the fast cavity limit, it suffices to consider the steady-state dependence of n_c on detuning, expanded to second order in deviations from the mean value Δ. For convenience, following <cit.>, we define the relative detuning ν = 2Δ /κ and its deviation δν, yielding n_c(δν) ∝ 1-2ν/1+ν^2δν+3ν^2-1/(1+ν^2)^2δν^2. For a single mechanical mode, with δν = 2G x/κ, the second term in Eq. <ref> corresponds to the optical spring force F_BA(x)= k_opt(ν) x. For a multimode optomechanical system, with δν = ∑_n 2 G_n x_n/κ, the third term gives rise to intermodulation noise. To see this, using Wick's theorem <cit.>, the spectrum of ν^2 can be expressed as the self-convolution of double-sided linear spectrum S_νν[ω] S_ν^2[ω] = 4 ∫^∞_-∞ S_νν [ω'] S_νν [ω - ω'] dω'/2π, where S_νν[ω] = ∑_n 4 G_n^2/κ^2 S_xx^n[ω] is the cavity frequency noise including all mechanical modes for which ω_n≲κ. The resulting TIN S_RIN^TIN[ω] = (3ν^2-1)^2/(1+ν^2)^4 S_ν^2[ω] gives rise to a TINBA force (Eq. <ref>) S_F^TIN[ω] = (ħ G n_c)^2 (3ν^2-1)^2/(1+ν^2)^4 S_ν^2[ω]. Three features of TINBA bear emphasis. First, unlike QBA or photothermal heating (which both scale as S_x∝ n_c), TINBA scales quadratically with n_c. 
Second, unlike the optical spring, TINBA does not vanish on resonance (ν = 0). In fact, it is maximal in this case, simplifying to S_F^TIN[ω,ν = 0] = (ħ G n_c)^2 S_ν^2[ω] corresponding to S_RIN^TIN[ω] = S_ν^2[ω]. Third, there exists a “magic” detuning |ν| = 1/√(3) at which TINBA vanishes, S_F^TIN[ω,ν = ± 1/√(3)] = 0, corresponding to ∂^2 n_c/∂ν^2 = 0. However, at this detuning, the optical spring is maximized, possibly leading to instability (k_opt≈ -mω_m^2) for large n_c. With these features in mind, we suggest three conditions for observing QBA (C_q≳ 1) in the presence of TIN, valid for all detunings in the bad cavity limit: n_c ≳ (1+ν^2)n_th/C_0, Q_m ≳ (2ν/(1+ν^2)) n_th, and S_RIN^TIN ≲ (1/(1+ν^2)^2)(2 S_ν^ZP/n_th), where Q_m=ω_m/Γ_m is the mechanical quality factor and S_ν^ZP = (4g_0^2/Γ_m)/κ^2 is the normalized zero-point detuning spectral density, related to the zero-point displacement spectral density S_x^ZP = S_x^th/(2n_th) = 4 x_ZP^2/Γ_m by S_ν^ZP=G^2 S_x^ZP/κ^2. The first two conditions are independent of TIN and correspond to S_F^QBA>S_F^th and ω_eff>0, respectively. The last condition implies S_F^TIN<S_F^th when n_c>n_th/C_0, and is given by minimizing the relation C_q = (1/(1+ν^2)) (S_RIN^TIN(G,κ,ν)/(8/(κ n_c)) + n_th/(C_0 n_c))^-1 where we have emphasized the dependence of S_RIN^TIN on system parameters and detuning. In the next section, we explore the requirements in Eq. <ref> in a popular membrane-in-the-middle platform, and show <ref>c may not be met even if <ref>a and <ref>b are. § TRAMPOLINE-IN-THE-MIDDLE SYSTEM Our optomechanical system consists of a Si_3N_4 trampoline resonator coupled to a Fabry-Pérot cavity in the membrane-in-the-middle configuration <cit.>—hereafter dubbed “trampoline-in-the-middle” (TIM). As shown in Fig. <ref>a, the quasi-monolithic cavity is assembled by sandwiching the Si device chip between a concave (radius 10 cm) and a plano mirror, with the trampoline positioned nearer the plano mirror to minimize diffraction loss. (This is achieved by etching the plano mirror into a mesa-like structure <cit.>, as shown in Fig. <ref>a.) A relatively short cavity length of L = 415 μm is chosen to reduce sensitivity to laser frequency noise <cit.>. The bare (without trampoline) cavity finesse is as high as ℱ=3× 10^4 at wavelengths near the coating center, λ = 850 nm. To reduce thermal nonlinearity, ℱ is reduced to ≈ 550 by operating at λ≈ 786 nm, yielding a cavity decay rate of κ = 2π× 0.65 GHz. Si_3N_4 trampolines are popular candidates for room temperature quantum optomechanics experiments <cit.>, owing to their high Q-stiffness ratio. We employ an 85-nm-thick trampoline with a 200× 200 μm^2 central pad, 1700×4.2 μm^2 tethers, and tether fillets designed to optimize strain-induced dissipation dilution of the fundamental mode <cit.>, yielding Q_m = 2.6× 10^7 <cit.>, ω_m≈ 2π× 41 kHz, and a COMSOL-simulated effective mass of m = 12 ng. Care was taken to mount the trampoline without introducing loss; nevertheless, two small dabs of adhesive reduced Q_m to 7.8× 10^6 (we speculate that this is due to hybridization with low-Q chip modes <cit.>, as evidenced by the satellite noise peaks in Fig. <ref>). The resulting thermal force noise S_F^th≈ 8 × 10^-17 N/√(Hz) is in principle sufficient to observe QBA with an input power of ∼ 10 mW at ℱ∼ 10^3. Challenging this prospect is the fact that the trampoline's thermal motion, x_th = 0.07 nm, is near the nonlinear transduction threshold (Eq. <ref>) at ℱ∼ 10^3.
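As a rough numerical cross-check of the device parameters quoted above, the following minimal Python sketch estimates the zero-point motion, thermal force noise, rms thermal displacement, and thermal occupation. It is not part of the original analysis; the only quantity not stated explicitly in the text is the bath temperature, which we take to be T ≈ 295 K (room temperature).

# Hedged numerical cross-check of the trampoline-in-the-middle parameters quoted above.
# Assumed value not stated explicitly in the text: bath temperature T = 295 K.
import numpy as np

kB, hbar = 1.380649e-23, 1.054571817e-34
T = 295.0                       # assumed bath temperature (K)
m = 12e-12                      # effective mass (kg), 12 ng
omega_m = 2*np.pi*41e3          # mechanical frequency (rad/s)
Q_m = 7.8e6                     # quality factor after mounting
Gamma_m = omega_m / Q_m         # mechanical damping rate (rad/s)
lam, F = 786e-9, 550            # operating wavelength (m) and finesse

x_zp = np.sqrt(hbar/(2*m*omega_m))       # zero-point motion (m)
S_F_th = np.sqrt(4*kB*T*m*Gamma_m)       # thermal force noise (N/sqrt(Hz))
x_th = np.sqrt(kB*T/(m*omega_m**2))      # rms thermal displacement (m)
n_th = kB*T/(hbar*omega_m)               # thermal phonon occupation

print(f"x_zp   = {x_zp:.2e} m")
print(f"S_F^th = {S_F_th:.1e} N/rtHz   (text quotes ~8e-17)")
print(f"x_th   = {x_th*1e9:.2f} nm      (text quotes 0.07 nm)")
print(f"lam/F  = {lam/F*1e9:.2f} nm      (nonlinear transduction scale)")
print(f"n_th   = {n_th:.1e}")

Under these assumptions the printed values reproduce the S_F^th, x_th, and n_th figures quoted in this section.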
Moreover, κ/ω_m∼ 10^4 allows many higher-order trampoline modes to be coupled to the cavity field (Fig. 2b), satisfying the conditions for strong TIN. Measurements characterizing the nonlinearity and cooperativity of our TIM system are shown in Fig. <ref>d-e. A hallmark of thermal nonlinearity is modulation of the steady-state cavity response, as shown in Fig. <ref>d. Here the cavity length is swept across resonance with strong optomechanical coupling, corresponding to the trampoline positioned between nodes of the intracavity standing wave <cit.>. Fitting the transmitted power to P_out(ν(t)) ∝ 1/(1+(ν +8G x_thcos(ω_mν/ν̇)/κ))^2) yields G x_th/κ≈ 0.04 <cit.>, corresponding to g_0 = G x_th/√(2 n_th)∼ 2π× 1 kHz. A more careful estimate of g_0 = 2π× 1.5 kHz was obtained using the frequency sideband calibration method <cit.>, suggesting a vacuum cooperativity of C_0 ≈ 3. In the experiment below, for each input power P_in, the intracavity photon number n_c is determined by recording the optical spring shift versus detuning and comparing to a model, Δω_opt≈ k_opt/(2mω_m) ≈Γ_mC_0 n_c(ν=0) ν/(1+ν^2)^2 (Fig. <ref>e). For all measurements, the thermal nonlinearity is reduced by measurement-based feedback cooling of the fundamental mode. Full details and methods are presented in the SI <cit.>. § OBSERVATION OF TIN BACKACTION We now explore TIN in our TIM system and present evidence that TINBA overwhelms QBA at a sufficiently high intracavity photon number. To this end, the cavity is probed on resonance (ν = 0) with a Titanium-Sapphire laser (M-Squared Solstis) prestabilized with a broadband electro-optic intensity noise eater <cit.>. The output field s_out∝√(κ)a is monitored with two detectors. A balanced homodyne detector records the phase of s_out, which encodes the trampoline displacement, Arg[s_out]∝ x with a shot-noise-limited imprecision of S_x^imp≥ S_x^ZP/(8 C_0 n_c). An avalanche photodiode (APD) records the output intensity |s_out|^2 ∝ n_c and is also used to lock the cavity using the Pound-Drever-Hall technique <cit.>. TIN couples directly to intracavity intensity and indirectly to mechanical displacement, via TINBA. We explore this in Fig. <ref>, by comparing the intensity and phase fluctuations of an optical field passed resonantly through the TIM cavity. An input power of P_in = 0.2 mW is used, corresponding to n_c = 4η P_in/(ħω_cκ) = 4× 10^5 intracavity photons (with η=0.4 determined from the optical spring shift) and an ideal quantum cooperativity of C_q = 7 × 10^-3. The phase noise spectrum is calibrated in displacement units by bootstrapping to a model for the fundamental thermomechanical noise peak <cit.>, yielding the apparent displacement noise spectrum, S_y[ω]≡ S_x[ω]+S_x^imp[ω]. As seen in Fig. <ref>a, S_y[ω] is dominated by thermal noise near mechanical resonances and shot noise far from resonance. The intensity noise (Fig. <ref>b) meanwhile exceeds shot noise and features numerous peaks at intermediate frequencies, suggestive of TIN. To confirm this hypothesis, in <ref>b, we overlay the measured intensity noise with our TIN model (Eq. <ref>) using the phase-noise-inferred thermomechanical noise S_y[ω]-S_x^imp[ω] ≈ S_x^th [ω] as an input. We observe strong agreement over the full measurement band, as highlighted in Fig. <ref>b. We now turn our attention to the spurious peaks in the phase noise spectrum in Fig. <ref>a, which we argue is displacement produced by TINBA. To this end, in Fig. 
<ref>, we compare S_y[ω] at different probe powers, focusing on Fourier frequencies near the fundamental trampoline resonance. Qualitatively, as shown in Fig. <ref>a-b, we observe an increase in the apparent displacement at larger powers, with a shape that is consistent with the measured intensity noise multiplied by the mechanical susceptibility. To confirm that this is TINBA, in Fig. <ref>d we plot S_y[ω_m+δ] at an offset δ≫Γ_m versus input power in the range P_in∈[10 μW,1.1 mW] (n_c∈ [1.7× 10^4,2.2× 10^6]). The observed quadratic scaling with P_in is consistent with TINBA and distinct from QBA and photoabsorption heating. The absolute magnitude of the displacement moreover agrees quantitatively well with our TINBA model, S_x[ω_m+δ] ≈ S_RIN[ω_m+δ]/(mω_m^2γ_m^2)^2 (black line), allowing for statistical uncertainty (gray shading) due to fluctuations in the measured S_RIN[ω]. As an additional consistency check, we measure the coherence between the phase and intensity noise, allowing us to rule out artifacts such as imperfect cancellation of intensity noise in the balanced homodyne receiver. We define the coherence between signals a and b as C_ab[ω] = |S_ab[ω]|^2 / (S_a[ω] S_b[ω]) <cit.>, where S_ab[ω] is the cross-spectrum of a and b, so that C_ab[ω] = 1 for perfectly correlated or anti-correlated signals, and Arg[S_ab] characterizes the relative phase of the correlated signal components. Fig. <ref>c shows the coherence of the phase and intensity noise measurements with n_c = 2 × 10^6, together with a model that predicts the coherence based on the measured TIN noise and the mechanical susceptibility. A high degree of coherence is observed over a ∼ 100 kHz bandwidth surrounding the fundamental resonance. Moreover, near resonance, the argument of the coherence undergoes a π-phase shift, indicative of the response of the mechanical susceptibility, and consistent with TINBA-driven motion. Full details about models and measurements can be found in the SI <cit.>. § IMPLICATIONS FOR QUANTUM OPTOMECHANICS EXPERIMENTS TIN backaction currently limits the quantum cooperativity of our TIM system. For example, at the highest intracavity photon number in Fig. <ref>, n_c = 2× 10^6, S_x^TIN/S_x^th≈ 10^2 and S_x^TIN/S_x^QBA = (S_x^TIN/S_x^th)/C_q^0 ≈ 2500, implying that C_q≈ 4× 10^-4 instead of the ideal value, C_q^0 = C_0 n_c/n_th = 0.04. As implied by Eq. <ref>c, this could have been anticipated by comparing the measured TIN to the a priori zero-point frequency noise S_ν^ZP = C_0/κ≈ 7× 10^-10 scaled by the thermal occupation n_th≈ 1.5× 10^8. In our case, S_RIN^TIN≈ 10^-11 Hz^-1, yielding C_q≲ 10^-3 according to Eq. <ref>: C_q(ν = 0) = (S_RIN^TIN/(8/(κ n_c)) + n_th/(C_0 n_c))^-1 ≤ √(2 S_ν^ZP/(n_th S_RIN^TIN)) The lower bound of Eq. <ref> (and more generally, Eq. <ref>) applies to any form of classical intensity noise, and results from the fact that the classical intensity-noise force grows quadratically with n_c, whereas the shot-noise force grows only linearly. There is, therefore, always a probe strength at which classical backaction overwhelms QBA. To increase this threshold, one must either increase the zero-point spectral density or reduce the intensity noise, leveraging, if possible, the different scaling of these noise terms with system parameters. In the case of TINBA, which originates from thermal nonlinearity (Eq. <ref>), inroads can be made by leveraging the dependence of the nonlinearity on G, κ, and ν.
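A minimal numerical sketch of this bound, using the representative values quoted above and assuming ν = 0, illustrates how the classical TIN term caps C_q regardless of probe power:

# Hedged sketch of the TIN-limited quantum cooperativity bound discussed above,
# using the representative numbers quoted in the text (C_0 ~ 3, n_th ~ 1.5e8,
# kappa = 2*pi*0.65 GHz, S_RIN^TIN ~ 1e-11 /Hz at nu = 0).
import numpy as np

kappa = 2*np.pi*0.65e9       # cavity decay rate (rad/s)
C0 = 3.0                     # vacuum optomechanical cooperativity
n_th = 1.5e8                 # thermal phonon occupation
S_RIN_TIN = 1e-11            # measured TIN relative intensity noise (1/Hz)

def C_q(n_c):
    """Quantum cooperativity at nu = 0 in the presence of TIN backaction."""
    classical = S_RIN_TIN*kappa*n_c/8.0     # S_RIN^TIN / (8/(kappa*n_c))
    thermal = n_th/(C0*n_c)
    return 1.0/(classical + thermal)

n_c = np.logspace(4, 8, 200)
bound = np.sqrt(2*(C0/kappa)/(n_th*S_RIN_TIN))   # optimum over n_c
print(f"max over n_c of C_q : {C_q(n_c).max():.2e}")
print(f"analytic bound      : {bound:.2e}   (text: C_q <~ 1e-3)")
print(f"ideal C_q0 at n_c=2e6: {C0*2e6/n_th:.2e}  (text: 0.04)")

The maximum of C_q over n_c coincides with the analytic bound, near 10^-3 for these parameters, consistent with the estimate above.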
For example, the fact that S_F^TIN∝ (G/κ)^4 and S_F^QBA∝ (G/κ)^2 suggests that TINBA can be mitigated by using a higher κ (lower finesse ℱ) cavity. In Fig. <ref>b we consider this strategy for our TIM system using the above experimental parameters and ν=0. Evidently, C_q∼ 1 is possible with 100-fold lower ℱ; however, it would require a proportionately larger laser power. This is problematic because of photothermal heating and increased demands on classical laser intensity noise suppression. Fig. <ref>a-b shows the same computation at T = 4 K, revealing that power demands are relaxed in proportion to T, since S_F^TIN∝ T^2. This observation re-affirms the demanding nature of room temperature quantum optomechanics and, conversely, the advantages of cryogenic pre-cooling. Finally, we re-emphasize the strong detuning dependence of TIN and TINBA. Operating at the magic detuning ν = 1/√(3), as shown in Fig. <ref>c, can eliminate TIN in the optical intensity; however, as evident in the blue data, the phase response of the cavity becomes nonlinear, potentially preventing the observation of quantum correlations generated via the optomechanical interaction. Moreover, in the regime of strong QBA, the associated optical spring (maximal at ν = 1/√(3) in the fast cavity limit) can be substantial. To avoid instability (ω_eff=0), one strategy is to use a dual-wavelength probe with ν = ± 1/√(3), but this doesn't resolve the phase nonlinearity issue. Another promising strategy—not considered in our theoretical analysis—is to exploit optical damping at ν ≠ 0 to realize multi-mode cooling <cit.>. The success of this strategy will depend on the details of the system and may benefit from operating in an intermediate regime between the fast and slow cavity limit. § CONCLUSION We have explored the effect of thermal intermodulation noise (TIN) on the observability of radiation pressure quantum backaction (QBA) in a room temperature cavity optomechanical system and argued that TIN back-action (TINBA) can overwhelm QBA under realistic conditions. As an illustration, we studied the effects of TINBA in a high-cooperativity trampoline-in-the-middle cavity optomechanical system and found that it overwhelmed thermal noise and QBA by several orders of magnitude. The conditions embodied by our TIM system—transduction nonlinearity, large thermal motion, and a multi-mode mechanical resonator—can be found in a wide variety of room-temperature quantum optomechanics experiments based on tethered nanomechanical resonators, including an emerging class of systems based on ultracoherent phononic crystal nanomembranes and strings <cit.>. Anticipating and mitigating TINBA in these systems may therefore be a key step to operating them in the quantum regime. In addition to increasing Q, a program combining multi-mode coherent <cit.> or measurement-based <cit.> feedback cooling, dual probes at the “magic detuning” <cit.>, and/or engineering of the effective mass and frequency spectrum <cit.>, may be advantageous. § ACKNOWLEDGEMENTS This work is supported by NSF grant ECCS-1945832. CMP acknowledges support from the ARCS Foundation, an Amherst College Fellowship, and a Grant in Aid of Research from Sigma Xi. ARA acknowledges support from a CNRS-UArizona iGlobes fellowship. Finally, the reactive ion etcher used for this study was funded by an NSF MRI grant, ECCS-1725571.
http://arxiv.org/abs/2307.01599v1
20230704093854
A Scalable Reinforcement Learning-based System Using On-Chain Data for Cryptocurrency Portfolio Management
[ "Zhenhan Huang", "Fumihide Tanaka" ]
q-fin.PM
[ "q-fin.PM", "cs.LG" ]
[email protected] [email protected] University of Tsukuba 1-1-1 Tennodai Tsukuba Ibaraki Japan 305-8577 On-chain data (metrics) of blockchain networks, akin to company fundamentals, provide crucial and comprehensive insights into the networks. Despite their informative nature, on-chain data have not been utilized in reinforcement learning (RL)-based systems for cryptocurrency (crypto) portfolio management (PM). An intriguing subject is the extent to which the utilization of on-chain data can enhance an RL-based system's return performance compared to baselines. Therefore, in this study, we propose CryptoRLPM, a novel RL-based system incorporating on-chain data for end-to-end crypto PM. CryptoRLPM consists of five units, spanning from information comprehension to trading order execution. In CryptoRLPM, the on-chain data are tested and specified for each crypto to solve the issue of ineffectiveness of metrics. Moreover, the scalable nature of CryptoRLPM allows changes in the portfolios' cryptos at any time. Backtesting results on three portfolios indicate that CryptoRLPM outperforms all the baselines in terms of accumulated rate of return (ARR), daily rate of return (DRR), and Sortino ratio (SR). In particular, when compared to Bitcoin, CryptoRLPM enhances the ARR, DRR, and SR by at least 83.14%, 0.5603%, and 2.1767, respectively. A Scalable Reinforcement Learning-based System Using On-Chain Data for Cryptocurrency Portfolio Management Fumihide Tanaka ========================================================================================================== § INTRODUCTION Blockchain networks or platforms, each with its native cryptocurrency (crypto), are numerous today. Analogously, the blockchain can be compared to a company, while cryptocurrency is akin to its publicly traded shares. On-chain data, or on-chain metrics of a blockchain network, are like a company's fundamentals. Just as fundamentals disclose significant information about a company, on-chain data provide precise, comprehensive records of a blockchain network. Cryptocurrency valuations are influenced by factors including typical on-chain metrics such as circulating supply, exchange flows, and balance on exchanges. Most on-chain data are real-time, sequentially recorded, capturing operational details and metrics of a specific blockchain network and its native cryptocurrency. Due to the aforementioned nature of on-chain data, people aspire to utilize and incorporate on-chain data into their systems for price prediction and quantitative trading <cit.>, since the price of crypto can be determined by multiple factors, e.g., hash rate, a typical on-chain metric. Therefore, the incorporation of on-chain data into quantitative trading systems is naturally expected. However, such utilization of on-chain metrics in an RL-based system for PM has not been implemented so far <cit.>. The extent to which this utilization could help the systems outperform the baselines in terms of return performance is an intriguing question that remains unanswered.
Hence, we propose CryptoRLPM, a novel and scalable end-to-end RL-based system incorporating on-chain data for cryptocurrency PM. CryptoRLPM, a mid-frequency (10-to-30-minute) PM system, consists of five units covering the process from information comprehension to trading order execution. On-chain metrics are tested and specified for each cryptocurrency, overcoming the issue of metric ineffectiveness. Additionally, we introduce the Crypto Module (CM), based on MSPM <cit.>, to ensure scalability and reusability. Each CM reallocates a single-asset portfolio, including a risk-free asset (cash), necessitating the use of n CMs for an n-asset portfolio. This setup enables trained CMs to be reusable for the reallocation of any given portfolios. Furthermore, this setup facilitates CryptoRLPM to allow scalable portfolios, with the underlying cryptocurrencies of the portfolios able to be changed at any time as desired. Backtesting with three portfolios constructed for this study, CryptoRLPM demonstrates positive accumulated rate of return (ARR), daily rate of return (DRR), and Sortino ratio (SR), outperforming all the baseline. Specifically, CryptoRLPM shows at least a 83.14% improvement in ARR, at least a 0.5603% improvement in DRR, and at least a 2.1767 improvement in SR, compared to the baseline Bitcoin. To the best of our knowledge, CryptoRLPM is the first RL-based system adopting on-chain metrics comprehensively for cryptocurrency PM. The benchmarking results indicate that CryptoRLPM robustly outperforms the baselines. § METHODOLOGY CryptoRLPM is structured into five primary units, which collectively cover the entire process from information comprehension to trading order execution: (i) Data Feed Unit (DFU), (ii) Data Refinement Unit (DRU), (iii) Portfolio Agent Unit (PAU), (iv) Live Trading Unit (LTU), and (v) Agent Updating Unit (AUU). The architecture of CryptoRLPM is illustrated in <ref>. The five units are interrelated, with each one responsible for at least one distinct task. From a holistic perspective, the Data Feed Unit (DFU) and Data Refinement Unit (DRU) function as the base units related to data generation. The Portfolio Agent Unit (PAU) is responsible for the initial training of RL agents for one or more portfolios. The Live Trading Unit (LTU) and the Agent Updating Unit (AUU) handle the live trading functionality, as well as the maintenance of the agent and the reallocation of portfolios. In the subsequent sections, we will break down and explain the technical details and tasks of each unit. However, the introductions to the LTU and AUU will be rather conceptual, as the purpose of this study is to validate the viability and outperformance of CryptoRLPM. While we do not intend to conduct live trading using CryptoRLPM in this study, we plan to present the implementation of its live trading functionality in future studies. §.§ Data Feed Unit (DFU) The Data Feed Unit (DFU) is the most fundamental unit of CryptoRLPM, controlling the acquisition of data both for initial model training and for subsequent ongoing data feed requirements during live trading and model retraining. The system design of DFU is displayed in <ref>. §.§.§ Data Retrieval After confirming the portfolio's underlying cryptocurrencies, the DFU retrieves historical price data and on-chain metrics using Binance REST API and Santiment's SanAPI (SanAPI Basic Subscription) respectively <cit.>. The retrieved data are stored in two separate SQLite databases. 
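To make the retrieval step concrete, a minimal, hedged Python sketch is given below. It pulls 6-hour OHLCV candles for one crypto from Binance's public klines endpoint and writes them to SQLite; the table name and column choices are illustrative assumptions, not the exact implementation used in CryptoRLPM, and the analogous SanAPI (GraphQL) call for on-chain metrics is omitted.

# Hedged sketch of the DFU's price-data retrieval: fetch 6-hour OHLCV candles from
# Binance's public klines endpoint and store them in a SQLite database.
import requests, sqlite3
import pandas as pd

def fetch_ohlcv(symbol="BTCUSDT", interval="6h", limit=1000):
    r = requests.get("https://api.binance.com/api/v3/klines",
                     params={"symbol": symbol, "interval": interval, "limit": limit},
                     timeout=30)
    r.raise_for_status()
    cols = ["open_time", "open", "high", "low", "close", "volume"]
    rows = [k[:6] for k in r.json()]          # klines carry extra fields we drop here
    df = pd.DataFrame(rows, columns=cols).astype(float)
    df["open_time"] = pd.to_datetime(df["open_time"], unit="ms")
    return df

if __name__ == "__main__":
    ohlcv = fetch_ohlcv("BTCUSDT")
    with sqlite3.connect("price_data.sqlite") as con:
        ohlcv.to_sql("ohlcv_btcusdt", con, if_exists="replace", index=False)
    # On-chain metrics would be fetched analogously through Santiment's SanAPI and
    # written to a second SQLite database, as described above.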
The Data Refinement Unit (DRU) will fetch the stored data, and then feed them into the Portfolio Agent Unit (PAU) for model training, Live Trading Unit (LTU) for live trading, and Agent Updating Unit (AUU) for model retraining. §.§.§ On-chain Metrics On-chain metrics refer to the information generated from the decentralized ledgers of blockchains. For instance, Daily Active Addresses, which represent the number of distinct addresses participating in a transfer of a given crypto on a specific day, indicate the daily level of crowd interaction (or speculation) with that crypto <cit.>. Since most blockchains have their own native cryptocurrencies, the on-chain metrics of a specific blockchain provide insights into its real-time status and ongoing activities. If we liken a blockchain to a public company, the blockchain's crypto resembles the company's stock, while on-chain metrics mirror its fundamentals. On-chain metrics, due to blockchain's decentralized nature, offer more accurate and transparent measurements than traditional company fundamentals, and are continually public and recorded in real time. As per the Efficient Market Hypothesis (EMH) <cit.>, a blockchain's crypto valuation presumably reflects all accessible information, including on-chain metrics. Therefore, it is hypothetically to anticipate the incorporation of on-chain data into quantitative trading systems. Nevertheless, to the best of our knowledge, such an integration into an RL-based PM system remains unexplored so far. Available Metrics: The on-chain metrics employed in this study are those available under the SanAPI Basic Subscription Plan, and vary depending on different crypto. Given that on-chain and social metrics are often intertwined on API platforms and in practical applications, we do not distinguish between them in this study; both are considered as on-chain metrics. §.§ Data Refinement Unit (DRU) For any given crypto (e.g., Bitcoin), we conduct correlation tests between the on-chain metrics and three-period returns. <ref> illustrates the system design of the DRU, as indicated by the dashed line. The term "three-period returns" refers to the percentage change (returns) in a crypto's price over periods of 12, 24, and 48 days. For instance, if we employ Bitcoin's daily OHLCV data, then the three-period returns correspond to the percentage changes in Bitcoin's daily closing prices every 12, 24, and 48 days, respectively. §.§.§ Correlation Test for Feature Selection However, the effective metrics in predicting a particular crypto's price may not be applicable to other cryptos, especially considering that not every crypto has the same set of available metrics. This ineffectiveness of metrics has been barely considered in existing studies. Thus, in this study, we design a scheme in the DRU to sort the metrics so that they are specified for each crypto in order to mitigate the issue of ineffective metrics. Our objective is to select valid on-chain metrics from a large pool to construct the environment with which the RL agents interact. To accomplish this, we examine the linear relationship between each of the three-period returns and the on-chain metrics for a specific crypto. This involves determining the Pearson's correlation coefficients between the returns and metrics. The coefficients are divided into three groups, corresponding to the three-period returns. Within each group, the metrics are sorted according to their correlation coefficients, and the highest and lowest five from each group are selected. 
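A minimal Python sketch of this correlation screening is shown below. The DataFrame layout (a daily close column plus one column per on-chain metric) and the alignment of the multi-period returns are our assumptions rather than the exact CryptoRLPM implementation.

# Hedged sketch of the DRU's correlation screening: Pearson correlation of each
# on-chain metric against 12-, 24- and 48-day returns, keeping the five most
# positively and five most negatively correlated metrics per horizon.
import pandas as pd

def screen_metrics(df, price_col="close", horizons=(12, 24, 48), top=5):
    selected = {}
    for h in horizons:
        returns = df[price_col].pct_change(periods=h)   # h-day percentage change
        corr = (df.drop(columns=[price_col])
                  .corrwith(returns)                    # Pearson by default
                  .dropna()
                  .sort_values())
        # lowest `top` (most negative) and highest `top` (most positive) correlations
        selected[h] = list(corr.head(top).index) + list(corr.tail(top).index)
    return selected

# Example (hypothetical daily data for one crypto):
# picks = screen_metrics(btc_daily)   # {12: [...], 24: [...], 48: [...]}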
The selected metrics from all three groups are then ranked by their appearance frequency, and the top-10 metrics are used as valid features to construct the agents' environment in the PAU. Dimension Reduction: To further enhance agent learning efficiency, we apply rolling normalization and rolling PCA to the selected metrics for dimension reduction before feeding them into the subsequent units. The principal components that explain at least 80% of the variance are extracted as the representation of the top-10 metrics and are subsequently fed into the PAU, LTU, and AUU. §.§ Portfolio Agent Unit (PAU) PAU incorporates the key modules from MSPM<cit.>. MSPM is a multi-agent RL-based system designed to address scalability and reusability challenges in RL-based PM. MSPM consists of two key modules, the Evolving Agent Module (EAM) and the Strategic Agent Module (SAM). The EAM leverages a DQN agent to generate asset-specific signal-comprised information. Conversely, the SAM utilizes a PPO agent to optimize the portfolio by connecting with multiple EAMs and reallocating the corresponding assets. As described in <cit.>, the Strategic Agent Modules (SAMs) of MSPM can be built separately rather than jointly. Namely, each SAM reallocates a single-asset portfolio that includes a risk-free asset (i.e., cash). In a similar vein, within CryptoRLPM, we define a Crypto Module (CM) as a composite module, consisting of an Evolving Agent Module (EAM) and a SAM, dedicated to trading a single crypto. Thus, for example, n CMs will be necessary for the reallocation of an n-asset portfolio. With this setup, a trained CM can be integrated into any given portfolio's weighted reallocation alongside other CMs. Furthermore, for efficient training, the EAM within a CM can be optional in certain circumstances, such as when sentiment-included on-chain metrics are fed directly from the DRU to the SAM within the CM. This configuration allows the PAU to be scalable, accommodating variable underlying cryptos in any given portfolio at any time. §.§.§ Settings of PAU <ref> illustrates the system design of the PAU, as framed by the dashed line. For the agent training of crypto x, on-chain metrics are fed into the DRU from the DFU for selection and dimension reduction. Subsequently, these refined metrics, along with the OHLCV data, are transferred from the DRU to the dedicated EAM of crypto x within the PAU. Alternatively, for the sake of efficient training, the refined metrics can be directly fed into the SAM, as shown by the orange dashed line. In this case, the EAM becomes optional, but the high-quality trading signals from the EAM will not be utilized <cit.>. The signals generated by the EAM, in conjunction with the new OHLCV data, constitute the signal-comprised information that is fed into the SAM of crypto x for decision-making. The trained models are stored separately in the Model Storage. The PAU continues to interact with the AUU for model updates and the LTU for live trading. The EAM and SAM settings are adopted from  <cit.> and  <cit.>, albeit with modifications. Detailed descriptions and discussions of these modifications follow: Environment: Each crypto-specific CM is composed of a pair: an EAM (optional) and an SAM. The EAM's RL-based agent interacts with an environment formalized by the historical OHLCV and refined on-chain metrics of the designated crypto. The environment for the SAM's RL-based agent is the combination of signals produced by the trained EAM and new OHLCV, or signal-comprised information. 
Each CM is reusable and periodically retrained by the AUU. State: Within the dedicated CM, the SAM collaborates with the EAM to establish the weight of a specific crypto. The state v_t that the EAM observes at every time step t includes the recent n-interval (e.g., 30-minute) OHLCV and refined on-chain metrics of the designated crypto, where v_t = (s_t, ρ_t), s_t is the n-interval OHLCV, and ρ_t represents the refined on-chain metrics from the DRU. In line with the original SAM setting in MSPM, the state v^+_t observed by the SAM at time step t involves the new historical OHLCV stack s_t and the trading signals a^sigt. Since the SAM in CryptoRLPM is assigned to one crypto, for v^+t∈ℝ^f × m × n, f denotes the number of features (OHLCV and on-chain metrics), m = 2 signifies the designated crypto and cash, and n represents the recent n intervals. Deep Q-network Agent: As introduced previously, both the EAM and SAM use a Deep Q-network (DQN) agent to interact with their environments. Additionally, for the estimates of action-value functions of the EAM and SAM, Q^θEAM(s_t,a_t) and Q^θSAM(s_t,a_t), we continue adopting the settings in <cit.>, using a 1-D convolutional neural network (CNN) and a simple 4-layer CNN architecture for representation, respectively. Action Space of EAM: At every time step t, the DQN agent in the EAM selects an action a_t—either buy, sell, or hold—for the designated crypto. The actions taken by the EAM establish the crypto's trading signal. These actions, stacked with the new OHLCV, are later fed into the SAM within the same CM. Action Space of SAM: In CryptoRLPM, each CM represents a portfolio consisting of the designated crypto and the risk-free asset (cash), which is reallocated by the SAM within it. The SAM of CryptoRLPM assigns full weight to either the risk-free asset or the crypto. In simple terms, at every time step t, the action a_t taken by the SAM of CryptoRLPM is a choice from [0., 1.] or [1., 0.], indicating the reallocation weight of the portfolio of the designated crypto and cash. With this setting, once an SAM is trained, it can be combined with other CMs and integrated into the voted-weight reallocation of any given multi-crypto portfolio. Reward Function: The reward functions for both the EAM and SAM of CryptoRLPM follow the settings in the original MSPM <cit.>. §.§ Live Trading Unit (LTU) As CryptoRLPM aims to be an end-to-end system for cryptocurrency portfolio management, it naturally incorporates a live trading functionality. In this section, we introduce the Live Trading Unit (LTU) of CryptoRLPM, which manages the live reallocation of the portfolio at 10-to-30-minute intervals. The realization of LTU depends on the APIs of specific exchanges; further implementation details will not be discussed here. The system design of LTU, framed by a dashed line, is shown in <ref>. At each n interval, new data are fetched and refined as per the schemes of the first two units. The newly fetched and refined data are fed into PAU for weight inference of the CMs (each corresponding to a designated crypto) in the portfolio. 
The set P_t comprises the reallocation weights obtained from all m CMs (cryptos) of the portfolio at time step t: P_t={(p^1_t, …, p^m_t) | p^i_t ∈ℝ^2 for every i ∈{1, …, m}}, and the voted weight w_t will be formalized as the reallocation weight of the portfolio at time step t: w_t = ∑ p^i_t/m, for i ∈{1, …, m} The formalized reallocation weight w_t of the portfolio is transformed into the format required by the designated exchange's API (e.g., Binance). Whenever the portfolio's weight is updated and formatted, a reallocation request is sent to the exchange via their APIs. §.§ Agent Updating Unit (AUU) The Agent Updating Unit is responsible for scheduled model re-training and unscheduled updates of CMs. After each fixed interval, set in days, the agent models are re-trained, and the portfolio is updated if there are changes to the underlying cryptos, such as scaling or replacing. § EXPERIMENTS §.§ Preliminaries §.§.§ Portfolios We propose three portfolios for our experiments: * Portfolio(a) includes two cryptos: * Names: Bitcoin (BTC) and Storj (STORJ) * Portfolio(b) includes three cryptos, and shares two cryptos with Portfolio(a): * Names: Bitcoin (BTC), Storj (STORJ), and Bluzelle (BLZ) * Portfolio(c) includes four cryptos, and shares three cryptos with Portfolio(b): * Names: Bitcoin (BTC), Storj (STORJ), Bluzelle (BLZ) and Chainlink (LINK) There are four distinct cryptos denominated by USDT <cit.> (a U.S. dollar equivalent stablecoin) included in the three portfolios. The reusability of CM and scalability of PAU allow the application of the trained crypto-designated CMs to different portfolio-designated PAUs, enhancing efficiency in model training. Consequently, we only need to train four CMs for the four cryptos, and organize these CMs in PAUs to represent and reallocate the three portfolios. §.§.§ Data Ranges In our experiments, the DFU retrieves historical 6-hour OHLCV data (s_t) from <cit.> and on-chain metrics (ρ_t) from <cit.>, which are later refined by the DRU. In this study, the refined metrics are directly fed into the CMs' SAMs from the DRU, leveraging the modularized design of the CM and ensuring efficient training. After that, data is split into three subsets: (i) CM(training) from October 2020 to December 2021; (ii) CM(validation) from January 2022 to February 2022; and (iii) CM(backtesting) from March 2022 to September 2022. Notably, the data ranges for different portfolios vary slightly in practice, based on the varying underlying cryptos. <ref> lists the ranges of the datasets. §.§.§ Performance Metrics To measure the performance of the CryptoRLPM system and its baselines, we employ three performance metrics: (i) Daily rate of return (DRR), (ii) Accumulated rate of return (ARR), and (iii) Sortino ratio (SR) <cit.>. Higher values for these metrics often indicate higher performance. §.§ Results and Discussion §.§.§ Backtesting Performance This study primarily aims to validate the feasibility and effectiveness of the proposed system design, and thus, the baselines used for comparison are the historical performances of the underlying cryptos of each portfolio. We conduct backtesting on our CryptoRLPM system and compare its performance against these baselines. As depicted in <ref>, <ref>, and <ref>, CryptoRLPM consistently outperforms the baselines across all three portfolios, achieving positive values in terms of ARR, DRR, and SR, while the baselines yield negative values on ARR.
Specifically, when compared to Bitcoin, CryptoRLPM achieves at least an 83.14% improvement on ARR, a 0.5603% improvement on DRR, and a 2.1767 improvement on SR. These results demonstrate the strong performance of CryptoRLPM in generating capital returns, and validate the system's potential applicability in crypto PM. <ref> details CryptoRLPM's outperformance over the baselines in terms of the ARR, DRR, and SR. It is worth noting that CryptoRLPM achieves a promising SR for all portfolios, which indicates CryptoRLPM's robust profit-making ability and adaptability to the ever-changing market. §.§.§ Scalability of CryptoRLPM and PAU In CryptoRLPM, each crypto is reallocated by a dedicated, decentralized Crypto Module (CM), rendering it a scalable PM system. The scalability means that trained CMs of the underlying cryptos are reusable and changeable for any portfolio. As an example, for a portfolio P_example with three trained CMs/cryptos: [a, b, c], to replace crypto c with a new crypto x, we train a new CM(x), unplug the CM(c), and plug in the trained CM(x). Scaling up or down a portfolio is even easier. To exclude a crypto, say b, we simply unplug CM(b). To add a new crypto y to P_example, we plug in a trained CM(y). <ref> illustrates the scalability of the architecture. Trained CMs for any cryptos are reusable for different portfolios and can be added or removed at will. § LIMITATIONS AND FUTURE WORK In this study, for the efficient training and backtesting of CryptoRLPM, we directly feed refined metrics into PAU from DRU, bypassing the EAMs' trading signals of the CMs. We defer the use of the EAMs' trading signals to future research, and we anticipate this usage may further enhance CryptoRLPM's performance (ARR, DRR and SR). We also intend to include additional baselines for benchmarking, such as conventional PM strategies, e.g., CRP <cit.>, or RL-based methods, e.g. ARL <cit.>. Our focus lies in validating CryptoRLPM's outperformance through backtesting and benchmarking in this study. We plan to present the live trading functionality of CryptoRLPM in future studies. § CONCLUSION We propose CryptoRLPM, a reinforcement learning (RL)-based system incorporating on-chain data for end-to-end cryptocurrency (crypto) portfolio management (PM). CryptoRLPM’s scalability, embodied in its five units, together with the reusability of the Crypto Module (CM), enables changes in portfolios' cryptos at any time, demonstrating the system's adaptability to dynamic market conditions. Additionally, we demonstrate CryptoRLPM's ability to efficiently incorporate on-chain metrics for each crypto, overcoming the challenge of metric ineffectiveness. In backtesting with three portfolios, CryptoRLPM consistently delivered positive accumulated rate of return (ARR), daily rate of return (DRR), and Sortino ratio (SR), outperforming all baselines. In comparison to Bitcoin, a prevalent baseline, CryptoRLPM registers at least an 83.14% improvement in ARR, at least a 0.5603% enhancement in DRR, and at least a 2.1767 improvement in SR. Our study and its findings highlight the substantial potential of integrating on-chain data into RL-based crypto PM systems to enhance return performance. § ACKNOWLEDGEMENT This work was supported by JST SPRING, Grant Number JPMJSP2124.
http://arxiv.org/abs/2307.02513v1
20230705110520
Diophantine equations with three monomials
[ "Bogdan Grechuk", "Tetiana Grechuk", "Ashleigh Wilcox" ]
math.NT
[ "math.NT", "11D41" ]
Diophantine equations with three monomials August 1, 2023 ========================================================================================== We present a general algorithm for solving all two-variable polynomial Diophantine equations consisting of three monomials. Before this work, even the existence of an algorithm for solving the one-parameter family of equations x^4+axy+y^3=0 had been an open question. We also present an elementary method that reduces the task of finding all integer solutions to a general three-monomial equation to the task of finding primitive solutions to equations with three monomials in disjoint variables. We identify a large class of three-monomial equations for which this method leads to a complete solution. Empirical data suggests that this class contains 100% of three-monomial equations as the number of variables goes to infinity. Key words: Diophantine equations, two-variable equations, three-monomial equations, generalized Fermat equations, integer solutions. 2020 Mathematics Subject Classification. 11D41 § INTRODUCTION Diophantine equations are polynomial equations with integer coefficients, usually in two or more variables, for which we are interested in finding integer solutions. This topic is almost as old as mathematics. The name “Diophantine” refers to Diophantus of Alexandria, who studied such equations in the 3rd century. In 1920, Dickson <cit.> published an overview of works in this area up to that date, with essentially no proofs, just an annotated bibliography of results, and it still took a whole volume of 800 pages. Any similar overview today would require dozens of volumes, because Diophantine equations attract mathematicians of all levels, with hundreds of papers and many books published every year. For an introduction to the subject, we recommend the 1969 book of Mordell <cit.>, and more recent excellent books of Cohen <cit.> and Andreescu <cit.>. In 1900, Hilbert <cit.> presented a list of 23 mathematical problems for the next centuries. Hilbert's 10th problem asks for a general method for determining whether a given Diophantine equation has any integer solution. This task looks much easier than describing all integer solutions. Nevertheless, Matiyasevich <cit.>, building on an earlier work of Davis, Putnam and Robinson <cit.>, proved in 1970 that the answer to Hilbert’s 10th problem is negative, and no such general method exists. While solving all Diophantine equations is impossible, researchers focus on developing methods for solving restricted families of equations. For example, one may restrict the degree of the equation or the number of variables. A notable result in this direction is a deep theorem of Grunewald and Segal <cit.>, which proves the existence of a finite algorithm which, given a quadratic equation in any number of variables, determines whether its integer solution set is finite or infinite, and, in the former case, lists all the solutions. Also, a combination of the main theorems in <cit.>, <cit.>, and <cit.> implies the existence of an algorithm for solving all cubic equations in 2 variables. However, there are no known methods for solving cubic equations in 3 variables, or quartic equations in 2 variables, see <cit.>. Another important parameter of a Diophantine equation which we may restrict is the number of its monomials. Equations having at most two monomials are easy to solve <cit.>, while equations with three monomials may be very difficult in general.
A canonical example of a difficult three-monomial equation is of course Fermat's Last Theorem (FLT), resolving equations of the form x^n+y^n=z^n for n≥ 3. It can be viewed as either an exponential Diophantine equation, or an infinite family of polynomial equations. While FLT is now proved by Wiles <cit.>, its natural generalization ax^n + by^m = cz^k, which is called the generalized Fermat equation, remains the subject of current deep research. In most works, the authors are interested to find the primitive solutions to (<ref>), that is, ones satisfying gcd(x,y,z)=1. In 1998, Beukers <cit.> proved that if 1/n+1/m+1/k>1, then the set of all primitive solutions to (<ref>) can be covered by a finite number of polynomial parametrizations, and there is an algorithm that, given a,b,c,n,m,k with 1/n+1/m+1/k>1, produces such parametrizations. In the case 1/n+1/m+1/k=1, the problem reduces to finding rational points on certain elliptic curves. There is no known algorithm for solving this problem in general, but there are methods that can find rational points on many specific curves, so individual equations (<ref>) with 1/n+1/m+1/k=1 and not-too-large a,b,c can usually be solved by standard methods. If 1/n+1/m+1/k<1, then Darmon and Granville <cit.> proved that, for any specific a,b,c,n,m,k, equation (<ref>) has only a finite number of primitive solutions, which raises the problem of listing these solutions explicitly. The most studied special case is a=b=c=1, so that equation (<ref>) reduces to x^n+y^m=z^k. Many recent papers solve this equation for various triples (n,m,k), see e.g. <cit.> for a survey. A recent example is the resolution of case (2,3,11) in <cit.>, conditional on the generalized Riemann hypothesis. In this work, we are interested in describing all integer solutions to a given three-monomial equation, not only the primitive ones. Our first main result is the general algorithm for solving all three-monomial equations in two variables. Our proof uses deep classical results in the theory of two-variable Diophantine equations, such as Baker's resolution of superelliptic equations <cit.> and Walsh's effective version of Runge's theorem <cit.>, but is otherwise elementary. As a very special case, our algorithm resolves a family of equations x^4+axy+y^3=0, which Masser <cit.> used as an example of a simple-looking family that “does not seem to be effectively solvable”. We also present a method that reduces an arbitrary three-monomial equation, in any number of variables, to a finite number of equations that have three monomials in disjoint variables. In many examples, the resulting equations are either easier than the original one, or are already solved in the literature. As an example, we present a family of equations that our method reduces to FLT. As another example, we give a description of all integer solutions to the equation (<ref>), provided that at least one of the exponents n,m,k is coprime with the other two. In addition, we identify a simple and easy-to-check sufficient condition that guarantees that a given three-monomial equation is solvable by our method, and empirically check that the percentage of three-monomial equations satisfying this condition approaches 100% when the number of variables goes to infinity. This work is organised into four sections. Section <ref> overviews some deep results in the theory of two-variable Diophantine equations, and uses them to prove the existence of a general algorithm for solving all two-variable equations consisting of three monomials. 
Section <ref> studies three-monomial equations in any number of variables. Section <ref> concludes the work. § TWO-VARIABLE EQUATIONS WITH THREE MONOMIALS §.§ Some solvable families of two variable equations A monomial in variables x_1,…,x_n with integer coefficient is any expression of the form ax_1^k_1… x_n^k_n, where a is a non-zero integer and k_1, …, k_n are non-negative integers. A polynomial P(x_1,…,x_n) is the sum of any (finite) number of such monomials. A polynomial Diophantine equation is an equation of the form P(x_1, …, x_n) = 0. A natural way to classify Diophantine equations is by the number of variables. Equations in one variable ∑_i=m^d a_i x^i = 0, where a_m, a_m+1, …, a_d are integers such that a_m≠ 0 and a_d ≠ 0, are easy to solve in integers, because every non-zero solution x must be a divisor of a_m. In fact, (<ref>) is also easy to solve in rationals, because, if x=p/q is a non-zero irreducible fraction, then, by multiplying (<ref>) by q^d and dividing by p^m, we obtain ∑_i=m^d a_i p^i-m q^d-i = 0. The first term of this sum is a_m q^d-m. Because all other terms are divisible by p, and p,q are coprime, this implies that p must be a divisor of a_m. By similar argument, the last term a_d p^d-m is divisible by q, hence q is a divisor of a_d. Because a_m and a_d have a finite number of divisors, there is only a finite number of candidate fractions p/q to check. In fact, there is a much faster algorithm <cit.> for listing all rational solutions to (<ref>) that works in time polynomial in the length of the input. A general algorithm for finding integer or rational solutions for all equations in two variables is currently unknown, and this is a major open problem. We may also classify equations by the number of monomials. There are algorithms for describing all integer solutions to an arbitrary two-monomial equation in any number of variables, see <cit.>, hence the first open case are three-monomial equations. Such equations can be very difficult in general, as witnessed by FLT and its generalizations. In this section, we restrict the number of variables and the number of monomials simultaneously, and present a general algorithm for solving all two-variable three-monomial Diophantine equations. Before proving the main result, we first overview some well-known solvable families of two-variable Diophantine equations. In 1969, Baker <cit.> proved the following theorem. There is an algorithm for listing all integer solutions (there are finitely many of them) of the equation y^m = P(x) = a_n x^n + … + a_1 x + a_0, provided that all a_i are integers, a_n≠ 0, and either (i) m=2 and P(x) possesses at least three simple (possibly complex) zeros, or (ii) m≥ 3, n≥ 3, and P(x) has at least two simple zeros. As a special case, Theorem <ref> implies the existence of an algorithm that, given integers a,b,c and non-negative integers n,m, outputs a description of the solution set to the three-monomial equation a y^m = b x^n + c. Indeed, by swapping x and y if necessary, we may assume that m≤ n. If ab=0 or m=0, then (<ref>) is an equation in one variable. If c=0, then (<ref>) is a two-monomial equation and can be solved by the algorithms in <cit.>. 
Otherwise we may consider |a| cases x=|a|z, x=|a|z+1, …, x=|a|z+(|a|-1), where z is a new integer variable, and in each case, equation (<ref>) is either not solvable because the right-hand side is not divisible by a, or, after cancelling a, reduces to an equation of the form y^m = P(z), where P(z)=(b(|a|z+r)^n+c)/a for some 0≤ r < |a| such that br^n+c is divisible by a. Then if m=1 we can take z=u an arbitrary integer, and express y as y=P(u). If m=n=2, then the solution methods for equation (<ref>) are well-known, see e.g. <cit.>. Finally, if m≥ 2, n≥ 3, and c≠ 0, then polynomial P(z) in (<ref>) has n≥ 3 simple zeros, and equation (<ref>) is covered by Theorem <ref>. The original proof of Theorem <ref> works by establishing a (very large) upper bound on the size of possible solutions, and the algorithm checks all pairs (x,y) up to this bound and is therefore completely impractical. In 1998, Bilu and Hanrot <cit.> developed a more involved but much faster version of this algorithm. So, given any equation of the form (<ref>), one may follow the lines of <cit.> and solve the equation. Of course, this may require a lot of time, effort, and knowledge of non-elementary mathematics, but at least it is possible in finite time. In the special case m=n, (<ref>) belongs to a family of Thue equations for which there are easier and faster algorithms, see e.g. <cit.>. Another important example of a solvable family of two-variable equations are the ones satisfying Runge's condition. This family of equations was first studied by Runge <cit.> in 1887. We say that the equation P(x,y) = ∑_i=0^n ∑_j=0^m a_ij x^i y^j = 0, with integer coefficients a_ij of degree n>0 in x and m>0 in y, satisfies the Runge's condition if P(x,y) is irreducible over ℚ and either (C1) there exists a coefficient a_ij≠ 0 of P such that nj+mi>mn, or (C2) the sum of all monomials a_ijx^iy^j of P for which nj + mi = nm can be decomposed into a product of two non-constant coprime polynomials. In 1887, Runge <cit.> proved that if a polynomial P satisfies this condition, then equation P=0 has at most finitely many integer solutions and outlined (somewhat informally) a method for finding these solutions. In 1992, Walsh <cit.> developed an effective upper bound for the size of possible solutions, which definitely implies the existence of a formal algorithm for listing all the solutions. See <cit.> for a practical version of such algorithm. <cit.> There is an algorithm that, given any polynomial P satisfying the Runge's condition, determines all integer solutions of equation P=0. §.§ An algorithm for solving two-variable three-monomial equations In this section we present a general method for solving all three-monomial equations in two variables. Before this work, it was a belief that some of the equations from this family are difficult. For example, Masser <cit.> presented a one-parameter family x^4+axy+y^3=0 as an example of a family that “does not seem to be effectively solvable”. We start this section by presenting a solution to this particular family of equations. There exists an algorithm that, given any integer a, outputs all integer solutions to equation (<ref>). If a=0, (<ref>) is an easy two-monomial equation, hence we can assume that a ≠ 0. If xy=0, then (<ref>) implies that x=y=0. Let (x,y) be any integer solution to (<ref>) with xy≠ 0, and let d be the integer of the same sign as y such that |d|=gcd(x,y). Then x=dx_1 and y=dy_1 for coprime integers x_1,y_1 such that x_1≠ 0 and y_1>0. 
Substituting this into (<ref>) and cancelling d^2, we obtain the equation d^2 x_1^4 + ax_1y_1 + d y_1^3 = 0, from which it is clear that dy_1^3 is divisible by x_1. Because gcd(x_1,y_1)=1, this implies that d=kx_1 for some non-zero integer k. Then 0=(kx_1)^2 x_1^4 + a x_1 y_1 + (kx_1)y_1^3 = x_1(k^2x_1^5+ay_1+ky_1^3). The last equation implies that k^2x_1^5 is divisible by y_1. Because x_1 and y_1 are coprime, this implies that k^2 = y_1 z for some integer z>0. Let u=gcd(y_1,z)>0. Then integers y_1/u and z/u are positive, coprime, and their product (k/u)^2 is a perfect square, hence y_1/u=v^2 and z/u=w^2 for some non-zero integers v,w, where we may assume that w>0 and v has the same sign as k. Then y_1=uv^2, k=uvw, and (<ref>) implies that 0 = (uvw)^2x_1^5+a(uv^2)+(uvw)(uv^2)^3 = uv^2(uw^2x_1^5 + a + wu^3v^5). Thus uw^2x_1^5 + a + wu^3v^5=0, which implies that uw is a divisor of a. Because a has a finite number of divisors, there are only finitely many pairs of positive integers (u,w) satisfying this condition. For each such pair (u_i,w_i), we can find x_1 and v from the equation u_i w_i ^2 x_1^5 + w_i u_i ^3 v^5= - a. These are the equations of the form (<ref>), and they can be solved by Theorem <ref>. With x_1 and v at hand, we can compute k=u_ivw_i, d=kx_1, y_1=u_i v^2, x=dx_1 and y=dy_1. As an illustration, we have applied the described algorithm to solve equation (<ref>) for 1 ≤ a ≤ 100. All solutions except the trivial (x,y)=(0,0) are listed in Table <ref>. We next move to the analysis of the general equation in 2 variables with 3 monomials, that is, any equation of the form a x^n y^q + b x^k y^l + c x^r y^m = 0, where a,b,c are non-zero integers and n,q,k,l,r,m are non-negative integers. The cases x=0 and y=0 can be checked separately, so we may focus on finding integer solutions such that x≠ 0 and y≠ 0. Then by cancelling the common term x^min{n,k,r}y^min{q,l,m} if necessary, we may assume that nkr = qlm = 0. Without loss of generality, we may assume that r=0. If ql ≠ 0, then m=0, and (<ref>) implies that y is a divisor of c. By checking all possible divisors, we can solve (<ref>) easily. Hence, we may assume that ql=0, and, without loss of generality, q=0. Then equation (<ref>) reduces to a x^n + b x^k y^l + c y^m = 0 The following theorem is the first main result of this work. There is an algorithm that, given any three-monomial equation in 2 variables, describes the set of all its integer solutions. Before proving the theorem, we state and prove several easy lemmas. Every integer solution to equation xy-zt=0 is of the form (x,y,z,t)=(uv,wr,uw,vr) for some integers u,v,w,r. For solutions with x=0 the statement is trivial. Let (x,y,z,t) be any solution with x≠ 0, and let u=gcd(x,z)>0. Then x=uv and z=uw with v,w coprime and v≠ 0. Then (<ref>) implies that uvy=uwt, or vy=wt. Because v and w are coprime, t must be divisible by v, so we can write t=vr for some integer r. Then vy=w(vr), hence y=wr, and (<ref>) follows. Let n,m be non-negative integers that are not both zero, and b be an arbitrary integer. If b is not a multiple of gcd(n,m), then equation n x = m y + b, x,y ≥ 0 has no solutions in non-negative integers x,y. Otherwise the solutions to (<ref>) are given by x = x_0 + m/gcd(n,m) u, y = y_0 + n/gcd(n,m) u, u ≥ 0, where (x_0,y_0) is the solution to (<ref>) with x+y minimal. Note that (x_0,y_0) depends only on n,m,b. If b is not a multiple of d=gcd(n,m), then n x is divisible by d, while m y + b is not, hence (<ref>) has no integer solutions. 
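Although the proof of the lemma is completed below, its content is easy to check computationally. The following Python sketch, which assumes n, m ≥ 1 for simplicity and is offered only as an illustration of the lemma just stated, finds the minimal solution (x_0,y_0) by direct search and generates the arithmetic progression of solutions.

# Hedged computational illustration of the lemma above: n x = m y + b in
# non-negative integers, assuming n, m >= 1.
from math import gcd

def solve_linear(n, m, b, count=4):
    g = gcd(n, m)
    if b % g != 0:
        return []                      # no solutions, as in the lemma
    # minimal x with n*x >= b and n*x == b (mod m); then y = (n*x - b)/m
    x0 = next(x for x in range(max(0, -(-b // n)) + m + 1)
              if n * x >= b and (n * x - b) % m == 0)
    y0 = (n * x0 - b) // m
    return [(x0 + (m // g) * u, y0 + (n // g) * u) for u in range(count)]

print(solve_linear(4, 6, 10))   # [(4, 1), (7, 3), (10, 5), (13, 7)]
print(solve_linear(4, 6, 3))    # [] since gcd(4, 6) = 2 does not divide 3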
Now assume that b is a multiple of d. The corresponding homogeneous equation n x = m y, x,y ≥ 0 has solutions x = m/gcd(n,m) u, y = n/gcd(n,m) u, where u is an arbitrary non-negative integer. If (x,y) is an arbitrary solution to (<ref>), and (x_0,y_0) is the solution to (<ref>) with x+y minimal, then n(x-x_0) = (my+b) - (m y_0+b) = m(y-y_0). Because (x_0,y_0) is the solution with x+y minimal, this implies that x-x_0≥ 0, y-y_0≥ 0, and, by (<ref>), x-x_0 = m/gcd(n,m) u, y-y_0 = n/gcd(n,m) u. for some integer u≥ 0. Let A,B,C be any integers such that A+B+C=0. Let p be any prime, and let A_p, B_p and C_p be the exponents with which p enters the prime factorizations of A,B and C, respectively[In some works, the exponent with which p enters the prime factorization of x is denoted v_p(x). In this work, we prefer a shorter notation.]. Then the smallest two of the integers A_p, B_p, C_p must be equal. Let M=min{A_p, B_p, C_p}, and let M' be the second-smallest among these integers. Then A,B,C are all divisible by p^M, and A/p^M + B/p^M + C/p^M = 0. However, if M<M', then exactly two out of three integers A/p^M, B/p^M, C/p^M are divisible by p, hence their sum is not divisible by p and therefore cannot be 0, a contradiction. Hence, M=M'. Now we are ready to prove Theorem <ref>. of Theorem <ref>. We may assume that the equation is of the form (<ref>) with non-zero integer coefficients a,b,c and non-negative integer exponents n,k,l,m. Equation (<ref>) is trivial if m=n=0, so we will assume that m>0 or n>0. Also, if k=l=0, then equation (<ref>) is of the form (<ref>), hence we may assume that k>0 or l>0. Next, we claim that we may assume that nl+mk ≤ mn. Indeed, we must have either l=0, or k=0, or kl>0. If l=0, then (<ref>) reduces to a x^n + b x^k + c y^m = 0, hence (<ref>) can be ensured by swapping (a,n) with (b,k) if necessary. The case k=0 is similar. Finally, if kl>0 and nl+mk>mn, then substitution X=x^d, Y=y^d where d=gcd(n,m,k,l) reduces (<ref>) to a X^n/d + b X^k/d Y^l/d + c Y^m/d = 0, where n/dl/d + m/dk/d > m/dn/d. Because gcd(n/d,m/d,k/d,l/d)=1, polynomial P(X,Y)=a X^n/d + b X^k/d Y^l/d + c Y^m/d is irreducible by <cit.>, and condition (C1) in Theorem <ref> is satisfied. Hence, by Theorem <ref>, equation P(X,Y)=0 has finitely many integer solutions and there is an algorithm for listing these solutions. Then it is easy to check which of these solutions are perfect d-th powers, and produce a complete list of integer solutions (x,y) to (<ref>). From now on, we will consider equation (<ref>) subject to condition (<ref>). First assume that equality holds in (<ref>), that is, nl+mk=mn, or mk=n(m-l). By solving this as an equation of the form (<ref>) with variables m,k,n,m-l, we conclude that there exists integers u,v,w,r such that (m,k,n,m-l)=(uv,wr,uw,vr), see (<ref>). By replacing u,v,w,r by their absolute values if necessary, we may assume that these integers are non-negative. Now divide both parts of (<ref>) by y^m to get a x^n/y^m + b x^k/y^m-l + c = 0, or equivalently a (x^w/y^v)^u + b (x^w/y^v)^r + c = 0. With a new rational variable t=x^w/y^v, this is an equation at^u+bt^r+c=0 in one variable, whose rational solutions are easy to find, see the discussion after equation (<ref>) in Section <ref>. For each rational solution t=p_i/q_i, we can find x and y from an easy two-monomial equation q_i x^w = p_i y^v. From now on, we will assume that nl+mk < mn. Let (x,y) be any integer solution to (<ref>). Because solutions with xy=0 can be described easily, we may and will assume that xy≠ 0. 
Let p be any prime, and let x_p and y_p be the exponents with which p enters the prime factorizations of x and y, respectively. First assume that p is not a divisor of abc. Then p enters the prime factorization of the monomials a x^n, b x^k y^l and c y^m in (<ref>) with exponents nx_p, kx_p+ly_p and my_p, respectively. Let M=min{nx_p,kx_p+ly_p,my_p}, and M' be the second-smallest among these integers. By Lemma <ref>, M=M'. If kx_p+ly_p > M, then we must have kx_p+ly_p > nx_p = my_p. The last equality implies that x_p=um, y_p=un for some rational u≥ 0. But then k(um) + l(un) > n(um) ⇔ u(km + ln) > u(nm), which is a contradiction with (<ref>). Hence, kx_p+ly_p=M. Then, for every prime p, we have one of the following two cases: either kx_p+ly_p = nx_p ≤ my_p or kx_p+ly_p = my_p < nx_p. In the first case, we have ly_p = (n-k)x_p. By (<ref>), n-k>0, hence, by (<ref>), x_p = l/gcd(l,n-k) u_p, y_p = n-k/gcd(l,n-k) u_p for some integer u_p ≥ 0 that depends on p. Similarly, in case (<ref>), we have kx_p = (m-l)y_p, which by (<ref>) implies that x_p = m-l/gcd(k,m-l) v_p, y_p = k/gcd(k,m-l) v_p, for some integer v_p ≥ 0. With notations l'=l/gcd(l,n-k) ; n'=n-k/gcd(l,n-k) ; m'=m-l/gcd(k,m-l) ; k'= k/gcd(k,m-l) , formulas (<ref>) and (<ref>) simplify to x_p=l'u_p, y_p=n'u_p and x_p=m'v_p, y_p=k'v_p, respectively. Now consider primes p that are the factors of abc. Obviously, there are only finitely many such primes. Let x_p,y_p,a_p,b_p,c_p be the exponents with which p enters the prime factorizations of x,y,a,b,c, respectively. Then p enters the prime factorization of the monomials ax^n, bx^k y^l and cy^m in (<ref>) with exponents a_p+nx_p, b_p+kx_p+ly_p and c_p+my_p, respectively. Let M=min{a_p+nx_p,b_p+kx_p+ly_p,c_p+my_p}, and M' be the second-smallest among these integers. By Lemma <ref>, M=M'. First assume that b_p+kx_p+ly_p > M, then we must have b_p+kx_p+ly_p > a_p+nx_p = c_p+my_p. By Lemma <ref>, the last equality implies the existence of a pair of integers (x_p^0,y_p^0) depending only on p,m,n,a_p and c_p such that x_p = x^0_p + m/(n,m)u_p, y_p = y^0_p + n/(n,m)u_p for some integer u_p ≥ 0. But then the inequality in (<ref>) b_p + k(x^0_p + m/(n,m)u_p) + l(y^0_p + n/(n,m)u_p) > a_p + n(x^0_p + m/(n,m)u_p) is equivalent to b_p + k x^0_p + l y^0_p - a_p - n x^0_p > nm-km-ln/(n,m)u_p Let h_p=b_p + k x^0_p + l y^0_p - a_p - n x^0_p be the left-hand side of the last inequality. Notice that h_p does not depend on x_p,y_p. Because nm-km-ln>0 by (<ref>), we have 0 ≤ u_p < h_p (n,m)/nm-km-ln. Because every bounded interval contains only a finite number of integers, there is only a finite number of possible integer values for u_p, hence a finite number of possibilities for (x_p,y_p). Now consider the case b_p+kx_p+ly_p=M. Because M=M', we must have either b_p+kx_p+ly_p = a_p+nx_p ≤ c_p+my_p or b_p+kx_p+ly_p = c_p+my_p < a_p+nx_p. In the first case, b_p + l y_p = a_p + (n-k)x_p, and, by Lemma <ref>, this implies the existence of pair of integers (x_p^0,y_p^0) depending only on p,l,n-k,a_p and b_p such that x_p = x^0_p + l' u_p, y_p = y^0_p + n' u_p for some integer u_p ≥ 0, where l' and n' are defined in (<ref>). Similarly, in the second case, there is a pair of integers (x_p^0,y_p^0) depending only on p,k,m-l,b_p and c_p such that x_p = x^0_p + m' v_p, y_p = y^0_p + k' v_p for some integer v_p ≥ 0, where m' and k' are defined in (<ref>). 
The argument above implies that, given any non-zero integer solution (x,y) to equation (<ref>) satisfying (<ref>), we can partition the set P of all primes into five disjoint sets P=⋃_i=1^5 P_i, where * P_1 is the set of primes that are not divisors of abc and satisfy (<ref>); * P_2 is the set of primes that are not divisors of abc and satisfy (<ref>); * P_3 is the set of prime divisors of abc satisfying (<ref>); * P_4 is the set of prime divisors of abc satisfying (<ref>); * P_5 is the set of prime divisors of abc satisfying (<ref>). Let us define u = ∏_p ∈ P_1 p^u_p·∏_p ∈ P_4 p^u_p, v = ∏_p ∈ P_2 p^v_p·∏_p ∈ P_5 p^v_p. Then ∏_p ∈ P_1 p^x_p·∏_p ∈ P_4 p^x_p = ∏_p ∈ P_1 p^l' u_p·∏_p ∈ P_4 p^x^0_p·∏_p ∈ P_4 p^l' u_p = (∏_p ∈ P_4 p^x^0_p)· u^l', and similarly, ∏_p ∈ P_2 p^x_p·∏_p ∈ P_5 p^x_p = (∏_p ∈ P_5 p^x^0_p)· v^m'. Hence, x = (-1)^e_x∏_p ∈ P_1 p^x_p·∏_p ∈ P_2 p^x_p∏_p ∈ P_3 p^x_p·∏_p ∈ P_4 p^x_p∏_p ∈ P_5 p^x_p = = ((-1)^e_x∏_p ∈ P_3 p^x_p∏_p ∈ P_4 p^x^0_p∏_p ∈ P_5 p^x^0_p) u^l' v^m'. where e_x is equal to either 0 or 1 depending on the sign of x. By similar argument, y = ((-1)^e_y∏_p ∈ P_3 p^y_p∏_p ∈ P_4 p^y^0_p∏_p ∈ P_5 p^y^0_p) u^n' v^k', where e_y is equal to either 0 or 1. Now, for each given equation (<ref>), there is only a finite number of prime divisors of abc, hence there is a finite number of possibilities as to how these primes may split into sets P_3, P_4 and P_5. For each given split, integers ∏_p ∈ P_4 p^x^0_p, ∏_p ∈ P_5 p^x^0_p, ∏_p ∈ P_4 p^y^0_p and ∏_p ∈ P_5 p^y^0_p are uniquely determined. Further, for every p ∈ P_3 there is only a finite number of possibilities for (x_p,y_p), hence we have a finite number of possible values for ∏_p ∈ P_3 p^x_p and ∏_p ∈ P_3 p^y_p. Finally, e_x and e_y may assume only two values each. In conclusion, we have proved that there is a finite number of pairs (x_i,y_i), i=1,…,N such that every solution to (<ref>) with xy≠ 0 satisfies x = x_i u^l' v^m', y = y_i u^n' v^k' for some i=1,…, N and some integers u and v. Substituting this into (<ref>), we obtain a x_i^n u^nl' v^nm' + b x_i^k y_i^l u^kl' v^km' u^ln' v^lk' + c y_i^m u^mn' v^mk' = 0, or, after dividing by u^kl'+ln'v^km'+lk', a x_i^n u^nl'-kl'-ln'v^nm'-km'-lk' + b x_i^k y_i^l + c y_i^m u^mn'-kl'-ln'v^mk'-km'-lk' = 0. Now, we have nl'-kl'-ln' = (n-k)l/gcd(l,n-k)-ln-k/gcd(l,n-k) = 0, nm'-km'-lk' = (n-k)m-l/gcd(k,m-l)-lk/gcd(k,m-l) = nm-nl-km/gcd(k,m-l) > 0, where the last inequality follows from (<ref>). Similarly, mn'-kl'-ln'> 0 and mk'-km'-lk'= 0. Hence, (<ref>) reduces to a x_i^n v^nm'-km'-lk' + b x_i^k y_i^l + c y_i^m u^mn'-kl'-ln' = 0, which is an equation of the form (<ref>) in variables u,v. In conclusion, we have reduced equation (<ref>) to a finite number of equations of the form (<ref>) that can be solved by Theorem <ref>. Let us now illustrate the method by solving equation x^4+xy+2y^3=0. We have an obvious solution (x,y)=(0,0) and otherwise x≠ 0 and y≠ 0. For this equation, a=b=1 and c=2 in (<ref>), hence the only prime divisor of abc is p=2. For any p≠ 2, let x_p and y_p be the exponents with which p enters the prime factorizations of x and y, respectively. Then p enters the prime factorization of the monomials x^4, xy and 2y^3 with exponents 4x_p, x_p+y_p and 3y_p, respectively. Let M=min{4x_p,x_p+y_p,3y_p}, and M' be the second-smallest among these integers. Then, by Lemma <ref>, we must have M=M'. 
Because the case x_p+y_p > 4x_p = 3y_p is clearly impossible for non-negative x_p,y_p, we must have either x_p+y_p =4x_p or x_p+y_p=3y_p, or, equivalently, either y_p=3x_p or x_p=2y_p. Let P_1 and P_2 be the sets of odd primes for which the first and the second equalities hold, respectively, and define non-zero integers u = (-1)^e_x∏_p ∈ P_1 p^x_p, v = (-1)^e_x+e_y∏_p ∈ P_2 p^y_p, where e_x=0 if x>0 and e_x=1 if x<0, and e_y is defined similarly. Then x = (-1)^e_x2^x_2∏_p ∈ P_1 p^x_p∏_p ∈ P_2 p^x_p = (-1)^e_x 2^x_2∏_p ∈ P_1 p^x_p∏_p ∈ P_2 p^2 y_p = 2^x_2 u v^2, and similarly y = (-1)^e_y2^y_2∏_p ∈ P_1 p^y_p∏_p ∈ P_2 p^y_p = (-1)^e_y 2^y_2∏_p ∈ P_1 p^3 x_p∏_p ∈ P_2 p^y_p = 2^y_2 u^3 v, where x_2 and y_2 are the exponents with which 2 enters the prime factorizations of x and y, respectively. Then 2 enters the prime factorization of the monomials x^4, xy and 2y^3 with exponents 4x_2, x_2+y_2 and 3y_2+1, respectively. Let M_1=min{4x_2,x_2+y_2,3y_2+1}, and M_1' be the second-smallest among these integers. Then we must have M_1=M_1'. The case x_2+y_2 ≥ 4x_2 = 3y_2+1 is clearly impossible for non-negative x_2,y_2, hence we must have either (i) x_2+y_2 = 4x_2 or (ii) x_2+y_2 = 3y_2 +1. Let us consider these cases separately. In case (i), y_2=3x_2, so that x = 2^x_2 u v^2 = Uv^2, y=2^y_2 u^3 v = (2^x_2)^3 u^3 v = U^3 v, where U=2^x_2 u ≠ 0. Substituting this into equation (<ref>), we obtain (Uv^2)^4+(Uv^2)(U^3 v)+2(U^3 v)^3=0, or after cancelling U^4v^3, v^5 + 2U^5 = -1. This is a Thue equation, and such equations can be easily solved in many computer algebra systems including Maple and Mathematica. The solutions to this equation are (v,U)=(-1,0) and (1,-1). The first solution is impossible because U≠ 0 by definition, while the second one results in the solution (x,y)=(Uv^2,U^3v)=(-1,-1) to the original equation (<ref>). In case (ii), x_2= 2y_2+1, hence x = 2^x_2 u v^2 = 2(2^y_2)^2 u v^2 = 2uV^2, y = 2^y_2 u^3 v = u^3 V, where V=2^y_2 v ≠ 0. Substituting this into equation (<ref>), we obtain (2uV^2)^4+(2uV^2)(u^3 V)+2(u^3 V)^3=0, or after cancelling 2u^4V^3, 8V^5 + u^5 = -1. This is a Thue equation, whose only solution is (V,u)=(0,-1), a contradiction with V≠ 0. In summary, the only integer solutions to equation (<ref>) are (x,y)=(-1,-1) and (0,0). The algorithm in the proof of Theorem <ref> significantly simplifies in the case a=b=c=1 in (<ref>), that is, for equation x^n + x^k y^l + y^m = 0, because in this case, the sets P_3, P_4 and P_5 in the algorithm are empty sets. As an illustration, Table <ref> lists all non-trivial solutions to all quartic and quintic equations of the form (<ref>) satisfying the condition (<ref>). § GENERAL THREE-MONOMIAL EQUATIONS §.§ Format in which we accept the answers When studying three-monomial equations in n≥ 3 variables, we first need to agree in what form we accept the answers. For example, if we insist on polynomial parametrization of all integer solutions, then the equation xy-zt=1 had been open for over 70 years, until Vaserstein <cit.> proved the existence of polynomials X,Y,Z,T in 46 variables u=(u_1,…,u_46) with integer coefficients, such that (x,y,z,t)∈ℤ^4 is a solution to (<ref>) if and only if x=X(u), y=Y(u), z=Z(u) and t=T(u) for some u ∈ℤ^46. We do not know if there exist polynomial parametrizations of all integer solutions to other easy-looking three-monomial equations, like, for example, x^2y=z^2+1 or yzt=x^2+1. 
On the other hand, all these equations become trivial if we accept answers in which parameters can be restricted to be divisors of polynomial or rational expressions involving other parameters. Specifically, for any integers m and k≥ 0, let D_k(m) be the set of all integers z such that z^k is a divisor of m. We remark that D_0(m) is the set of all integers, while D_1(m) is the set of all divisors of m. This notation allows us to describe the sets of solutions to equations (<ref>)-(<ref>). For example, the set of all integer solutions to equation (<ref>) can be described as (x,y,z,t) = (u_1, u_2, u_3, u_1^2+1/u_2u_3), u_1 ∈ℤ, u_2 ∈ D_1(u_1^2+1), u_3 ∈ D_1(u_1^2+1/u_2). More generally, we can easily describe the solution sets of all equations that, possibly after permutation of variables, can be written in the form P(x_1^kx_2, x_3, …, x_n) = 0, where k≥ 1 is an integer and P is a polynomial with integer coefficients such that we know how to describe the integer solution set S of the equation P(y,x_2,…,x_n)=0. Indeed, integer solutions to (<ref>) with x_1≠ 0 are (x_1, x_2, …, x_n) = (u_1, u_2/u_1^k, u_3, …, u_n ), (u_2,u_3,…,u_n) ∈ S, u_1 ∈ D_k(u_2), while the solutions with x_1=0 are such that (0,x_3,…,x_n)∈ S and x_2 arbitrary. We remark that equations (<ref>)-(<ref>) are all of the form (<ref>). §.§ A direct formula for solving a large class of three-monomial equations A general three-monomial Diophantine equation can be written in the form a ∏_i=1^n x_i^α_i + b ∏_i=1^n x_i^β_i = c ∏_i=1^n x_i^γ_i, where x_1,…,x_n are variables, α_i, β_i and γ_i are non-negative integers, and a,b,c are non-zero integer coefficients. We call a solution x=(x_1,…,x_n) to (<ref>) trivial if ∏_i=1^n x_i = 0 and non-trivial otherwise. The trivial solutions are easy to describe, so we may concentrate on finding the non-trivial ones. We start by identifying a large class of three-monomial equations whose set of non-trivial integer solutions can be described by an easy and direct formula. Assume that a Diophantine equation can be written in the form (<ref>) such that both systems ∑_i=1^n α_i z_i = ∑_i=1^n β_i z_i = ∑_i=1^n γ_i z_i - 1, z_i ≥ 0, i=1,…, n, and ∑_i=1^n α_i t_i = ∑_i=1^n β_i t_i = ∑_i=1^n γ_i t_i + 1, t_i ≥ 0, i=1,…, n, are solvable in non-negative integers z_i and t_i. Then all non-trivial integer solutions to (<ref>) are given by x_i = (a ∏_j=1^n u_j^α_j + b ∏_j=1^n u_j^β_j)^z_i(c ∏_j=1^n u_j^γ_j)^t_i w^-z_i-t_i· u_i, i=1,…, n, where u_i ∈ℤ, i=1,…, n, w ∈ D_1(a ∏_j=1^n u_j^α_j + b ∏_j=1^n u_j^β_j) ∩ D_1(c ∏_j=1^n u_j^γ_j). Let us check that any (x_1,…, x_n) satisfying (<ref>) is an integer solution to (<ref>). The condition on w ensures that all x_i in (<ref>) are integers. Further, a ∏_i=1^n x_i^α_i = (a ∏_i=1^n u_i^α_i + b ∏_i=1^n u_i^β_i)^∑_i=1^n α_i z_i(c ∏_i=1^n u_i^γ_i)^∑_i=1^n α_i t_i w^-∑_i=1^n α_i (z_i+t_i)· a ∏_i=1^n u_i^α_i, and b ∏_i=1^n x_i^β_i = (a ∏_i=1^n u_i^α_i + b ∏_i=1^n u_i^β_i)^∑_i=1^n β_i z_i(c ∏_i=1^n u_i^γ_i)^∑_i=1^n β_i t_i w^-∑_i=1^n β_i (z_i+t_i)· b ∏_i=1^n u_i^β_i, The first equalities in (<ref>) and (<ref>) imply that the first three factors in these expressions are the same, hence a ∏_i=1^n x_i^α_i + b ∏_i=1^n x_i^β_i = (a ∏_i=1^n u_i^α_i + b ∏_i=1^n u_i^β_i)^1+∑_i=1^n α_i z_i(c ∏_i=1^n u_i^γ_i)^∑_i=1^n α_i t_i w^-∑_i=1^n α_i (z_i+t_i). On the other hand c ∏_i=1^n x_i^γ_i = (a ∏_i=1^n u_i^α_i + b ∏_i=1^n u_i^β_i)^∑_i=1^n γ_i z_i(c ∏_i=1^n u_i^γ_i)^∑_i=1^n γ_i t_i w^-∑_i=1^n γ_i (z_i+t_i)· c ∏_i=1^n u_i^γ_i. 
The second equalities in (<ref>) and (<ref>) imply that the right-hand sides of the last two expressions are the same, and (<ref>) follows. Conversely, let us prove that all non-trivial integer solutions to (<ref>) are covered by (<ref>) for some values of the parameters. Indeed, for any given solution (x_1, …, x_n) let us take u_i=x_i, i=1,…, n, and w=c ∏_i=1^n x_i^γ_i. Then the right-hand side of (<ref>) becomes (a ∏_j=1^n x_j^α_j + b ∏_j=1^n x_j^β_j)^z_i(c ∏_j=1^n x_j^γ_j)^t_i w^-z_i-t_i· x_i = (c ∏_j=1^n x_j^γ_j)^z_i(c ∏_j=1^n x_j^γ_j)^t_i/w^z_i+t_i· x_i = x_i, and (<ref>) follows. We remark that the same method can also be used to solve the general two-monomial equation, that is, the equation of the form a ∏_i=1^n x_i^α_i = b ∏_i=1^n x_i^γ_i, where x_1,…,x_n are variables, α_i and γ_i are non-negative integers, and a,b are non-zero integer coefficients. As above, we will look at non-trivial solutions, that is, ones with ∏_i=1^n x_i ≠ 0. We may assume that min{α_i,γ_i}=0 for all i, otherwise the multiplier x_i^min{α_i,γ_i} can be cancelled out. Equation (<ref>) can be rewritten as ∏_i=1^n x_i^e_i = r, where e_i=α_i-γ_i, i=1,…,n are integers and r=b/a is a rational number. If e_i≥ 0 for all i, then (<ref>) may have integer solutions only if r is an integer. Because x_i^e_i are divisors of r for all i, all solutions can be easily found. The case e_i≤ 0 for all i is similarly easy, so from now on we assume that not all e_i are of the same sign. If d=gcd(e_1,…,e_n), then (<ref>) may be solvable in integers only if √(r) is a rational number, say B/A. In this case, raising both parts of (<ref>) to the power 1/d results in[If d is even, then L=R is equivalent to L^1/d=± R^1/d, hence (<ref>) reduces to two equations of the form (<ref>).] A ∏_i=1^n x_i^α'_i = B ∏_i=1^n x_i^γ'_i, where α'_i=α_i/d and γ'_i=γ_i/d, i=1,…, n are non-negative integers. The two-monomial versions of systems (<ref>) and (<ref>) are ∑_i=1^n α'_i z_i = ∑_i=1^n γ'_i z_i - 1, z_i ≥ 0, i=1,…, n, and ∑_i=1^n α'_i t_i = ∑_i=1^n γ'_i t_i + 1, t_i ≥ 0, i=1,…, n, respectively. Because integers α'_i - γ'_i, i=1,…, n are not all of the same sign and do not have a common factor, both these systems are solvable in non-negative integers. Then, by analogy with (<ref>), all non-trivial solutions to (<ref>) are given by x_i = (A ∏_j=1^n u_j^α'_j)^z_i(B ∏_j=1^n u_j^γ'_j)^t_i w^-z_i-t_i· u_i, i=1,…, n, where u_i ∈ℤ, i=1,…, n, w ∈ D_1(A ∏_j=1^n u_j^α'_j) ∩ D_1(B ∏_j=1^n u_j^γ'_j). In particular, we have the following result, which we will use later. Let e_1, …, e_k be non-zero integers, not all of the same sign, and let r be a rational number. Then equation (<ref>) is solvable in integers x_1, …, x_k if and only if √(r) is a rational number, where d=gcd(e_1, …, e_k). §.§ Reduction to equations with independent monomials In this section we present a general method that reduces any three-monomial equation to a finite number of three-monomial equations with independent monomials, that is, equations of the form a ∏_i=1^n_1 x_i^α_i + b ∏_i=1^n_2 y_i^β_i + c ∏_i=1^n_3 z_i^γ_i = 0, where a,b,c are integers, n_1,n_2,n_3, α_i, β_i and γ_i are non-negative integers, and x_i,y_i and z_i are (different) variables. We remark that if an equation with independent monomials has a monomial of degree 1, then it is an equation of the form a x_i + P(x_1,…,x_i-1,x_i+1,…,x_n) = 0, a≠ 0, which is trivially solvable. Indeed, if |a|=1, then we can take x_1,…,x_i-1,x_i+1,…,x_n to be arbitrary integers and express x_i as x_i=-P(x_1,…,x_i-1,x_i+1,…,x_n)/a. 
If |a|≥ 2, (<ref>) implies that P(x_1,…,x_i-1,x_i+1,…,x_n) must be divisible by |a|. So, let us solve the equation P(x_1,…,x_i-1,x_i+1,…,x_n)=0 modulo |a|, and for each of the (finitely many) solutions r_1,…,r_i-1,r_i+1,…,r_n do the substitution x_j = |a|y_j + r_j, j=1,…,i-1,i+1,…,n, where y_j are new integer variables. Then (<ref>) reduces to a x_i = -P(|a|y_1+r_1,…,|a|y_i-1+r_i-1,|a|y_i+1+r_i+1,…,|a|y_n+r_n), where all the coefficients of the polynomial on the right-hand side are divisible by |a|. We can then cancel |a| and reduce each of these equations to an equation of the form (<ref>) with |a|=1. An integer solution to (<ref>) will be called primitive if gcd(x_i,y_j)=gcd(x_i,z_k)=gcd(y_j,z_k)=1, i=1,…, n_1, j=1,…,n_2, k=1,…,n_3. We remark that we may have, for example, gcd(x_i,x_j)>1 for some i≠ j. There is an algorithm that, given an arbitrary 3-monomial equation, reduces the problem of describing all integer solutions of this equation to the same problem for a finite number of three-monomial equations with independent monomials. Moreover, it is sufficient to find only primitive solutions to the resulting equations. We will need the following lemma. For any non-negative integers e_1, …, e_N and non-zero integer q, there is a finite set M of N-tuples d=(d_1, …, d_N) of positive integers such that (i) ∏_k=1^N d_k^e_k is divisible by q for every d ∈ M, and (ii) for any non-zero integers (U_1, …, U_N), ∏_k=1^N U_k^e_k is divisible by q if and only if there exists d ∈ M such that U_k/d_k are integers for all k=1,…, N. In particular, if N=1, then M can be chosen to be a one-element set. Let S_q be the set of all n-tuples of non-zero integers U=(U_1, …, U_N) such that ∏_k=1^N U_k^e_k is divisible by q. We say that U=(U_1, …, U_N)∈ S_q dominates V=(V_1, …, V_N)∈ S_q if for every k=1,…,N ratio U_k/V_k is an integer. We say that U∈ S_q is minimal if U dominates any V∈ S_q only if |V_k|=U_k for all k. Let M be the set of all minimal elements of S_q. Because M⊂ S_q, property (i) follows. Further, every U ∈ S_q dominates some d=(d_1,…,d_N) ∈ M, hence U_k/d_k are integers for all k=1,…, N. Conversely, if for some d∈ M ratios U_k/d_k are integers for all k=1,…, N, then ∏_k=1^N U_k^e_k = ∏_k=1^N (U_k/d_k)^e_k∏_k=1^N d_k^e_k. Because ∏_k=1^N d_k^e_k is divisible by q, so is ∏_k=1^N U_k^e_k. Thus, U ∈ S_q, and (ii) follows. It is left to prove that M is a finite set. We claim that every U∈ M satisfies the conditions (a) U_k = 1 whenever e_k=0 and (b) ∏_k=1^N U_k^e_k≤ q^1+e_M, where e_M=max{e_1, …, e_N}. Indeed, if e_k=0 then U dominates V=(U_1, …, U_k-1, 1, U_k+1, …, U_N) ∈ S_q, hence, by the definition of minimality, we must have U_k=|1|=1, and (a) follows. Let us now check (b). For any prime p let E_p and q_p be the exponents with which p enters the prime factorizations of ∏_k=1^N U_k^e_k and q, respectively. If (b) fails, then there must exist a prime p such that E_p > (1+e_M)q_p. In particular, E_p>0, hence there exists i such that U_i is divisible by p, which implies that E_p ≥ e_i. Further, because U∈ S_q, ratio R=(1/q)∏_k=1^N U_k^e_k is an integer. Prime p enters the prime factorization of R in the exponent E_p - q_p. If q_p=0, then E_p-q_p = E_p ≥ e_i. If q_p≥ 1, then E_p - q_p > (1+e_M)q_p - q_p = e_M q_p ≥ e_M ≥ e_i. Hence, in any case E_p-q_p≥ e_i, which implies that R/p^e_i is an integer. But R/p^e_i = (1/q)∏_k=1^N_1 U_k^e_k/p^e_i = (1/q)(U_i/p)^e_i∏_k≠ i U_k^e_k, hence (U_1, …, U_i-1, U_i/p, U_i+1, …, U_N_1) ∈ S_q, which is a contradiction with the minimality of U. 
Hence, (<ref>) holds for any minimal U ∈ S_q. However, there are only finitely many n-tuples of positive integers satisfying (<ref>), hence the set M of minimal U ∈ S_q is a finite set. Finally, let N=1, and let q = ±∏_p ∈ P_q p^q_p be the prime factorization of q. Then U_1^e_1 is divisible by q if and only if U_1^e_1 is divisible by p^q_p for every p∈ P_q. Equivalently, U_1 is divisible by p^q^*_p for every p∈ P_q, where q^*_p is the smallest integer satisfying e_1 q^*_p ≥ q_p. Then the statement of the Proposition is true with M={q^*}, where q^* = ∏_p ∈ P_q p^q^*_p. Now we are ready to prove Theorem <ref>. of Theorem <ref>. A general three-monomial equation is an equation of the form a ∏_i=1^n x_i^α_i + b ∏_i=1^n x_i^β_i + c ∏_i=1^n x_i^γ_i = 0, where α_i, β_i, γ_i are non-negative integers, and a,b,c are non-zero integers. Let (x_1,…,x_n) be any non-trivial solution to (<ref>). For any prime p, let a_p, b_p, c_p, z_1p, …, z_np be the exponents with which p enters the prime factorizations of a,b,c,x_1,…,x_n, respectively. Then p enters the prime factorizations of the monomials of (<ref>) with the exponents a_p + ∑_i=1^n α_i z_ip, b_p + ∑_i=1^n β_i z_ip, c_p + ∑_i=1^n γ_i z_ip, respectively. By Lemma <ref>, we must have either a_p + ∑_i=1^n α_i z_ip = b_p + ∑_i=1^n β_i z_ip≤ c_p + ∑_i=1^n γ_i z_ip, or a_p + ∑_i=1^n α_i z_ip = c_p + ∑_i=1^n γ_i z_ip < b_p + ∑_i=1^n β_i z_ip, or b_p + ∑_i=1^n β_i z_ip = c_p + ∑_i=1^n γ_i z_ip < a_p + ∑_i=1^n α_i z_ip. Let P_1, P_2, P_3 be the sets of primes satisfying (<ref>), (<ref>) and (<ref>), respectively. Note that P_1, P_2, P_3 form a partition of the set P of all primes. Let A be the set of prime divisors of abc. For every p∉ A, a_p=b_p=c_p=0, and (<ref>)-(<ref>) simplify to ∑_i=1^n α_i z_ip = ∑_i=1^n β_i z_ip≤∑_i=1^n γ_i z_ip, ∑_i=1^n α_i z_ip = ∑_i=1^n γ_i z_ip < ∑_i=1^n β_i z_ip, ∑_i=1^n β_i z_ip = ∑_i=1^n γ_i z_ip < ∑_i=1^n α_i z_ip. The equations in (<ref>)-(<ref>) are homogeneous linear equations in non-negative integer variables. The study of such equations goes back to at least 1903 in the paper of Elliott <cit.>. A cornerstone concept of this theory is the concept of minimal solutions. For any x=(x_1,…,x_n) and y=(y_1,…,y_n), we write x<y (and say that y dominates x) if x_i≤ y_i for i=1,…,n with at least one inequality being strict. Any homogeneous linear equation has a solution (z_1,…,z_n)=(0,…,0) which we call trivial, and all other solutions in non-negative integers are called non-trivial. A non-trivial solution z=(z_1,…,z_n) is called minimal if there is no other non-trivial solution z'=(z'_1,…,z'_n) such that z' < z. It is well-known <cit.>, that any homogeneous linear equation has a finite number of minimal solutions, and there are algorithms for listing them all. Let (e_k1, …, e_kn), k=1,…, N_1, (f_k1, …, f_kn), k=1,…, N_2, and (g_k1, …, g_kn), k=1,…, N_3, be the complete lists of minimal solutions to the equations in (<ref>), (<ref>) and (<ref>), respectively. Then all non-negative integer solutions to the equation in (<ref>) are given by z_ip = ∑_k=1^N_1 u_kp e_ki, i=1,2,…, n, p ∈ P_1, for non-negative integers u_1p, …, u_N_1p, see e.g. <cit.>. The non-negative integer solutions to the equation in (<ref>) and (<ref>) can be written similarly. Let us now return to the equations in (<ref>)-(<ref>). If a_p≠ b_p, then the equation in (<ref>) is an inhomogeneous linear equation in non-negative integers. As in the homogeneous case, a solution to such equation is called minimal if it does not dominate any other solution. 
The set of minimal solutions is finite and can be computed <cit.>. Let S_p^1 be the set of minimal solutions to the equation in (<ref>) if a_p≠ b_p, and S_p^1={0} otherwise. Similarly, let S_p^2 and S_p^3 be the sets of minimal solutions to the equations in (<ref>) and (<ref>), respectively, provided that they are inhomogeneous, and S_p^2={0} and S_p^3={0}, respectively, in the homogeneous case. Then the general solutions to the equations in (<ref>), (<ref>) and (<ref>), are given by z_ip = z_ip^0+∑_k=1^N_1 u_kp e_ki, i=1,2,…, n, z_p^0=(z_1p^0, …, z_np^0) ∈ S_p^1, p ∈ P_1, z_ip = z_ip^0+∑_k=1^N_2 u_kp f_ki, i=1,2,…, n, z_p^0=(z_1p^0, …, z_np^0) ∈ S_p^2, p ∈ P_2, and z_ip = z_ip^0+∑_k=1^N_3 u_kp g_ki, i=1,2,…, n, z_p^0=(z_1p^0, …, z_np^0) ∈ S_p^3, p ∈ P_3, respectively, where u_kp are arbitrary non-negative integers, see <cit.>. Substituting prime factorizations |a| = ∏_p ∈ P p^a_p, |b| = ∏_p ∈ P p^b_p, |c| = ∏_p ∈ P p^c_p, |x_i| = ∏_p ∈ P p^z_ip, i=1,…, n into (<ref>) results in ±∏_p ∈ P p^a_p+∑_i=1^n α_i z_ip±∏_p ∈ P p^b_p+∑_i=1^n β_i z_ip±∏_p ∈ P p^c_p+∑_i=1^n γ_i z_ip = 0, for some combination of signs. For every p ∈ P_1, (<ref>) implies that p^a_p+∑_i=1^n α_i z_ip = p^b_p+∑_i=1^n β_i z_ip is a common factor of all three monomials in this equation. Similarly, p^a_p+∑_i=1^n α_i z_ip=p^c_p+∑_i=1^n γ_i z_ip is a common factor for every p ∈ P_2 by (<ref>), while p^b_p+∑_i=1^n β_i z_ip=p^c_p+∑_i=1^n γ_i z_ip is a common factor for every p ∈ P_3 by (<ref>). Cancelling all these common factors results in ±∏_p ∈ P_3 p^a_p-b_p+∑_i=1^n (α_i-β_i) z_ip±∏_p ∈ P_2 p^b_p-c_p+∑_i=1^n (β_i-γ_i) z_ip±∏_p ∈ P_1 p^c_p-a_p+∑_i=1^n (γ_i-α_i) z_ip = 0. Inequalities in (<ref>)-(<ref>) imply that all three terms in the last expression are integers. Next, substituting expressions (<ref>)-(<ref>) for z_ip results in A ∏_p ∈ P_3 p^∑_i=1^n (α_i-β_i)∑_k=1^N_3 u_kp g_ki + B ∏_p ∈ P_2 p^∑_i=1^n (β_i-γ_i)∑_k=1^N_2 u_kp f_ki + C ∏_p ∈ P_1 p^∑_i=1^n (γ_i-α_i)∑_k=1^N_1 u_kp e_ki = 0, where A = ±∏_p ∈ P_3 p^a_p-b_p+∑_i=1^n (α_i-β_i) z_ip^0, B = ±∏_p ∈ P_2 p^b_p-c_p+∑_i=1^n (β_i-γ_i) z_ip^0, C = ±∏_p ∈ P_1 p^c_p-a_p+∑_i=1^n (γ_i-α_i) z_ip^0. The last equation can be written as A ∏_p ∈ P_3 p^∑_k=1^N_3u_kpg_k + B ∏_p ∈ P_2 p^∑_k=1^N_2u_kpf_k + C ∏_p ∈ P_1 p^∑_k=1^N_1u_kpe_k = 0, where g_k = ∑_i=1^n (α_i-β_i) g_ki, f_k = ∑_i=1^n (β_i-γ_i) f_ki, e_k = ∑_i=1^n (γ_i-α_i) e_ki, or equivalently as A ∏_k=1^N_3 W_k^g_k + B ∏_k=1^N_2 V_k^f_k + C ∏_k=1^N_1 U_k^e_k = 0, where U_k = ∏_p ∈ P_1 p^u_kp, V_k=∏_p ∈ P_2 p^u_kp, W_k=∏_p ∈ P_3 p^u_kp. For every original equation (<ref>), there are only finitely many possible values for coefficients (A,B,C) defined in (<ref>), and all such possible triples can be explicitly listed. Indeed, for every prime p that is not a divisor of abc, we have a_p=b_p=c_p=0 and z_ip^0=0 for i=1,…,n, hence all such primes contribute factor 1 to the products in (<ref>). Hence, the products are actually over prime factors of abc. There are finitely many such primes, and therefore finitely many ways how they can split into P_1, P_2 and P_3. Also, for each p there are only finitely many minimal solutions (z_1p^0, …, z_np^0) to the equations in (<ref>)-(<ref>). Finally, there are 8 combinations of signs. Hence, there are finitely many possible triples of (A,B,C), and we have reduced the original equation (<ref>) to finitely many equations of the form (<ref>) with integer variables W_k, V_k and U_k. 
We remark that exponents g_k,f_k,e_k defined in (<ref>) may be positive or negative integers, and coefficients A,B,C are rational numbers. However, the three terms in (<ref>) are the same as the three terms in (<ref>), and therefore must be integers. Now, given integers e_1, …, e_N_1, which integers m can be represented in the form m = C ∏_k=1^N_1 U_k^e_k? In other words, for which integers m is equation ∏_k=1^N_1 U_k^e_k = m/C solvable in integers U_1, …, U_N_1? Let us consider cases (i) all e_k≤ 0, (ii) not all e_k are of the same sign, and (iii) all e_k ≥ 0. * In case (i), (<ref>) is equivalent to ∏_k=1^N_1 U_k^|e_k| = C/m and may be solvable in integers only if C is an integer and m is a divisor of C. Hence, there are only finitely many possible values of m. * In case (ii), Proposition <ref> implies that (<ref>) is solvable if and only if √(m/C) is a rational number, where d = gcd(e_1, …, e_N_1). Representing C=s/q and √(m/C)=√(qm/s)=U/V as irreducible fractions, we obtain V^d q m = U^d s. Because gcd(U,V)=1, V^d must be a divisor of s, hence there are only finitely many possible values for V, say v_1, …, v_K. If V=v_i, then m=U^d s/v_i^d q. For m to be an integer, we must have U^d to be divisible by q. By the N=1 case of Lemma <ref>, this happens if and only if U is divisible by q^*, where q^* is the smallest positive integer such that (q^*)^d is divisible by q. Then substitution U=q^* U' results in m=s/v_i^d(q^*)^d/q(U')^d. We remark that U' is an integer variable and s/v_i^d(q^*)^d/q is an integer coefficient. From (<ref>), we may find original variables U_1, …, U_N_1 as functions of U'. Indeed, (<ref>) reduces to ∏_k=1^N_1 U_k^e_k/d = e√(m/C) = e U/v_i = eq^* U'/v_i, where e=± 1 if d is even and e=1 if d is odd. This equation can be rewritten as a two-monomial equation (<ref>), and its solution is given by (<ref>). * In case (iii), if C=s/q is an irreducible fraction, then m=(s/q)∏_k=1^N_1 U_k^e_k is a non-zero integer if and only if ∏_k=1^N_1 U_k^e_k is a non-zero integer divisible by q. By Lemma <ref>, there is a finite set M of N_1-tuples d=(d_1, …, d_N_1) of positive integers such that (1/q)∏_k=1^N_1 U_k^e_k is an integer if and only if there exists d ∈ M such that U'_k=U_k/d_k are integers for all k=1,…, N_1. Then m = s∏_k=1^N_1 U_k^e_k/q = s ∏_k=1^N_1 d_k^e_k/q∏_k=1^N_1 (U'_k)^e_k for some d ∈ M. Because ∏_k=1^N_1 d_k^e_k is divisible by q by Lemma <ref> (i), this expression involves integer variables U'_k and integer coefficient. Now, let us replace the term C ∏_k=1^N_1 U_k^e_k in (<ref>) by some divisor of C in case (i), by (<ref>) for some i in case (ii), and by (<ref>) for some d ∈ M in case (iii). Let us do the same replacements for the other two terms in (<ref>). This transforms (<ref>) into a finite number of three-monomial equations with integer variables and coefficients, non-negative exponents, and independent monomials. We remark that all prime factors of U_k, V_k and W_k belong to the disjoint classes P_1, P_2, and P_3, respectively, hence it is sufficient to find only primitive solutions to these equations. 
Once all these equations are solved, it is easy to find integer solutions to (<ref>), and then absolute values of the solutions to the original equation (<ref>) are |x_i| = ∏_p ∈ P p^z_ip = ∏_p ∈ P_1 p^z_ip^0+∑_k=1^N_1u_kpe_ki∏_p ∈ P_2 p^z_ip^0+∑_k=1^N_1u_kpf_ki∏_p ∈ P_3 p^z_ip^0+∑_k=1^N_1u_kpg_ki = = (∏_p ∈ P_1 p^z_ip^0∏_p ∈ P_2 p^z_ip^0∏_p ∈ P_3 p^z_ip^0) ∏_k=1^N_1 U_k^e_ki∏_k=1^N_2 V_k^f_ki∏_k=1^N_3 W_k^g_ki, i=1,…, n, where we have used (<ref>)-(<ref>) and (<ref>). Once |x_i| are found, it is easy to find the signs of the variables and finish the solution of (<ref>). In many examples, the resulting equations with independent monomials are trivial, hence the reduction in the proof of Theorem <ref> instantly solves the original equation. As an illustration, assume that α_i, β_i, γ_i in (<ref>) are such that both systems (<ref>) and (<ref>) are solvable in non-negative integers, so that (<ref>) can be instantly solved by Proposition <ref>. Then the first equation in (<ref>) implies that z_i = ∑_k=1^N_1 u_k e_ki, i=1,…, n, where u_1, …, u_N_1 are some non-negative integers, and (e_k1, …, e_kn), k=1,…, N_1, are the minimal solutions to this equation. Then, by (<ref>), 1 = ∑_i=1^n(γ_i-α_i) z_i = ∑_i=1^n(γ_i-α_i) ∑_k=1^N_1 u_k e_ki = ∑_k=1^N_1 u_k ∑_i=1^n (γ_i-α_i) e_ki = ∑_k=1^N_1 u_k e_k, where the last equality follows from (<ref>). By similar argument, the solvability of (<ref>) implies that -1 = ∑_k=1^N_1 u'_k e_k for some non-negative integers u'_k. This implies that not all e_k are of the same sign, and d=gcd(e_1,…,e_N_1)=1, hence the algorithm in the proof of Theorem <ref> replaces the term C ∏_k=1^N_1 U_k^e_k in (<ref>) by C' (U')^d = C' U' for some integer coefficient C' and integer variable U'. In conclusion, the original equation is reduced to a finite number of trivial equations of the form (<ref>) with U' in place of x_i. In other words, Proposition <ref> corresponds to the trivial case of Theorem <ref> when the reduction is to equations of the form (<ref>). Now assume that system (<ref>) is solvable in non-negative integers, while system (<ref>) is not. In this case, (<ref>) is true but (<ref>) is not. By (<ref>), gcd(e_1,…,e_N_1)=1, hence (<ref>) may fail only if all e_k are non-negative. But then (<ref>) implies that e_j=1 for some 1≤ j ≤ n. If e_i=0 for all i≠ j, then term C ∏_k=1^N_1 U_k^e_k in (<ref>) reduces to C U_j, hence the reduced equation is of the form (<ref>). If, conversely, e_i>0 for some i≠ j, then term C ∏_k=1^N_1 U_k^e_k in (<ref>) contains multiplier U_j U_i^e_i, hence the whole equation is of the form (<ref>), and can be solved as explained in Section <ref>. We have just proved the following result. There is an algorithm for describing all integer solutions to any three-monomial Diophantine equation that can be written in the form (<ref>) such that system (<ref>) is solvable in non-negative integers. §.§ Examples §.§.§ Some simple examples In this section, we give simple examples of three-monomial equations for which (a) Proposition <ref> is applicable; (b) Proposition <ref> is not applicable, but Proposition <ref> is; (c) Proposition <ref> is not applicable, but the algorithm in the proof of Theorem <ref> can still solve the equation completely. Obviously, for each three-monomial equation, there are three ways to write it in the form (<ref>). Proposition <ref> states that if system (<ref>) is solvable for at least one of these three ways, then the algorithm works. As a simple example, equation x^3-y^2z-z=0 can be written in the form (<ref>) as x^3-y^2z=z. 
If we rename variables as x=x_1, y=x_2 and z=x_3, this is (<ref>) with (α_1,α_2,α_3,β_1,β_2,β_3,γ_1,γ_2,γ_3)=(3,0,0,0,2,1,0,0,1). Then system (<ref>) 3z_1 = 2z_2+z_3 = z_3 - 1 has no solutions in non-negative integers. However, the same equation (<ref>) can also be written in the form (<ref>) as y^2z+z=x^3. In this case, (α_1,α_2,α_3,β_1,β_2,β_3,γ_1,γ_2,γ_3)=(0,2,1,0,0,1,3,0,0). Then systems (<ref>) and (<ref>) are 2z_2+z_3 = z_3 = 3z_1 - 1 and 2t_2+t_3 = t_3 = 3t_1 + 1, respectively. These systems are solvable in non-negative integers, hence Proposition <ref> is applicable. For example, we may take (z_1,z_2,z_3)=(1,0,2) and (t_1,t_2,t_3)=(0,0,1). Substituting this into the general formula (<ref>), we obtain (x,y,z) = (u_2^2u_3+u_3/wu_1, u_2, (u_2^2u_3+u_3)^2u_1^3/w^3u_3), u_i ∈ℤ, w ∈ D_1(u_2^2u_3+u_3) ∩ D_1(u_1^3). As a next example, let us consider the equation x^3-y^2z-y=0, which can be written in the form (<ref>) as y^2z+y=x^3, or x^3-y^2z=y, or x^3-y=y^2z. Up to the names of the variables, this is (<ref>) with (α_1,α_2,α_3,β_1,β_2,β_3,γ_1,γ_2,γ_3) equal to (0,2,1,0,1,0,3,0,0), (3,0,0,0,2,1,0,1,0) and (3,0,0,0,1,0,0,2,1), respectively. The corresponding systems (<ref>) and (<ref>) are 2z_2+z_3 = z_2 = 3 z_1 - 1 and 2t_2+t_3 = t_2 = 3 t_1 + 1, 3 z_1 = 2z_2+z_3 = z_2 - 1 and 3 t_1 = 2t_2+t_3 = t_2 + 1, and 3 z_1 = z_2 = 2z_2+z_3 - 1 and 3 t_1 = t_2 = 2t_2+t_3 + 1, respectively. It is easy to see that only the last system in z_i is solvable in non-negative integers, with, for example (z_1,z_2,z_3)=(0,0,1) being a solution. However, the corresponding system 3 t_1 = t_2 = 2t_2+t_3 + 1 has no solutions in non-negative integers. Hence, Proposition <ref> is not applicable to equation (<ref>). However, Proposition <ref> is still applicable, and it guarantees that (<ref>) can be solved by reduction to equation(s) of the form (<ref>). And indeed, we can rewrite (<ref>) as y(yz+1)=x^3. Integers y and yz+1 are coprime, and their product can be a perfect cube only if they both are perfect cubes. Then y=v^3 and yz+1=u^3 for some integers u and v. This implies that v^3 z + 1 = u^3. This equation is of the form (<ref>) and it is easy to solve: just let u ∈ℤ be arbitrary, let v be any element of set D_3(u^3-1), and then express z as z=u^3-1/v^3. Then x=√((yz+1)y)=√(u^3v^3)=uv, so the final answer is (x,y,z) = (uv, v^3, u^3-1/v^3), u ∈ℤ, v ∈ D_3(u^3-1). Our next example is the equation x+x^2y-yz^2=0. It can be written in the form (<ref>) as x+x^2y=yz^2, or x-yz^2=-x^2y, or x^2y-yz^2=-x. Up to the names of the variables, this is (<ref>) with (α_1,α_2,α_3,β_1,β_2,β_3,γ_1,γ_2,γ_3) equal to (1,0,0,2,1,0,0,1,2), (1,0,0,0,1,2,2,1,0), and (2,1,0,0,1,2,1,0,0), respectively. The corresponding systems (<ref>) are z_1 = 2z_1+z_2 = z_2+2z_3 - 1, z_1 = z_2+2z_3 = 2z_1+z_2 - 1, and 2z_1+z_2 = z_2+2 z_3 = z_1 -1, respectively. It is easy to see that none of these systems have a solution in non-negative integers, hence Proposition <ref> is not applicable to this equation. However, we can still use the reduction in the proof of Theorem <ref> to solve the equation. Let (x,y,z) be any solution to (<ref>) with xyz ≠ 0. For any prime p, let x_p, y_p and z_p be the powers with which p enters the prime factorizations of x,y,z, respectively. Then p enters the prime factorizations of the monomials of (<ref>) in the powers x_p, 2x_p+y_p and y_p+2z_p, respectively. By Lemma <ref>, one of the following systems is true x_p = 2x_p + y_p ≤ y_p + 2z_p, x_p = y_p + 2z_p < 2x_p + y_p, 2x_p + y_p = y_p + 2z_p < x_p. 
Let P_1, P_2, P_3 be the sets of primes satisfying each of these systems. Note that P_3 is an empty set, because inequality 2x_p + y_p < x_p is impossible for non-negative x_p, y_p. Now, let u_x = ∏_p ∈ P_1 p^x_p, v_x = ∏_p ∈ P_2 p^x_p, u_y = ∏_p ∈ P_1 p^y_p, v_y = ∏_p ∈ P_2 p^y_p, u_z = ∏_p ∈ P_1 p^z_p, v_z = ∏_p ∈ P_2 p^z_p. Then x = e_x u_x v_x, y = e_y u_y v_y, z = e_z u_z v_z, where each e_x, e_y, e_z is either 1 or -1. For primes in P_1, equation x_p = 2x_p + y_p implies that x_p=y_p=0, hence u_x=u_y=1, thus x = e_x v_x, y = e_y v_y and z = e_z u_z v_z. Substituting this into (<ref>), we obtain e_x v_x + e_y v_x^2 v_y - e_y v_y u_z^2 v_z^2 = 0. For primes in P_2, we have x_p = y_p+2z_p, thus v_x = ∏_p ∈ P_2 p^x_p = ∏_p ∈ P_2 p^y_p+2z_p = ∏_p ∈ P_2 p^y_p(∏_p ∈ P_2 p^z_p)^2 = v_y v_z^2. After dividing both parts of (<ref>) by v_x = v_y v_z^2, we obtain e_x + e_y v_y^2 v_z^2 - e_y u_z^2 = 0, which is an equation with independent monomials in variables v_y, v_z, and u_z. After multiplying it by e_x, we can rewrite it as 1 + e v_y^2 v_z^2 - e u_z^2 = 0, where e=e_x e_y. We followed the steps in the proof of Theorem <ref> with the only aim to illustrate the general method. For individual equations, the reduction can often be performed much easier by equation-specific arguments. For example, for equation (<ref>), we can note that x=y(z^2-x^2), hence x=yt for integer t=z^2-x^2=z^2-(yt)^2, or equivalently t(1+y^2t)=z^2. Because factors t and 1+y^2t are coprime, their product can be a perfect square only if t = eu^2 and 1+y^2t = ev^2, where u,v are some integers and e is either 1 or -1. Then 1+y^2(eu^2) = ev^2, or 1 + ey^2 u^2 - e v^2 = 0. Up to the names of the variables, this is exactly the equation (<ref>). In this example, it can be easily solved by noticing that (yu)^2 and v^2 must be consecutive perfect squares, but the only consecutive perfect squares are 0 and 1, hence either yu=0 or v=0. If yu=0, then x=yt=y(eu^2)=0. If v=0, then e=-1, |y|=|u|=1, t=eu^2=-1, and x=yt=-y=-(± 1). In conclusion, all integer solutions to (<ref>) are given by (x,y,z) = (0,0,w), (0,w,0), (-1,1,0) and (1,-1,0), w ∈ℤ. §.§.§ Generalized Fermat equations Our next example is the equation x^2 + y^3 = z^5. This equation is known as “the icosahedral case” of the equation x^n+y^m=z^k. For such equations, it is an active research area to determine primitive solutions, that is, ones with gcd(x,y,z)=1. Non-primitive solutions are usually excluded from the consideration on the basis that it is trivial to construct infinite families of such solutions. In 2005, Edwards <cit.> described all primitive solutions to (<ref>). Up to changing x into -x, there are exactly 27 distinct parametrizations of such solutions. We will describe all integer solutions to (<ref>) using Proposition <ref>. The solutions with xyz=0 are easy to describe, so we may assume that xyz≠ 0. Equation (<ref>) is of the form (<ref>) with x_1=x, x_2=y, x_3=z, a=b=c=1, α_1=2, β_2=3, γ_3=5, and α_2=α_3=β_1=β_3=γ_1=γ_2=0. Systems (<ref>) and (<ref>) reduce to 2z_1=3z_2=5z_3-1 and 2t_1=3t_2=5t_3+1. Both systems are solvable in non-negative integers, with the smallest solutions being (z_1,z_2,z_3)=(12,8,5) and (t_1,t_2,t_3)=(3,2,1), respectively. Hence, (<ref>) reduces to x = (u_1^2+u_2^3)^12u_3^15/w^15 u_1, y = (u_1^2+u_2^3)^8u_3^10/w^10 u_2, z = (u_1^2+u_2^3)^5u_3^5/w^6 u_3, where u_1, u_2, u_3 ∈ℤ, w ∈ D_1(u_1^2+u_2^3) ∩ D_1(u_3^5). 
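For readers who wish to experiment, a short Python sketch can spot-check this parametrization numerically; the parameter choices below are arbitrary sample values, with w taken from the required divisor sets (w=1 is always admissible).

```python
from fractions import Fraction

def parametrized_xyz(u1, u2, u3, w):
    # Parametrization displayed above for x^2 + y^3 = z^5, with S = u1^2 + u2^3.
    S = u1**2 + u2**3
    x = Fraction(S**12 * u3**15, w**15) * u1
    y = Fraction(S**8 * u3**10, w**10) * u2
    z = Fraction(S**5 * u3**5, w**6) * u3
    return x, y, z

# w = 2 is admissible for (u1, u2, u3) = (1, 1, 2), since 2 divides both
# u1^2 + u2^3 = 2 and u3^5 = 32.
for (u1, u2, u3, w) in [(1, 1, 1, 1), (1, 1, 2, 2), (2, 3, 1, 1), (3, -2, 1, 1)]:
    x, y, z = parametrized_xyz(u1, u2, u3, w)
    assert x**2 + y**3 == z**5
    print(f"u=({u1},{u2},{u3}), w={w}: x={x}, y={y}, z={z}")
```

Of course, this only spot-checks the algebraic identity for particular parameter values; that every integer solution arises in this way is exactly what the preceding argument establishes.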
In general, Proposition <ref> is applicable to the equation x^n + y^m = z^k if and only if exponents n,m,k satisfy min(gcd(nm,k),gcd(nk,m),gcd(mk,n))=1, that is, there exists an exponent coprime with the other two. If d=gcd(n,m,k)≥ 3, then (<ref>) has no solutions with xyz≠ 0 by FLT. The case d=2 requires more care, but the most problematic case is when n=ab, m=ac, k=bc for some distinct pairwise coprime positive integers a,b,c. In this case, Theorem <ref> reduces (<ref>) to the problem of finding primitive solutions to the same equation. The latter solutions are known in the case (a,b,c)=(1,2,t) and its permutations, see <cit.>. All other cases would follow from Tijdeman-Zagier conjecture, which predicts that (<ref>) has no primitive solutions with xyz≠ 0 provided that min(n,m,k)≥ 3. This conjecture is very difficult in general, and a million-dollars prize is offered for its resolution <cit.>. In light of the above discussion, it is interesting to investigate whether Tijdeman-Zagier conjecture becomes any easier in the special case (n,m,k)=(ab,ac,bc) for pairwise coprime a,b,c. In this case, (<ref>) is equivalent to 1+(y^c/x^b)^a=(z^c/x^a)^b, hence the question reduces to investigating rational solutions (X,Y) to the equation X^b - Y^a = 1. We remark that the famous Catalan's conjecture proved by Mihăilescu <cit.> in 2004 states that the last equation has no solutions in integers X,Y,a,b ≥ 2 other than 3^2-2^3=1. However, allowing X,Y to be rational makes the problem more difficult. See <cit.> and <cit.> for some partial results in this direction. §.§.§ Cyclic three-monomial equations Our next example is the resolution of all three-monomial equations obeying certain symmetry. Let us call equation P(x_1, x_2, …, x_n)=0 cyclic if P(x_1, x_2, …, x_n) = P(x_2, …, x_n, x_1). It is easy to see that the only irreducible cyclic equations with three monomials are (i) three-monomial equations in one variable, (ii) equations of the form ax^n + bx^my^m + ay^n=0 for some integers a,b and non-negative integers n,m, and (iii) equations of the form x^a y^b + y^a z^b + z^a x^b = 0, for some non-negative integers a,b. Equations in case (i) are trivial, ones in case (ii) are covered in Section <ref>, so it is left to solve (<ref>). If d=gcd(a,b)=2, then all monomials in (<ref>) are non-negative, hence the only solution to (<ref>) is x=y=z=0. If d≥ 3, then (<ref>) has no solutions with xyz≠ 0 by FLT <cit.>. Hence, we may assume that a,b are coprime. For any prime p, let x_p, y_p and z_p be the exponents with which p enters the prime factorizations of x, y and z, respectively. Then p enters the prime factorizations of the monomials in (<ref>) with the exponents ax_p+by_p, ay_p+bz_p and az_p+bx_p. Hence, by Lemma <ref>, we have either (i) ax_p+by_p=ay_p+bz_p ≤ az_p+bx_p, or (ii) ax_p+by_p=az_p+bx_p < ay_p+bz_p, or (iii) ay_p+bz_p=az_p+bx_p < ax_p+by_p. In case (i), we have a(x_p-y_p)=b(z_p-y_p). Because a and b are coprime, this implies that x_p-y_p=k_p b and z_p-y_p=k_p a for some integer k_p. Then z_p-x_p=k_p(a-b). After dividing all monomials in (<ref>) by p^ax_p+by_p=p^ay_p+bz_p, we obtain that p enters the prime factorization of the third monomial only, and the corresponding exponent is (az_p+bx_p)-(ax_p+by_p) = a(z_p-x_p)+b(x_p-y_p)= k_p(a^2-ab+b^2). The cases (ii) and (iii) are similar. In conclusion, after cancelling all common factors of the monomials of (<ref>), we obtain three monomials, and each of them is the m-th power of an integer, where m=a^2-ab+b^2. If m ≥ 3, then xyz=0 by FLT. 
If m=a^2-ab+b^2 < 3, then (a,b)=(1,1) or (1,0) or (0,1). In these cases equation (<ref>) is easy to solve. §.§.§ Three-monomial equations of small degree It is natural to classify the equations by degree. Let us say that two equations belong to the same family if they differ only by coefficients. It is obvious that, up to the names of the variables, there is only a finite number of families of equations with given degree and number of monomials. The only families of quadratic three-monomial equations not solvable by Proposition <ref> are ax^2+by^2+c=0 and ax^2+by^2+cz^2=0. The first family is essentially the general Pell's equation, whose solution methods are well-known, see e.g. <cit.>. The second family has been studied by Legendre, see <cit.> for a modern treatment. In conclusion, all quadratic three-monomial equations are easy, and the first interesting case is cubic equations. After excluding some trivial families, e.g. when all monomials share a variable or equations in at most two variables, we identified 96 families of three-monomial cubic equations, and for 88 of them the condition of Proposition <ref> holds, which implies that all these equations are automatically solvable. These families are listed in Table <ref>. The remaining 8 families are listed in Table <ref>. The first three families in Table <ref> can be reduced by Theorem <ref> to quadratic two-variable equations of the form Au^2+Bv^2+C=0, which can then be solved by standard methods <cit.>. The last 5 families are homogeneous cubic three-variable equations that can be interpreted as elliptic curves in projective coordinates. Every integer solution (x_0,y_0,z_0) to such equation with gcd(x_0,y_0,z_0)=1 produces an infinite family of solutions of the form (x_0u, y_0u, z_0u), u ∈ℤ, which can be denoted as (x_0 : y_0 : z_0) and interpreted as a (projective) rational point on the corresponding elliptic curve. To describe all such points, we need to compute the rank and the list of generators of its Mordell-Weil group. For the last problem, no general algorithm is known, but there are methods that work for a vast majority of specific curves with not-too-large coefficients <cit.>. For quartic equations, we identified 60 families that are not solvable by Proposition <ref>. Table <ref> lists these families, together with equations with independent monomials to which they can be reduced to by Theorem <ref>. 31 families are reduced to equations in one variable, two-variable equations of the form (<ref>), or three-variable quadratic equations, which are easily solvable. 23 families reduce to finding rational points on elliptic or super-elliptic curves, which is a difficult problem in general, but it can be solved by existing methods for many individual equations, especially with small coefficients. Finally, 6 families reduce to equations of the form Au^n v^m+Bw^k+C=0, for integers n,m,k≥ 2 such that gcd(n, m)=1, most of which are difficult even for small coefficients. For example, a wide open conjecture of Schinzel and Tijdeman <cit.> predicts that for any polynomial P(w) with integer coefficients with at least three simple zeros, there may exist at most finitely many integers w such that P(w) is a powerful number, that is, a number representable as u^3v^2 for integers u,v. If P(w) consists of two monomials, this leads to some difficult three-monomial equations in 3 variables. As a simple example, P(w)=w^3+1 leads to equation u^3v^2=w^3+1 that seems to be difficult. 
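To get a feel for how sparse such powerful values are, a naive computational search is instructive; the sketch below uses SymPy's factorint to test whether w^3+1 is powerful (equivalently, representable as u^3v^2) for small w. It only exhibits small solutions and says nothing about finiteness.

```python
from sympy import factorint

def is_powerful(n):
    # A positive integer is representable as u^3 * v^2 exactly when every
    # prime in its factorization occurs with exponent at least 2.
    return n >= 1 and all(e >= 2 for e in factorint(n).values())

# Naive search for small w such that u^3 * v^2 = w^3 + 1 is solvable.
# w = 2 is a hit, since 2^3 + 1 = 9 = 1^3 * 3^2; further hits, if any, are rare.
for w in range(3000):
    if is_powerful(w**3 + 1):
        print(w, factorint(w**3 + 1))
```

Such a search, of course, cannot decide whether the number of solutions is finite, which is what the conjecture mentioned above would predict.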
Examples of three-monomial quartic equations equivalent to (<ref>) are x^3y + yz^2 + z = 0 and x+x^2y^2 + z^3 = 0. §.§.§ Random equations What proportion of general three-monomial equations are solvable by Proposition <ref>? To answer this question empirically, we study a simple model of a random three-monomial equation: fix the number of variables n and integer d>0, and select all degrees α_i, β_i and γ_i in (<ref>) to be independent uniformly random integers on the interval [0,d]. We do not need to select coefficients a,b,c, because the condition of Proposition <ref> does not depend on them. For each 3≤ n ≤ 10 and for d=10^m, m=1,…,5, we empirically estimated the proportion of equations satisfying the condition of Proposition <ref>, see Table <ref>. As we can see, for a fixed number of variables n, the proportion of solvable equations decreases with d but not fast, and seems to be approaching a limit: the data for d=10^4 and d=10^5 are similar. On the other hand, for any fixed d, this proportion quickly increases with n. For d=10^5, only about 20% of 3-variable equations satisfy the condition of Proposition <ref>, but this proportion increases to almost 98% for 10-variable equations. The data suggests that if n→∞, then 100% of the three-monomial equations are solvable by Proposition <ref>. § CONCLUSIONS This paper develops the general method for solving all three-monomial two-variable Diophantine equations. The result may seem quite surprising, given that this family includes some equations like (<ref>) that were thought to be difficult. We are confident that this result is “the best possible one could hope for” without a major breakthrough, in the sense that it seems very difficult to resolve (i) all four-monomial equations in two variables or (ii) all three-monomial equations in three variables. Family (i) contains some well-known open equations, see <cit.>, and its algorithmic resolution looks no easier than the resolution of all two-variable Diophantine equations. Family (ii) contains, for example, equations of the form (<ref>), which are known as generalized Fermat equations and are the subject of current deep research <cit.>. Our algorithm for solving three-monomial two-variable Diophantine equations reduces any equation in this family, which is not covered by Theorem <ref>, to a finite number of equations of the form (<ref>) that can be solved by Theorem <ref>. This algorithm is in general inefficient for two reasons. Firstly, the number of the resulting equations (<ref>) may grow rapidly with the coefficients of the original equation. Secondly, the known algorithms for solving individual equations of the form (<ref>) are quite slow in the worst case. However, the method is practical for solving individual three-monomial two-variable equations with not-too-large exponents and coefficients, see Tables <ref> and <ref> for some examples. The main idea of the proof is the easy but extremely useful Lemma <ref>, that allows us to conclude that the exponents with which every prime p enters the prime factorization of variables must satisfy one of three linear equations. Then these equations can be solved using the well-known theory of linear equations in non-negative integer variables. We have also applied the same idea to general three-monomial equations, in any number of variables. This allowed us to reduce any such equation to a finite number of equations with independent monomials. 
In many examples, the resulting equations are easy or well-known, hence the described reduction completely solves the original equations. An important example of a family of three-monomial equations that can be completely solved by the presented method is the family of equations covered by Proposition <ref>. Empirical results suggest that the proportion of three-monomial equations in this family approaches 100% of all three-monomial equations in n variables of any fixed degree as n goes to infinity. Hence, our results are the most useful in the two “opposite” cases: n=2 and n large. In this sense, the most difficult case is n=3 variables, where empirical data suggest that only about 20% of the three-monomial equations are covered by Proposition <ref>. The simplest-looking examples of difficult three-monomial equations in 3 variables are the six families of quartic equations listed at the bottom of Table <ref>, as well as quintic equation (<ref>) with independent monomials. § ACKNOWLEDGEMENTS The authors are grateful to the referee for very detailed comments and suggestions, which helped to improve the quality of the paper.
http://arxiv.org/abs/2307.00521v1
20230702090952
zkFi: Privacy-Preserving and Regulation Compliant Transactions using Zero Knowledge Proofs
[ "Naveen Sahu", "Mitul Gajera", "Amit Chaudhary" ]
cs.CR
[ "cs.CR" ]
[email protected] [email protected] [email protected] We propose a middleware solution designed to facilitate seamless integration of privacy using zero-knowledge proofs within various multi-chain protocols, encompassing domains such as DeFi, gaming, social networks, DAOs, e-commerce, and the metaverse. Our design simultaneously achieves two seemingly divergent goals: zkFi preserves consumer privacy while achieving regulatory compliance through zero-knowledge proofs. The zkFi protocol is designed to function as a plug-and-play solution, offering developers the flexibility to handle transactional assets while abstracting away the complexities associated with zero-knowledge proofs. Notably, specific expertise in zero-knowledge proofs (ZKP) is not required, owing to zkFi's modular approach and the availability of a software development kit (SDK). zkFi: Privacy-Preserving and Regulation Compliant Transactions using Zero Knowledge Proofs Naveen Sahu, Mitul Gajera, Amit Chaudhary June 2023 ======================================================================================================= § INTRODUCTION Achieving privacy in blockchain applications presents unique challenges, often requiring trade-offs between user experience and privacy. The transparent nature of conventional blockchains reveals all transaction data, including addresses, assets involved, amounts, smart-contract data, and timestamps, to the public. It is analogous to using a regular bank account while revealing all private financial information to everyone, which deters the mass adoption of blockchain and digital asset technology. As this space continues to evolve and more institutional and individual users engage in activities on these applications, privacy will become a paramount concern, creating the biggest hurdle to mainstream adoption. Individuals contemplating the adoption of blockchain-based payment systems may be considerably hesitant if their salaries or other confidential financial details, such as payments for medical services and their online purchases, are accessible to the public. This demand for privacy will also come from social networking platforms, decentralized lending protocols, philanthropic platforms, e-commerce, gaming, and other protocols whose users want to safeguard the privacy of their information. While there is a clear need for privacy solutions, regulatory scrutiny of privacy protocols necessitates practical and fair measures that deter bad actors from engaging in illicit on-chain activity. Selective de-anonymization, as described in <cit.>, lays out a method for allowing traceability. Involuntary de-anonymization can serve as a flagship regulation-compliance technique when a malicious actor refuses to comply with the law. In this paper, we propose a privacy-preserving solution with built-in regulatory compliance tools using zero-knowledge proofs, with the following features: * A general-purpose, multi-chain privacy solution spanning multiple EVM chains. * A simple, composable, and flexible plug-and-play middleware solution available via an SDK. * Security through a built-in compliance solution with concrete AML practices. * A better user experience, using account abstraction and wallet integrations (with MetaMask Snaps). § LIMITATIONS IN CURRENT ARCHITECTURE At present, the most widely used programmable blockchains (e.g.
EVM-based chains such as Ethereum, Polygon, Optimism, and Arbitrum) offer benefits such as their permissionless nature, decentralization, and security, but these blockchains do not offer privacy. Alternative blockchain networks have aimed to create solutions from scratch to eliminate these pitfalls, but fail to approach the activity and value of the aforementioned public chains. This necessitates a solution to multiple problems on the public chain itself. Lack of Privacy A regular on-chain transaction exposes private data and transactions to the public. The sender/receiver entity, asset type and quantity, smart-contract data, timestamps, and more are conveniently organized and available to the general public through block explorers. This information can be used to track funds for targeted attacks, identify users, and extract sensitive information and patterns about their activity. These pitfalls prevent the adoption of revolutionary blockchain applications by many serious users, especially institutional investors and businesses. Weak Compliance The privacy problem implicitly poses another severe issue: compliance. How can robust compliance be put in place to prevent bad actors while maintaining user privacy? Enabling privacy on decentralized platforms is well known to attract malicious actors who abuse the platform, for instance for illicit activities like laundering stolen funds from hacks or preparing for one. Most of the time, these actors have succeeded because of the lack of firm AML (Anti-Money Laundering) practices to deter them. Lack of Infrastructure Building private applications on the blockchain has been made possible by advancements in Zero Knowledge (ZK). However, implementing ZK technology is complicated. Developers need specialized knowledge and resource investment in ZK development. This creates overhead and diverts resources from developing the core components of their applications. Poor User Experience These distractions and overheads, due to the lack of middleware privacy solutions, lead to a subpar developer and user experience. Even if an application develops its own privacy layer, the UI/UX remains challenging for users. While UX is still an area to be improved in web3 generally, it is even worse in the ecosystem of privacy-preserving applications. §.§ Previous Solutions The ZCash blockchain was among the first to tackle privacy by facilitating anonymous transactions. While the innovations there have been impressive, they could not reach the intended adoption, nor did they offer any programmability, being restricted to peer-to-peer transactions and hence missing out on prominent application-level use cases such as DeFi. Monero, another private chain, more or less shares the same story. The Tornado protocol on Ethereum amassed significant usage despite a poor UX, but it overlooked compliance and became a go-to place for money laundering <cit.>. A portion of its volume has ties to large-scale hacks <cit.>, attracting serious attention from regulators. It, too, only offered peer-to-peer private transactions, lacking any interoperability beyond that. Aztec came up with a novel solution with their L2 roll-up approach that had DeFi compatibility to some extent.
However, it required users to bridge their assets back and forth and had significant waiting times, making it impractical for some kinds of DeFi interaction, e.g. swaps, because of slippage. Others also share the same problems, especially weak compliance guarantees and friction in the user experience. § ZKFI: A MIDDLEWARE SOLUTION To tackle the problems discussed above, zkFi offers a packaged solution that acts as a privacy middleware with built-in compliance. Privacy and compliance-related complexity are abstracted away by providing developers with an SDK that facilitates a plug-and-play solution. §.§ Privacy with Zero Knowledge Proofs (ZKPs) zkFi uses ZKPs to facilitate privacy at its core by achieving the following goals: * Performing secure and private transactions consisting of several underlying zero-knowledge proofs called JoinSplits. * Preventing double-spending of notes. * Verifiable encryption for selective de-anonymization. While there are multiple formulations of ZKP systems available and being researched, zkFi specifically utilizes a Groth16 zkSNARK system, which is currently the most suitable for on-chain privacy applications. §.§ Stronger Compliance Guarantees Compliance has been the conundrum for privacy protocols so far. zkFi aims to have an industry-standard compliance framework, including: * Selective De-anonymization: For an industry-standard AML practice, a process for the de-anonymization of user-specific private transaction data needs to be followed. It could be a voluntary de-anonymization, where the entity in question can share a general or per-transaction viewing key with a regulatory authority. In other cases (for malicious actors), involuntary de-anonymization may be enforced in response to a regulatory emergency or court order. The latter should be a multi-party process to prevent abuse of power. This is inspired by the research in <cit.>. * Deposit Limits: We put a fiat limit on the asset being transacted and/or the volume flowing through the protocol in a time period. There can be a provision to relax this limit for specific entities like a compliant business. * Privacy-friendly KYC: KYC is the essential first step toward onboarding a user. For this purpose, an external service may be used that serves the purpose in a privacy-friendly way, e.g. Quadrata[https://quadrata.com/]. * Risk Management and Screening The protocol sets up compliance and risk management integrations to identify and prevent any kind of illegal financial activity. This could be achieved by services like TRM Labs[https://www.trmlabs.com/] products and Chainalysis[https://go.chainalysis.com/chainalysis-oracle-docs.html] oracles for deposit and withdrawal screening. §.§ Pluggable Privacy With SDK By offering an SDK, a full set of compliant privacy features is instantly available to protocols and their developers (a purely illustrative sketch of such an integration is given after the list below). The SDK facilitates a simple, composable, plug-and-play solution that abstracts away every bit of ZKP- or compliance-specific complexity. This renders immense benefits to protocols, such as: * New protocols can focus on developing their core features without investing time and resources into ZK development. * Existing protocols do not have to modify their smart contracts for compatibility. It's a simple plug-and-play through the SDK. * Compliance issues often come as part of the privacy conundrum. But with zkFi, protocols will not have the burden of juggling privacy-related compliance practices.
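To convey the intended developer experience, the following is a purely illustrative sketch of what an integration through such an SDK might look like. None of the names or call signatures below are taken from the actual zkFi SDK, which is not specified in this paper; the point is only that proving, screening, and relaying stay hidden behind a couple of calls.

```python
# Hypothetical pseudo-interface only: the real zkFi SDK API is not documented here.
class ShieldedClientSketch:
    """Imaginary client wrapping proof generation, screening, and relaying."""

    def __init__(self, wallet, chain_id: int):
        self.wallet = wallet          # e.g. a browser wallet hosting the shielded account
        self.chain_id = chain_id

    def private_transfer(self, asset: str, amount: int, to_stealth_address: str):
        note = {"asset": asset, "value": amount, "to": to_stealth_address}
        proof = self._prove_joinsplit(note)          # ZK details abstracted away
        return self._relay_via_bundler(note, proof)  # gas paid in the transacted asset

    def _prove_joinsplit(self, note):                # placeholder for client-side proving
        return b"proof"

    def _relay_via_bundler(self, note, proof):       # placeholder for an EIP-4337 relay
        return {"status": "submitted", "note": note}

client = ShieldedClientSketch(wallet=None, chain_id=1)
print(client.private_transfer("USDC", 1_000, "zk1qexamplestealthaddr"))
```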
In addition to the advantages mentioned above, the flexibility of interaction with the integrating protocol remains in the hands of the developer. Given private user assets and/or data as input, developers are totally free to write any custom logic to apply to the inputs. See section <ref> for implementation details. §.§ Account Abstraction and UX The advent of the EIP-4337 <cit.> account abstraction proposal allowed for significant improvements in user experience across protocols. One such improvement is the ability to execute gasless transactions, where smart contracts pay the gas fees in ERC-20 tokens. In privacy protocols, there exists a common problem referred to as the gas dilemma or fee payment dilemma: if users need to pay the gas fees from their own wallet to execute their transactions, then this gas payment discloses the user's public profile. zkFi uses EIP-4337 so that the gas fee can be paid in ERC-20 tokens, making transactions on Privy a gas-less experience. The flexibility of gas payment is left to the integrating protocol, allowing it to sponsor transaction fees or charge its users in any desired way. In the case of peer-to-peer transactions, a user simply pays gas in the transacted assets. This is facilitated through a custom EIP-4337 compatible Paymaster contract that pays the gas (in the native token) on users' behalf in exchange for a small fee in any supported asset. Moreover, making shielded account keys available within browser wallets, starting with MetaMask through its Snaps[https://metamask.io/snaps/] integration, would allow protocols to offer private transactions within their custom UIs while sensitive user keys live behind the battle-tested security of the wallet itself. To transact user assets, the UIs would simply request a signature from the shielded account using the provided SDK. § USE CASES OF ZKFI Pluggable Privacy for DeFi protocols As mentioned previously in section <ref>, the design of the infrastructure allows any existing DeFi protocol to integrate seamlessly. For instance, consider Aave[https://aave.com/], which is a lending/borrowing liquidity protocol. Aave users normally supply assets to one of the Aave markets and get interest-bearing tokens, a.k.a. aToken[https://docs.aave.com/developers/tokens/atoken], in return. For Aave to let its users supply assets anonymously, it would just require a simple stateless proxy contract, let's call it AaveProxy. The job of AaveProxy is just to take an asset and return the corresponding aToken asset by talking to APIs already written by Aave in their core contracts. The proxy is very loosely coupled: no ZK programming is required at all, nor any changes to Aave's existing contracts. The same kind of seamless integration is possible with other protocols, for earning interest privately on Compound, doing private swaps on Uniswap, staking privately on Lido, and many more. Private Payments via Stealth Address While normal peer-to-peer private transactions are supported, one may opt to receive assets as payments to a stealth address, which preserves anonymity since one can share a random-looking address each time. So, for instance, payment links can be generated by a receiver and shared with a sender such that the shared link has no traceability to any previous payment. This is similar to sharing payment links in Stripe[https://stripe.com/in/payments/payment-links] but with anonymity.
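The following toy sketch illustrates the one-time address mechanism behind such payment links. A multiplicative group modulo a prime stands in for the Baby Jubjub curve and SHA-256 for Poseidon, so it is illustrative only and offers no security; the exact scheme is specified in the Stealth Address subsection below.

```python
# Toy numeric sketch of a stealth-address flow; not the production parameters.
import hashlib

q = 2**61 - 1          # group modulus stand-in (a Mersenne prime)
g = 3                  # generator stand-in

def mul(scalar: int, point: int = g) -> int:      # "scalar * G" written as g^scalar mod q
    return pow(point, scalar, q)

def H(x: int) -> int:
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % (q - 1)

# Receiver's long-term keys
s, p = 123456789, 987654321          # spend and view private keys
S, P = mul(s), mul(p)                # corresponding public keys

# Sender derives a one-time (stealth) public key
r = 55555
R = mul(r)                           # published alongside the payment
shared_sender = mul(r, P)            # r * P
S_stealth = (mul(H(shared_sender)) * S) % q      # H(shared)*G + S, written multiplicatively

# Receiver detects and spends: p * R yields the same shared secret
shared_receiver = mul(p, R)
assert S_stealth == (mul(H(shared_receiver)) * S) % q
s_stealth = (H(shared_receiver) + s) % (q - 1)   # one-time private key
assert mul(s_stealth) == S_stealth
print("stealth address detected and spendable")
```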
Shielded Wallet for protocol UIs Via its integration directly into existing crypto wallets, starting with the MetaMask Snaps integration, any protocol may request spending of a user's private assets instead of public assets from a normal wallet. This frees protocols to define their own custom UIs on their own domains and request interaction with the Shielded Wallet however they want. This adds up to a better user experience thanks to built-in account abstraction features like gasless transactions. zkFi strives to offer a general solution, so it is future-compatible with other use cases that may arise as the need for privacy is realized more and more. § ARCHITECTURE The diagram in figure <ref> shows an architectural overview of the system with the involved actors and their interactions with each other: KYC Provider An external privacy-friendly service provider for KYC operations. This is a one-time process during user onboarding. Wallet Provider The entity responsible for signing valid private transactions. This includes regular browser wallets (e.g. MetaMask) acting as hosts to the Shielded Account with extra functions. The Shielded Account lives within the security boundary of the host wallet and is derived from the host wallet's private keys. User Interface The UI is connected to the Shielded Account through the wallet provider and can request shielded transaction approvals. The SDK consumers, i.e. protocols, can design their own protocol UIs with any framework of their choice (e.g. React, Remix, Vue). This could be to collect inputs from the user for the private DeFi interaction or to show user data. Proof generation on the client side is facilitated through a ZK Prover module, and the proof is sent along with the transaction. EIP-4337 Bundler A bundler sends the transactions to the network through a bundler node rather than directly from a wallet. This has two main benefits: the user avoids exposing their wallet address publicly, and gas can be paid with the shielded/private assets themselves. Gas Price Oracle A decentralized oracle used to look up the equivalent gas price in ERC-20 tokens. A DEX like Uniswap could be a suitable oracle. EIP-4337 Paymaster A custom EIP-4337 compatible paymaster that pays for the user operations in exchange for a fee cut in the asset being transacted. The fee is validated after consulting the Gas Price Oracle. zkFi Core The core of the zkFi protocol encapsulates a multi-transactional multi-asset pool, a Merkle tree of notes, and an on-chain ZK proof verifier. See section <ref>. DeFi Proxy DeFi Proxies are simple smart contracts that implement/extend a simple interface/base provided by the SDK. Doing this allows them to plug nicely into the Core and receive assets from it to perform any DeFi operation. See section <ref>. Wardens Wardens are multi-party entities that exist to perform the involuntary selective de-anonymization process upon strict requirement from a regulatory body or court. Wardens hold fragments of a secret key, which are used in an MPC process to decrypt transaction data when a certain minimum number of secrets are combined. See section <ref>. § BUILDING BLOCKS §.§ Shielded Account Let H be the Poseidon hash function defined at <ref> and G be the generator point on the Baby Jubjub elliptic curve as defined in section <ref>. A user holds a shielded account to sign valid transactions and to decrypt balances and transaction data. Unlike normal Ethereum accounts, a shielded account contains a couple of keypairs.
The related schemes are described below: * Spend Keypair (s, S) The spend private key is the most sensitive key; it can sign valid transactions that spend the user's funds. To generate this private key, a pre-defined message m is first signed by a regular Ethereum wallet and the key is then derived from the hash of the obtained signature: σ = 𝚂𝚒𝚐𝚗(m), s = H(σ). Deriving s from the wallet signature allows the user to recover the spend private key by signing the same m again, in case the key gets lost. The public key S is derived from s by simple EC multiplication: S = s * G. S is a point on the elliptic curve with coordinates denoted as (S.x, S.y). Now, we derive a "shielded address" (analogous to an Ethereum address [<https://ethereum.org/en/developers/docs/accounts/#account-creation>]) from this public key as: A = H(S.x, S.y). A can now receive shielded funds from anyone, and the user holding the spend private key s is able to use the funds associated with A. * View Keypair (p, P) The view keypair's purpose is to decrypt the user balances and transaction-related data. The private key can be derived from s as: p = H(s) and the corresponding public key as: P = p * G. p is utilized to encrypt the transaction data sent along with the transaction. Having a distinct keypair for read-only access brings multiple advantages, including revealing p for compliance purposes without giving up spend access, and allowing protocol websites read-only access to data to display on their custom UIs. The scheme can be further extended to transaction-specific viewing keys, giving the ability to reveal only selected transactions, or protocol-specific viewing keys, so that protocol websites only get to read the data relevant to them rather than the entire transaction history. §.§ Stealth Address Stealth addresses are one-time addresses that are randomly generated and associated with a note. The stealth address scheme described below is inspired by EIP-5564 <cit.>. To generate a stealth address for a particular receiver with public spend key S and public view key P, proceed as follows: * Generate a random scalar, r. * Calculate a shared secret, q = r * P. * Calculate the stealth public key S_stealth = H(q) * G + S. * Calculate the stealth address as A_stealth = H(S_stealth.x, S_stealth.y). * Calculate R = r * G and publish it to make it available to the receiver. To detect incoming transactions intended for the receiver, the receiver scans through transactions using the view key and checks whether S_stealth = H(p * R) * G + S holds. If it does, the transaction is for the receiver. Now, the receiver computes the stealth private key as: s_stealth = H(p * R) + s. The user can now use s_stealth to sign a valid transaction to spend the associated funds. §.§ Core The Core smart contracts of the zkFi protocol are gated by a guard for compliance purposes; the flow reaches the Core only after passing these guard checks. The Core includes a multi-transactional multi-asset pool, meaning the pool supports multiple assets and has the ability to transact multiple assets in a single transaction. The on-chain ZK Verifier verifies the proof submitted during the transaction. §.§.§ Setup Let 𝒯 be the Merkle tree where each node is the hash of its children, hashed with H. The tree leaf nodes are subsequently filled with 𝐜𝐨𝐦𝐦𝐢𝐭𝐦𝐞𝐧𝐭s, which are hashes of 𝐧𝐨𝐭𝐞s defined as N ≡ (j, A, v, r), where j ∈𝔹^160 is the asset identifier, v ∈𝔹^248 is the associated monetary value, r ∈𝔹^248 is a random salt, and A is a shielded address (possibly a stealth address) that can use or spend N.
The note 𝐜𝐨𝐦𝐦𝐢𝐭𝐦𝐞𝐧𝐭, h_N, is: h_N = H(N) = H(j || A || v || r). Let σ be the signature generated with the spend key s that authenticates the ownership of N: σ = 𝙴𝚍𝙳𝚂𝙰_𝚂𝚒𝚐𝚗(h_N, s) where 𝙴𝚍𝙳𝚂𝙰_𝚂𝚒𝚐𝚗 is as defined in <ref>. Let us call f a nullifier, defined as: f ≡ (l, σ) where l is the index of the note commitment h_N as a leaf node in 𝒯. Then, the nullifier hash is calculated as follows: h_f = H(f) = H(l || σ). Let us define the set of public inputs corresponding to an asset j as: ρ≡ (R, j, V_in, V_out, 𝐡_𝐟_in, 𝐡_𝐍_out) where * R is the root of 𝒯. * V_in and V_out are the public input and output values for asset j. * 𝐡_𝐟_in = (h_f1, h_f2, ...h_fm) is the list of nullifier hashes corresponding to the m input notes 𝐍_in = (N_in1, N_in2, ...N_inm). * 𝐡_𝐍_out is the list of commitments of the output notes 𝐍_out = (N_out1, N_out2). Now, for n assets j = (j_1, j_2, ...,j_n), we define the multi-dimensional public inputs, one ρ_i per asset j_i, as: ρ≡ (ρ_1, ρ_2, ...,ρ_n) Similarly, we define the set of private inputs as: ω≡ (𝐯_in, 𝐯_out, 𝐫_in, 𝐫_out, 𝐒_in, σ_in, 𝐀_out, 𝐥_in, 𝐞_in) where * 𝐯_in = (v_in1, v_in2, ...,v_inm) and 𝐯_out = (v_out1, v_out2) are the values of the input notes 𝐍_in and output notes 𝐍_out for an asset j. * 𝐫_in and 𝐫_out are the salts of the input notes 𝐍_in and output notes 𝐍_out for an asset j. * 𝐒_in = (S_in1, S_in2, ...,S_inm) are the public keys associated with the m input notes 𝐍_in. * σ_in are the signatures produced by signing the commitments of 𝐍_in with the private keys s_in corresponding to the public keys 𝐒_in. * 𝐀_out are the addresses associated with the output notes 𝐍_out. * 𝐥_in are the leaf indices of the input note commitments 𝐡_𝐍_in. * 𝐞_in are the openings in 𝒯 at indices 𝐥_in. For n different assets the private inputs can be denoted as: ω≡ (ω_1, ω_2, ...,ω_n) Then, given ρ and ω, let 𝒮 be the statement of knowledge defined as: 𝒮_ρ≡ Knowledge of ω ∀ j ∀ f, h_f = H(f) ∧∀ j ∀ (h_N, S, σ), 𝙴𝚍𝙳𝚂𝙰_𝚅𝚎𝚛𝚒𝚏𝚢(h_N, S, σ) = 1 ∧∀ j ∀ (l_in, e_in), e_in is the opening at l_in in 𝒯 ∧∀ j ∀ N_out, h_N_out = H(N_out) ∧∀ j, V_in + ∑v_in_k = V_out + ∑v_out_k where 𝙴𝚍𝙳𝚂𝙰_𝚅𝚎𝚛𝚒𝚏𝚢 is defined in <ref>. Let d_p and d_v be the proving and verifying keys created using a trusted setup ceremony. Let us define the SNARK proof generator function as: 𝙿𝚛𝚘𝚟𝚎 : 𝔹^* →𝔹^2048 ; (d_p, ρ, ω) ↦π where π is called the proof, and the proof verifier as: 𝚅𝚎𝚛𝚒𝚏𝚢 : 𝔹^* →{0, 1} ; (d_v, π, ρ) ↦ b where b is a single bit representing boolean true/false as the result of proof verification. §.§.§ Transact To perform a (multi) JoinSplit transact operation to spend its notes, the client proceeds as follows: * Identify the different assets 𝐣 = (j_1, j_2, ...j_n) needed for the transaction. * Scan the network to get the encrypted notes and decrypt them using the viewing key to get an array of notes 𝐍 for each asset j_i. * For each j_i, choose a number of notes 𝐍_in⊂𝐍, with commitment indices 𝐥_in in 𝒯, to spend. * Compute the nullifier hashes 𝐡_𝐟_in of the notes 𝐍_in. * Select a root R from the last δ (normally set to 100) roots of 𝒯 and calculate the tree openings 𝐞_in at 𝐥_in. * Determine the desired output notes 𝐍_out and calculate their commitments 𝐡_𝐍_out for each asset j_i. * Set V_in and V_out to control any public value going into or coming out of the contracts. * Calculate π by finalizing ρ as well as ω and by calling 𝙿𝚛𝚘𝚟𝚎 with d_p. * Send a request with the necessary transaction data to an EIP-4337 Bundler, which relays it to the contract. If π was calculated correctly and the statement is true, the on-chain 𝚅𝚎𝚛𝚒𝚏𝚢 function, a.k.a. the Verifier, outputs 1; otherwise it outputs 0, representing a bad or tampered π.
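To make the objects the prover commits to concrete, here is a minimal sketch of the note commitment and nullifier hashing described above, with SHA-256 standing in for Poseidon and simplified byte encodings; this is not the protocol's exact field layout.

```python
# Illustrative sketch only: SHA-256 replaces Poseidon, encodings are simplified.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def note_commitment(asset_id: bytes, address: bytes, value: int, salt: bytes) -> bytes:
    # h_N = H(j || A || v || r)
    return H(asset_id, address, value.to_bytes(31, "big"), salt)

def nullifier_hash(leaf_index: int, signature: bytes) -> bytes:
    # h_f = H(l || sigma): spending a note reveals h_f, so spending the same note
    # twice would reveal the same nullifier twice and be rejected on-chain.
    return H(leaf_index.to_bytes(4, "big"), signature)

asset, addr, salt = b"\x01" * 20, b"\x02" * 32, b"\x03" * 31
h_N = note_commitment(asset, addr, 1_000, salt)
h_f = nullifier_hash(42, b"\x04" * 64)   # signature over h_N with the spend key
print(h_N.hex(), h_f.hex())
```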
If verification is successful, the flow proceeds with transferring the necessary funds. §.§ DeFi Proxy A DeFi Proxy is a stateless contract that lives on-chain to act as a proxy for a target DeFi protocol. This contract must implement a standard interface that exposes a "convert" functionality or operation. This reflects the fact that, in general, any DeFi operation, e.g. swapping, lending, or staking, can be thought of as a process of converting one asset, called the input asset, into another asset, called the output asset. Here are a few examples: * When swapping ETH for USDC on Uniswap, ETH (input asset) is converted to USDC (output asset). * When supplying DAI on the AAVE market to earn interest, DAI (input asset) is essentially being converted to aDAI (output asset), the interest-bearing tokens given back in return. * Staking ETH (input asset) on Lido results in conversion to stETH (output asset), holding which represents your stake plus rewards. The Core can perform arbitrary DeFi interactions by communicating with the target protocol via its proxy contract. In its convert operation, this proxy is supposed to perform the necessary operations by calling the target it is the proxy for, eventually returning any output to the Core. This operation includes a transact operation as described above. It is a specific kind of transaction where the V_out values (for each j_i) are set to non-zero values so that they are sent as input to the proxy. After processing some input (𝐣, 𝐯) with its specific target protocol, the proxy is expected to return the output assets (𝐣^', 𝐯^'): 𝙲𝚘𝚗𝚟𝚎𝚛𝚝 : (𝔹^160^n, 𝔹^256^n) → (𝔹^160^m, 𝔹^256^m) ; (𝐣, 𝐯) ↦ (𝐣^', 𝐯^') Later, a note commitment is calculated from (j^', v^') for each of the m outputs and a given stealth address passed as a transaction parameter, and inserted into 𝒯. The flexibility comes from the fact that the implementation of 𝙲𝚘𝚗𝚟𝚎𝚛𝚝 is left to the developer integrating the target protocol. §.§ Involuntary De-anonymization AML through MPC and ZKP zkFi aims to implement involuntary selective disclosure along the lines of the <cit.> paper, which will be an industry-standard AML practice. This process will be multi-party through MPC cryptography and performed only when a request is received from regulators. Let t ∈𝔹^* be sensitive raw transaction data and E_K be an encryption function that encrypts with some public key K. The encrypted data E_K(t) is also stored on-chain during a transaction. The correctness of the encryption is verified with an on-chain verifier: 𝚅𝚎𝚛𝚒𝚏𝚢_E: 𝔹^* →{0, 1} ; (d_v, π, E_K(t), K) ↦ b Ideally, E_K is efficient to evaluate inside a ZK circuit. Verification is required to prevent bad actors from posting garbage data. The secret decryption key is fragmented into multiple secrets, k_1, k_2, ...,k_n, and distributed among n unique parties called wardens. None of the wardens can decrypt using only its single secret. At least m of the n wardens must come to an agreement and share their secrets to compute the decryption function D_K over the secrets and the ciphertext using the MPC procedure: D_K(E_K(t), (k_1, k_2, ...,k_m)) = t § CRYPTOGRAPHIC PRIMITIVES §.§ Hash Function A hash function is a mathematical function h that takes an arbitrary-length input m ∈𝔹^* and outputs a fixed-length output x ∈𝔹^n, often called the hash, hash value, or digest: h : 𝔹^* →𝔹^n ; m ↦ x A cryptographically secure hash function has the following properties: * Pre-image resistance Given a hash value, it must be difficult to find the input that produced it.
* Second pre-image resistance Given an input and its hash value, it must be difficult to find another input that produces the same hash. * Collision resistance It must be difficult to find two inputs that map to the same hash value. Specifically, for its operations, zkFi extensively uses a hash function called Poseidon <cit.>, denoted as H: H : 𝔹^* →𝔹^256 §.§ Elliptic Curve Elliptic curves are mathematical structures defined over some finite field. They possess some interesting properties that make them very useful in cryptography. For instance, it is possible to define the discrete logarithm problem on elliptic curves: given a generator point G and a point P on the curve, it is computationally hard to determine a scalar s such that P = s * G, where * is the scalar multiplication defined in the group. zkFi employs a specific elliptic curve for its ZK operations called Baby JubJub, as defined in <cit.>. §.§ Digital Signature A digital signature σ is a verifiable piece of data produced by signing a message m with a private key s through some chosen signature scheme or function: 𝚂𝚒𝚐𝚗 : (𝔹^*, 𝔹^k) →𝔹^n ; (m, s) ↦σ The signature scheme can later verify that the signature σ was produced by an entity who knows m as well as the private key s: 𝚅𝚎𝚛𝚒𝚏𝚢 : (𝔹^n, 𝔹^j) →{0, 1} ; (σ, P) ↦ b where P is the public key corresponding to s and b is a single bit representing the result of verification, true or false. A cryptographically secure signature scheme must have the following properties: * Authenticity A valid signature implies that the message was deliberately signed by the signer only. * Unforgeability Only the signer can produce a valid signature for the associated message. * Non-reusability The signature of a message cannot be used on another message. * Non-repudiation The signer of the message cannot deny having signed the message with a valid signature. * Integrity Ensures that the contents of the message are not altered. zkFi utilizes the EdDSA <cit.> signature scheme for signing all private transactions and verifying those signatures. The signing function is defined as: 𝙴𝚍𝙳𝚂𝙰_𝚂𝚒𝚐𝚗 : (𝔹^256, 𝔹^256) →𝔹^768 ; (m, s) ↦σ and verification as: 𝙴𝚍𝙳𝚂𝙰_𝚅𝚎𝚛𝚒𝚏𝚢 : (𝔹^256, 𝔹^512, 𝔹^768) →{0, 1} ; (m, S, σ) ↦ b where b is a single bit representing the result of verification, true or false. §.§ Zero Knowledge Proofs A zero-knowledge proof (ZKP) is a cryptographic technique that allows one party, a prover, to prove to another party, a verifier, that it knows a secret without actually revealing the secret, by following a set of complex mathematical operations. Although the origins of ZKPs can be traced back to the early 1980s, when a group of researchers, including Shafi Goldwasser, Silvio Micali, and Charles Rackoff, published a paper <cit.> that introduced the primitive concept to the world, it lacked practicality at the time. However, later developments addressed the problems, leading to usable implementations in various systems, especially with the advent of blockchains. In order to be a ZKP, a protocol must satisfy three properties: * Completeness: If the statement is true and both the prover and verifier follow the protocol, then the verifier will accept the proof. * Soundness: If the statement is false, no prover can convince the verifier that it is true with any significant probability. * Zero Knowledge: If a statement is true, the verifier learns nothing other than the fact that the statement is true. One particularly efficient type of ZKP is the zkSNARK.
A SNARK (Succinct Non-Interactive Argument of Knowledge) defines a practical proof system where the proof is succinct: it can be verified in a short time and is of small size. The system is non-interactive, so the prover and verifier do not have to interact over multiple rounds. Knowledge refers to the fact that the statement is true and the prover knows a secret, also called the "witness", that establishes that fact. If a SNARK allows for proof verification without revealing the witness, it becomes a zkSNARK. Generating a zkSNARK proof is a multi-step process that involves breaking the logic down into primitive operations like addition and multiplication, creating an Arithmetic Circuit consisting of gates and wires. This circuit is then converted to an R1CS (Rank-1 Constraint System), which constrains the circuit to verify, for instance, that values are being calculated correctly by the gates. The next step converts these constraints in the circuit to a Quadratic Arithmetic Program (QAP). The QAP represents these constraints with polynomials rather than numbers. Afterward, together with Elliptic Curve Cryptography (ECC) and the QAP, a zkSNARK proof is formed. §.§.§ Trusted Setup Ceremony A trusted setup ceremony is a cryptographic procedure in which one or more parties contribute one or more secret values s_i into the trusted setup system to eventually generate some piece of data that can be used to run cryptographic protocols. Once this data is generated, the secrets s_i are considered toxic waste and must be destroyed, because possession of them makes it possible to rig the system; in the context of ZK, for example, they could be used to generate proofs that are false positives. There are multiple kinds of trusted setup procedures. The original ZCash ceremony in 2016 <cit.> was one of the earliest trusted setups used in a major protocol. One trusted setup of particular interest is the powers-of-tau setup, which is multi-participant and is commonly used in protocols. To get an idea of how multi-participant setups work, consider a chain of contributions with secrets s_1, s_2, ..., s_n. To be a bit more precise, the secrets are used in elliptic curve operations for security. An input s_i is used to generate successive points on the curve given the generator G by the operation G*s_i. The result of the contribution, G*s_i, is fed to the next step. The next participant cannot determine the previous contributor's secret because doing so would require solving a discrete logarithm. A more detailed discussion can be found at <cit.>. §.§.§ Groth16 Groth16 is a SNARK proving system proposed by Jens Groth <cit.> in 2016. It is a non-interactive proof system that is commonly used in privacy protocols and is fast with small proofs. The prover complexity is O(n_c), where n_c is the number of rank-1 constraints. Groth16 requires a trusted setup to generate the proving and verifying keys. The setup is required to make the proofs succinct and non-interactive. §.§ Merkle Tree A Merkle tree is a data structure that is used to store and verify the integrity of data. It is a binary tree where each node has two children. The leaves of the tree are the data blocks that are stored, and the internal nodes of the tree are the hashes of the children below them. Consequently, the root of the tree is the culmination of all data blocks in the tree. This means that if any data block at a leaf node is changed or corrupted, the change propagates to the root, which changes too. The root hash of the Merkle tree can be compared to a reference to verify the integrity of the data.
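As a concrete illustration, a minimal sketch of building a Merkle root and checking an inclusion proof is given below, again with SHA-256 in place of Poseidon and a power-of-two number of leaves; in zkFi the same membership check is carried out inside the ZK circuit rather than in the clear.

```python
# Illustrative sketch only: SHA-256 stands in for Poseidon, no padding logic.
import hashlib

def H(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    proof, level = [], list(leaves)
    while len(level) > 1:
        sibling = index ^ 1                              # sibling within the current level
        proof.append((level[sibling], index % 2 == 0))   # (sibling hash, our node is left?)
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = leaf
    for sibling, node_is_left in proof:
        node = H(node, sibling) if node_is_left else H(sibling, node)
    return node == root

leaves = [hashlib.sha256(bytes([i])).digest() for i in range(8)]   # e.g. note commitments
root = merkle_root(leaves)
print(verify(leaves[3], merkle_proof(leaves, 3), root))            # True
```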
This structure makes Merkle trees very efficient. A Merkle tree is maintained in zkFi to record user assets. User-deposited assets are represented in the form of a hash called a commitment that gets stored in the Merkle tree structure. During withdrawal, the same user submits a proof, a.k.a. a Merkle proof (among others), of inclusion of the same commitment hash in the tree, together with a proof of knowledge of the hash's pre-image, without revealing any traceable information like the index of the commitment in the tree or the constituents of the pre-image. §.§ Secure Multi-Party Computation (SMPC) SMPC, sometimes simply referred to as MPC, is a cryptographic protocol that allows multiple parties to jointly compute a function on their private inputs without revealing those inputs to each other. The idea was initially introduced by Andrew Yao in the early 1980s in a research paper <cit.>. The general notion of MPC is that n parties jointly compute a function f over their secrets s_1, s_2, ..., s_n, i.e. compute f(s_1, s_2, ..., s_n), and do not reveal their individual secrets s_i to each other. Yao introduced a toy example, "Yao's Millionaire Problem": two millionaires want to determine who is richer, but they do not want to reveal any information about their wealth to each other beyond that. If x_1 and x_2 denote the numerical values of their wealth, the goal is to calculate the result of x_1 ≤ x_2 with x_1 and x_2 being private inputs of each party. MPC can be applied to multiple practical problems as well, including: * Secret Sharing Secret sharing can utilize an MPC protocol in which a secret is split among n parties and only k (1 < k ≤ n) of them can reconstruct that secret. * Threshold Cryptography MPC can be used in threshold cryptography to allow multiple parties to jointly decrypt a ciphertext. A single party cannot decrypt the secret alone. More studies about MPC and its applications can be found at <cit.>. § CONCLUSION For web3 to reach mass adoption, privacy is an unquestionable facet. While zero-knowledge technology has solved the privacy problem, earlier solutions have clearly indicated that strong compliance to prevent illicit use cannot be ignored at all. Furthermore, to encourage protocols and developers to build privacy into their products, infrastructure solutions must be present that are easy to integrate and build on top of. In this paper, we demonstrated that an infrastructure-based privacy solution with built-in compliance is possible. We proposed a middleware solution through an SDK package that is easy to integrate and abstracts away solutions to privacy and compliance without sacrificing developer experience. Consequently, protocols and developers gain the freedom to innovate and focus on solving their core problem instead of worrying about user privacy or compliance. a16z Joseph Burleson, Michele Korver, and Dan Boneh. Privacy-Protecting Regulatory Solutions Using Zero-Knowledge Proofs. Available at <https://api.a16zcrypto.com/wp-content/uploads/2022/11/ZKPs-and-Regulatory-Compliant-Privacy.pdf> trm-labs TRM Labs, Inc. U.S. Treasury Sanctions Widely Used Crypto Mixer Tornado Cash. <https://www.trmlabs.com/post/u-s-treasury-sanctions-widely-used-crypto-mixer-tornado-cash> Chainalysis Chainalysis Team. Crypto Mixer Usage Reaches All-time Highs in 2022, With Nation State Actors and Cybercriminals Contributing Significant Volume. <https://blog.chainalysis.com/reports/crypto-mixer-criminal-volume-2022/> poseidon Lorenzo Grassi, Dmitry Khovratovich, Christian Rechberger, Arnab Roy, and Markus Schofnegger.
POSEIDON: A New Hash Function for Zero-Knowledge Proof Systems. <https://eprint.iacr.org/2019/458.pdf> eip-2494 ERC-2494: Baby Jubjub Elliptic Curve. <https://eips.ethereum.org/EIPS/eip-2494> eip-4337 ERC-4337: Account Abstraction Using Alt Mempool. <https://eips.ethereum.org/EIPS/eip-4337> eip-5564 ERC-5564: Stealth Addresses. <https://eips.ethereum.org/EIPS/eip-5564> eddsa S. Josefsson, I. Liusvaara. Edwards-Curve Digital Signature Algorithm (EdDSA). <https://datatracker.ietf.org/doc/html/rfc8032> smpc A. C. Yao. Protocols for secure computations. In 23rd Annual Symposium on Foundations of Computer Science, pages 160–164, Los Alamitos, CA, USA, Nov 1982. IEEE Computer Society. zcash-ceremony Morgan E. Peck. The Crazy Security Behind the Birth of Zcash, the Inside Story. <https://spectrum.ieee.org/the-crazy-security-behind-the-birth-of-zcash> trusted-setup Vitalik Buterin. How do trusted setups work? <https://vitalik.ca/general/2022/03/14/trustedsetup.html> groth16 Jens Groth. On the Size of Pairing-based Non-interactive Arguments. <https://eprint.iacr.org/2016/260.pdf> smpc-intro David Evans, Vladimir Kolesnikov, and Mike Rosulek. A Pragmatic Introduction to Secure Multi-Party Computation. <https://www.cs.virginia.edu/~evans/pragmaticmpc/pragmaticmpc.pdf> zkp Shafi Goldwasser, Silvio Micali, and Charles Rackoff. The Knowledge Complexity Of Interactive Proof Systems. <https://people.csail.mit.edu/silvio/Selected >
http://arxiv.org/abs/2307.01954v1
20230704231531
FEMDA: Une méthode de classification robuste et flexible
[ "Pierre Houdouin", "Matthieu Jonckheere", "Frederic Pascal" ]
stat.ML
[ "stat.ML", "cs.LG" ]
§ INTRODUCTION Discriminant analysis is a widely used tool for classification tasks. The historical method <cit.> assumes that the data come from Gaussian distributions, and the decision rule consists in choosing the cluster that maximizes the likelihood of the data point. In the early 1980s, <cit.> and <cit.> studied the impact of contamination and mislabelling on performance and concluded that the method is highly sensitive to them. To address this problem, <cit.> suggests the use of M-estimators, which are robust to noise. More recently, <cit.> proposed modelling the data with a more flexible multivariate Student distribution. In 2015, <cit.> even generalized the approach to elliptically symmetric (ES) distributions. This new method, called Generalized QDA (GQDA), relies on the estimation of a threshold whose value varies with the shape of the distribution. Finally, <cit.> complemented GQDA with the use of robust estimators, obtaining RGQDA. All these methods assume that the points of a given cluster come from the same distribution, an assumption that is not always valid. <cit.>, inspired by <cit.>, proposes an alternative method that makes no prior assumption on the distributions and allows each point to be drawn from its own elliptically symmetric distribution. The points of a given cluster are not necessarily identically distributed, only drawn independently. The counterpart of such flexibility lies in the characteristics of the clusters: within a given cluster, the points only share the same mean and the same scatter matrix. In this paper, we study the robustness of this new method to scale changes in the data. The model is presented in Section 2, Section 3 contains experiments on simulated data, Section 4 the experiments on real data, and conclusions and remarks are given in Section 5. § FEMDA: FLEXIBLE EM-INSPIRED DISCRIMINANT ANALYSIS Statistical model: We assume that each data point 𝐱_i ∈ℝ^m, i ∈ [1,n], is drawn from an ES distribution that does not depend on the cluster. The mean and the scatter matrix depend on the cluster to which the point belongs, while the scale factor τ_i,k may also depend on the observation. The data point 𝐱_i of cluster 𝒞_k, k ∈ [1,K], is drawn according to the following probability density: f(𝐱_i) = A_i| Σ_k |^-1/2τ_i,k^-m/2 g_i( (𝐱_i-μ_k)^T Σ_k^-1 (𝐱_i-μ_k)/τ_i,k) Expression of the log-likelihood and of the maximum likelihood estimators: Let 𝐱_1,...,𝐱_n_k be independent data points from cluster 𝒞_k; the log-likelihood of the sample can be written as: l(𝐱_1,...,𝐱_n_k) = ∑_i=1^n_klog( A_i | Σ_k |^-1/2 t_i,k^-m/2 s_i,k^m/2 g_i(s_i,k) ) where t_i,k = (𝐱_i-μ_k)^T Σ_k^-1(𝐱_i-μ_k) and s_i,k=t_i,k/τ_i,k. Maximizing the term of equation (<ref>) with respect to τ_i,k, with μ_k and Σ_k fixed, leads to τ̂_i,k = (𝐱_i-μ_k)^T Σ_k^-1 (𝐱_i-μ_k)/max_t ∈ℝ^+{t^m/2 g_i(t) }. The assumptions on g_i ensure the strict positivity of the denominator.
After replacing τ_i,k with τ̂_i,k in equation (<ref>), we obtain: l(𝐱_i) = Ã_i - 1/2log(| Σ_k | ( (𝐱_i-μ_k)^T Σ_k^-1(𝐱_i-μ_k) )^m) where Ã_i = log(A_i) + log(max_t ∈ℝ^+{t^m/2 g_i(t) }). At this stage, one sees that the flexibility in the choice of the scale of the covariance matrix allows us to reduce the impact of the generating function g_i on the likelihood to a multiplicative constant independent of k. Finally, using the maximum likelihood estimator yields the following robust estimators for the mean and the scatter matrix: μ̂_k = ∑_i=1^n_k w_i,k𝐱_i / ∑_i=1^n_k w_i,k, Σ̂_k = m/n_k∑_i=1^n_k w_i,k (𝐱_i-μ̂_k)(𝐱_i-μ̂_k)^T where w_i,k = 1/t_i,k. It is interesting to note that μ̂_k is insensitive to the scale of Σ̂_k. Consequently, if Σ̂_k is a solution of the fixed-point equation, λΣ̂_k is one as well. The estimators obtained are similar to robust M-estimators, except that the weights w_i,k are proportional to the inverse of the squared Mahalanobis distance. The convergence of these two coupled fixed-point equations was analysed in <cit.>. Classification rule: Thanks to these two estimators, the training data are used to estimate the unknown parameters. The number of clusters is assumed to be known. It is now possible to derive the classification rule. We have the following proposition: The decision rule for Flexible EM-Inspired Discriminant Analysis (FEMDA) is: 𝐱_i ∈𝒞_k ⟺ (∀ j ≠ k, Δ_jk^2(𝐱_i) ≥1/mλ_jk) with Δ_jk^2(𝐱_i) = log( (𝐱_i- μ_j)^T Σ_j^-1(𝐱_i- μ_j) / (𝐱_i-μ_k)^T Σ_k^-1(𝐱_i-μ_k) ) and λ_jk = log(| Σ_k | / | Σ_j |). Proof: The proof relies on the fact that the log-likelihood depends on k only through the term 1/mlog(|Σ_k |) + log((𝐱_i-μ_k)^T Σ_k^-1(𝐱_i-μ_k)) This decision rule is similar to the robust version of QDA. The difference is that we compare the logarithms of the squared Mahalanobis distances rather than the squared Mahalanobis distances directly. This also makes our decision rule insensitive to the scale of Σ. § EXPERIMENTS ON SIMULATED DATA FEMDA, the proposed method, is compared with the following methods: classical QDA, modelling the data with Gaussian distributions; QDA modelling the data with Student distributions (t-QDA, <cit.>); and GQDA and RGQDA <cit.>, <cit.>. Simulation parameters: The cluster means are drawn at random on the unit m-sphere. The covariance matrices are generated with random eigenvalues and a random orthogonal matrix. We choose m=10, K=5, N_train = 5000, N_test = 20000 and τ∼𝒰(1, m). Scenarios considered: The points are generated using two different families of ES distributions. Family of distributions | Stochastic representation: generalized Gaussian | μ + Γ(m/(2β), 2)^1/(2β)Σ^1/2𝒰( 𝒮(0,1) ) ; t-distribution | μ + 𝒩(0, Σ) √(1/Γ(ν/2, 2/ν)). Here 𝒰( 𝒮(0,1) ) denotes a uniform distribution on the unit m-sphere. The shape parameter β (resp. ν) is drawn uniformly in [0.25, 10] (resp. [1, 10]) for the generalized Gaussians (resp. for the t-distributions). The data generation scenarios are defined as follows: 0.6GG - 0.4T corresponds to 60% of the points of each cluster being generated with a generalized Gaussian and 40% with a t-distribution.
The following colour code is used for the parameter generation: 0.6GG - 0.4T in one colour means that the same β and ν are used for all the points of a given cluster, while in the other colour it means that a different parameter is used for each point of each cluster. Results For each scenario in the first column, Table <ref> presents the differences in correct classification rate between the best method and the others: In Table <ref>, we note that GQDA and QDA achieve weaker performance than FEMDA and t-QDA. t-QDA is the best method in most scenarios and slightly outperforms FEMDA, at the cost of estimating more parameters and therefore of being a slower method. <cit.> studied in more detail the convergence speeds of each estimator and decision rule. In Table <ref>, the data are contaminated with a scale change: a fraction of the data undergoes the transformation x ⟵μ + λ (x - μ). We then observe that FEMDA is the method most robust to noise; t-QDA is outperformed in almost all scenarios when the contamination reaches 25% with λ=4, and in all of them with λ=8. The standard deviation across several simulations is small, of the order of 0.05%. § EXPERIMENTS ON REAL DATA The datasets come from the UCI machine learning repository <cit.>. Three datasets are used: Ionosphere with 351 data points of dimension 34, Ecoli with 336 data points of dimension 8, and Breast cancer with 699 data points of dimension 10. §.§ Classification results To obtain these results, 100 simulations were carried out, and after every 10 successive simulations the train and test sets are redrawn (70% train set and 30% test set). Figures <ref> and <ref> show that GQDA outperforms the other methods by at least 1%, followed by FEMDA and then by t-QDA on the Ionosphere dataset. The gaps are tighter on the Ecoli dataset. In Figure <ref>, we note that FEMDA becomes the best method with an accuracy close to 95%, closely followed by t-QDA and then GQDA. The variance of the results is rather small. To conclude on these three datasets, the performance of FEMDA is slightly better than that of t-QDA, and often below that of GQDA, which is however more variable. §.§ Results after scale changes We now contaminate the data in a manner similar to what was done for the simulated data. We choose λ=5. We then plot the evolution of the accuracy of the three robust methods as a function of the contamination rate. Figure <ref> shows that even with very high noise rates, t-QDA and FEMDA retain very good results on Spambase and Ionosphere. In contrast, the performance of RGQDA drops much more quickly as the contamination rate increases. For the Ecoli dataset, the behaviour is much more uniform: all three methods see their performance decrease, especially beyond a contamination rate of 40%. FEMDA nevertheless shows slightly better resilience at high contamination rates, but its performance remains very close to that of t-QDA. The robustness of FEMDA to scale changes in the training data can be explained by the expression of its estimators, which are intrinsically insensitive to scale changes.
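To illustrate how lightweight these estimators are in practice, here is a minimal numpy sketch of the coupled fixed-point iteration and of the resulting decision rule. It is only indicative: the initialization, stopping criterion, and numerical safeguards used by the authors are not specified here.

```python
# Illustrative sketch of the FEMDA estimators and decision rule (plain numpy).
import numpy as np

def fit_cluster(X, n_iter=50):
    """Estimate (mu_k, Sigma_k) for one cluster of n_k points in dimension m."""
    n, m = X.shape
    mu, Sigma = X.mean(axis=0), np.cov(X.T)
    for _ in range(n_iter):
        diff = X - mu
        t = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)  # squared Mahalanobis
        w = 1.0 / t                                                     # w_ik = 1 / t_ik
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu
        Sigma = (m / n) * np.einsum("i,ij,ik->jk", w, diff, diff)
    return mu, Sigma

def classify(x, params):
    """Assign x to the cluster minimizing (1/m) log|Sigma_k| + log t_k(x)."""
    m = len(x)
    scores = [np.log(np.linalg.det(S)) / m + np.log((x - mu) @ np.linalg.inv(S) @ (x - mu))
              for mu, S in params]
    return int(np.argmin(scores))

rng = np.random.default_rng(0)
X0, X1 = rng.normal(0.0, 1.0, (200, 5)), rng.normal(3.0, 1.0, (200, 5))
params = [fit_cluster(X0), fit_cluster(X1)]
print(classify(np.full(5, 3.0), params))   # expected: 1
```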
Finally, the difference in sensitivity to increasing contamination on Ecoli can be explained by the low dimension of the data compared to the other datasets. Indeed, in high dimension, the direction of the covariance matrix is much more discriminative for separating the data. § CONCLUSION In this paper, we presented a new discriminant analysis method that is robust to scale changes in the training data. It outperforms all state-of-the-art methods in the presence of contaminated data and behaves similarly to t-QDA without noise, while being faster. FEMDA is therefore a fast method that is highly resilient to noisy data. In this new paradigm, the points of a cluster no longer share the same covariance matrix, but only the same scatter matrix. Allowing each point to have its own scale factor brings a gain in flexibility that makes it possible to handle contaminated datasets that are not necessarily identically distributed. FEMDA can therefore be regarded as an improvement over t-QDA: similar performance without contamination, but more robust and faster. 1 Carl J. Huberty. Discriminant Analysis. Review of Educational Research, 1975. 2 Peter A. Lachenbruch. Discriminant Analysis When the Initial Samples Are Misclassified. Technometrics, 1996. 3 P. A. Lachenbruch and M. Goldstein. Discriminant Analysis. Biometrics, 1979. 4 Huber Peter J. Robust covariances. Statistical decision theory and related topics, 1977. 5 Andrews Jeffrey L and McNicholas Paul D and Subedi Sanjeena. Model-based classification via mixtures of multivariate t-distributions. Computational Statistics & Data Analysis, 2011. 6 Bose Smarajit and Pal Amita and SahaRay Rita and Nayak Jitadeepa. Generalized quadratic discriminant analysis. Pattern Recognition, 2015. 7 Ghosh Abhik and SahaRay Rita and Chakrabarty Sayan and Bhadra Sayan. Robust generalised quadratic discriminant analysis. Pattern Recognition, 2021. 8 Houdouin Pierre and Pascal Frédéric and Jonckheere Matthieu and Wang Andrew. Robust classification with flexible discriminant analysis in heterogeneous data. https://arxiv.org/abs/2201.02967, 2022. 9 Roizman Violeta and Jonckheere Matthieu and Pascal Frédéric. A flexible EM-like clustering algorithm for noisy data. arXiv preprint arXiv:1907.01660, 2019. 10 Dua Dheeru and Graff Casey. UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences, 2017.
http://arxiv.org/abs/2307.03235v1
20230706180050
Multi-matrix correlators and localization
[ "Adolfo Holguin", "Shannon Wang" ]
hep-th
[ "hep-th" ]
§ INTRODUCTION Large operators in large N gauge theories are an important subject of study with relevance to nuclear physics, theories of quantum gravity and strings. Although there has been enormous success in computing the spectrum of anomalous dimensions of light operators in models such as maximally supersymmetric Yang-Mills theory in the planar limit, very little is known about how to tackle generic operators whose dimensions can scale with a power of N. This is an interesting problem for holography <cit.> and for understanding the structure of conformal field theories more generally <cit.>. One of the difficulties one faces when trying to address these types of problems is that the intuitions from the planar limit are often unjustified for large operators; one must sum over both planar and non-planar diagrams and it is not a priori clear which diagrams dominate in the large N limit. A promising approach is to replace single and multi-trace operators with a different basis that is better behaved at finite N <cit.>, and to perform a systematic expansion around protected states in the large N limit. In the case of maximally supersymmetric Yang-Mills theory, this has been implemented at finite N <cit.>. Even though the expressions found through these techniques at finite N are quite explicit, it is usually difficult to take the large N limit of such quantities. More recently, there have been works showing that certain generating functions can be used to perform computations in the free-field theory limit <cit.>. This technique has been successfully implemented in the computation of three-point correlators involving large operators made out of a single matrix field <cit.>, as in the half-BPS sector of 𝒩=4 SYM <cit.>, where the dual gravitational description is explicitly realized from the gauge theory. An explicit mapping between BPS states made out of more than one matrix and asymptotically AdS_5× S^5 geometries is still lacking, though a compelling description in terms of bubbling geometries seems to exist <cit.>. The study of generating functions for multi-matrix correlators was outlined in <cit.> for certain classes of operators, and more generally in <cit.>. Our goal is to elucidate some of the details regarding the generating functions of 1/4- and 1/8-BPS operators in 𝒩=4 SYM. We do this by proposing a fixed-point formula for the overlap of generic coherent state generating functions; this gives us an integral formula that generalizes the Harish-Chandra-Itzykson-Zuber (HCIZ) formula to multiple pairs of matrices. Integrals of this type appear naturally in the study of multi-matrix models of commuting random matrices. This paper is structured as follows. In section <ref>, we review the generating function techniques, focusing on the case of 1/4 BPS operators in 𝒩=4 SYM. We argue that the form of these operators is protected, so we can restrict to eigenstates of the one-loop dilatation operator.
We then evaluate the norm of the generating function for the U(2) theory by explicit integration to motivate our fixed point formula for general N. Finally, we give a proof of the extension of the HCIZ formula to the multiple-matrix model using the heat kernel method as outlined in <cit.>; we will briefly discuss our attempts to prove this formula using the character expansion method. In section <ref>, we connect our results to the construction of restricted Schur polynomials and outline how to generalize to operators associated with Young diagrams with an arbitrary number of rows or columns. We conclude with some future directions. § MULTI-MATRIX GENERATING FUNCTIONS We are interested in studying operators in gauge theories that are made out of more than one matrix-valued scalar field. In particular, we will work with 1/4-BPS operators in U(N) 𝒩=4 SYM on the cylinder ℝ× S^3. At weak coupling, these operators can be built out of symmetrized products of two of the three complex scalar fields of the theory, X and Y. Generalizing to more than two matrices is straightforward. This class of operators transforms in the [p,q,p] representations of the SU(4)_R symmetry, and the operators are generically of multi-trace form. We will concentrate on scalar primary states at an equal time slice for simplicity. Unlike 1/2-BPS operators, which can be built explicitly in the free theory, 1/4-BPS operators of the interacting theory are different from those of the free theory. The lifting of states due to nonzero gauge coupling can be treated perturbatively, and the loop corrections to the dilatation operator annihilate operators that are made out of symmetric products of X and Y. This problem was studied in detail for small operators in <cit.>, but for generic large operators, explicit constructions in terms of multi-traces are cumbersome. An alternative expansion in terms of characters was introduced in <cit.>, which the authors call the restricted Schur polynomial basis. This basis is convenient for dealing with the mixing between the different trace structures since it diagonalizes the matrix of two-point functions for all values of N. §.§ Generating 1/4 BPS States Yet another way of generating 1/4-BPS states can be found by studying operators of the form: |Λ_X, Λ_Y⟩= 1/Vol[U(N)]∫ dU exp([U X U^†Λ_X +UY U^†Λ_Y]) |0⟩. If we insist that the coherent state parameters Λ_X and Λ_Y commute, |Λ_X, Λ_Y⟩ is annihilated by the one-loop dilatation operator; it was shown in <cit.> that this persists to two-loop order. In <cit.>, it was conjectured that the space of BPS states in 𝒩=4 SYM is given by the kernel of the one-loop dilatation operator at all values of the coupling; we will take this as a working assumption and work with the set of states annihilated by the Beisert one-loop dilatation operator: D̂_2^SU(2)= g^2 [[X,Y][∂_X, ∂_Y]]. Because the states (<ref>) are coherent states of X̅, Y̅ <cit.>, they form an overcomplete basis of states for any value of N. This has many computational advantages, mostly due to the fact that taking the large N limit is very straightforward, but translating back into a complete orthogonal basis of operators can be complicated. This may be solved by computing the norm of the coherent states. By exploiting the Campbell-Hausdorff formula, we arrive at an integral of the form: ⟨Λ̅_X, Λ̅_Y|Λ_X, Λ_Y⟩ = 1/Vol[U(N)]∫ dU exp([U Λ̅_X U^†Λ_X +UΛ̅_Y U^†Λ_Y]).
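For small N, this overlap can be estimated directly by Monte Carlo integration over the unitary group, which provides a useful sanity check on the closed-form expressions derived below. The sketch assumes numpy and scipy are available and normalizes the Haar measure to unit volume, so the overall constant 𝒞_N appearing below is absorbed into the normalization convention.

```python
# Rough numerical cross-check of the unitary-group overlap integral above.
import numpy as np
from scipy.stats import unitary_group

def overlap_mc(Lx_bar, Lx, Ly_bar, Ly, n_samples=50_000, seed=0):
    """Haar average of exp(tr[U Lx_bar U^dag Lx + U Ly_bar U^dag Ly])."""
    N = Lx.shape[0]
    Us = unitary_group.rvs(N, size=n_samples, random_state=seed)
    vals = [np.exp(np.trace(U @ Lx_bar @ U.conj().T @ Lx
                            + U @ Ly_bar @ U.conj().T @ Ly).real) for U in Us]
    return float(np.mean(vals))

# N = 2 example with commuting (diagonal) coherent-state parameters,
# compared against the closed-form N = 2 expression derived below.
a_bar, a = np.array([0.2, 0.5]), np.array([0.3, -0.1])
b_bar, b = np.array([-0.2, 0.3]), np.array([0.4, 0.1])
mc = overlap_mc(np.diag(a_bar), np.diag(a), np.diag(b_bar), np.diag(b))
D = (a[0] - a[1]) * (a_bar[0] - a_bar[1]) + (b[0] - b[1]) * (b_bar[0] - b_bar[1])
closed = (np.exp(a @ a_bar + b @ b_bar)
          - np.exp(a[0]*a_bar[1] + a[1]*a_bar[0] + b[0]*b_bar[1] + b[1]*b_bar[0])) / D
print(mc, closed)   # agree up to Monte Carlo error with this normalization
```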
Since we can in principle expand (<ref>) in terms of an orthonormal basis, we may use this overlap to determine the coefficients relating the multi-trace basis of operators to an orthogonal basis by expanding in a series and matching the coefficients as done in <cit.>. The precise tool relating the multi-trace basis operators and the character expansion in this case is the Weingarten calculus <cit.>; an example illustrating this technique can be found in <cit.>. The main obstacle we face is evaluating the integral (<ref>) for generic coherent state parameters. To our knowledge, these types of integrals have not been studied before, and a closed form expression for them is needed. Our main goal will be to evaluate this class of integrals for any value of N. Although we only explicitly study the case of U(N) integrals, the methods should apply generally and should generalize to SO(N) and Sp(N) groups as well as to quivers. These types of integrals are also a natural object to study in the context of matrix models, since they arise in the study of multi-matrix models of commuting matrices. §.§ The Four-Matrix Model in SU(2) Before proceeding to the case of general N, we will study the following integral I_2= ∫_SU(2) dU e^[U A U^†A̅+U B U^†B̅] for commuting matrices A,B, A̅, B̅. We will first approximate I_2 by a saddle point approximation; the critical points of the function in the exponential are given by the solutions to the equations [A, U^†A̅ U]+ [B, U^†B̅ U]=0. For generic enough matrices, this is only satisfied if each of the two terms vanishes individually [A, U^†A̅ U]=[B, U^†B̅ U]=0. The only problematic cases occur when a subset of the eigenvalues of B is a permutation of a subset of eigenvalues of -A. From here on, we assume that the eigenvalues are generic enough that this does not happen. This means that, generically, the saddle points are labelled by permutation matrices U_π. We are then left with a Gaussian integral around each of the saddle points, which can be evaluated easily; this results in a "one-loop determinant" factor given by: D_2(a,a̅, b, b̅)=(a_1 - a_2)(a̅_1 - a̅_2) + (b_1 - b_2)(b̅_1 - b̅_2) This gives an approximate value for the integral (up to a convention dependent normalization factor): I_2≃e^a_1 a̅_1+a_2 a̅_2+b_1 b̅_1+b_2 b̅_2-e^a_1 a̅_2+a_2 a̅_1+b_1 b̅_2+b_2 b̅_1/(a_1 - a_2)(a̅_1 - a̅_2) + (b_1 - b_2)(b̅_1 - b̅_2). At first sight, it is not clear that this approximation is reliable, since there is no large parameter in the exponential. To gain more intuition, we evaluate I_2 through an explicit computation. First, we must parameterize our unitary matrix U; then, we need to compute the Haar measure. We start with the following matrices: A = [ a_1 0; 0 a_2 ], B = [ b_1 0; 0 b_2 ] A̅ = [ a̅_1 0; 0 a̅_2 ], B̅ = [ b̅_1 0; 0 b̅_2 ] We then seek to parametrize our unitary matrix. We know that any arbitrary SU(2) matrix must meet the following conditions: SU(2) = {[ a b; -b^* a^* ]∈ℂ^2× 2 | |a|^2 + |b|^2 = 1} For ease of computation, we choose to parameterize U with Euler angles: U = [ e^-iγ+α/2cosθ/2 -e^iγ-α/2sinθ/2; e^-iγ-α/2sinθ/2 e^iγ+α/2cosθ/2 ] We seek to rewrite the Haar measure dU in terms of J(θ,γ,α)dθ dγ dα, where J(θ,γ,α) is the Jacobian. We may do so by computing the inverse of the unitary matrix and multiplying it by its partial derivatives with respect to the Euler angles. 
We start by finding the inverse of U: U^-1 = [ e^iγ+α/2cosθ/2 e^iγ-α/2sinθ/2; -e^-iγ-α/2sinθ/2 e^-iγ+α/2cosθ/2 ] Then we calculate the partial derivatives with respect to γ, α, and θ and multiply by the inverse. We obtain: U^-1∂ U/∂γ = [ -i/2 0; 0 i/2 ] U^-1∂ U/∂α = [ -i/2cosθ i/2e^iγsinθ; i/2e^-iγsinθ i/2cosθ ] U^-1∂ U/∂θ = [ 0 -1/2e^iγ; 1/2e^-iγ 0 ] We calculate the Jacobian matrix using the following basis ϵ_1 = [ i 0; 0 -i ], ϵ_2 = [ 0 ie^iγ; ie^-iγ 0 ], and ϵ_3 = [ 0 -e^iγ; e^-iγ 0 ]: 𝒥 = [ -1/2 -1/2cosθ 0; 0 1/2sinθ 0; 0 0 1/2 ] The Jacobian J(θ,γ,α) we seek is the determinant of 𝒥: (J) = 1/8|sinθ| We see that it is only dependent on θ. Our integral becomes: I_2 = 1/8∫_0^πdθ∫_0^4πdγ/4π∫_0^2πdα/2π|sinθ|e^[A̅UAU^† + B̅UBU^†] = 1/8∫_0^πdθ|sinθ|e^1/2((a_1 + a_2)(a̅_1 + a̅_2) + (b_1 + b_2)(b̅_1 + b̅_2) + ((a_1 - a_2)(a̅_1 - a̅_2) + (b_1 - b_2)(b̅_1 - b̅_2))cosθ) Our critical points are θ = 0 and θ = π, so we can remove the absolute value bars. Then we evaluate our integral: I_2 = 1/8∫_0^πdθsinθ e^1/2((a_1 + a_2)(a̅_1 + a̅_2) + (b_1 + b_2)(b̅_1 + b̅_2) + ((a_1 - a_2)(a̅_1 - a̅_2) + (b_1 - b_2)(b̅_1 - b̅_2))cosθ) = e^a_1a̅_1 + a_2a̅_2 + b_1b̅_1 + b_2b̅_2 - e^a̅_1a_2 + a_1a̅_2+b̅_1b_2 + b_1b̅_2/4((a_1 - a_2)(a̅_1 - a̅_2) + (b_1 - b_2)(b̅_1 - b̅_2)) This is precisely the same result that the saddle point approximation yields. From the intermediate steps, it is clear that there are never any terms that mix the eigenvalues of A and B; if we set either A=0 or B=0, we immediately recover the HCIZ formula for U(2). §.§ Proof of General Formula Generically, we expect that the following formula holds: I_N = ∫_U(N) dU e^[U A U^†A̅+U B U^†B̅] = 𝒞_N ∑_π∈ S_Nπ×∏_ie^a_i a̅_π(i)+ b_i b̅_π(i)/∏_i≠ j[(a_i-a_j)(a̅_i-a̅_j)+(b_i-b_j)(b̅_i-b̅_j)] = 𝒞_N (e^a_i a̅_j+ b_i b̅_j)/Δ(Λ_A)Δ(Λ_A̅) + Δ(Λ_B)Δ(Λ_B̅), where 𝒞_N is a constant that depends on the normalization for the volume of U(N); a_j, a̅_j, b_j, and b̅_j are respectively the eigenvalues of the matrices A, A̅, B, and B̅; the matrices Λ_A, Λ_A̅, Λ_B, Λ_B̅ are respectively the diagonal matrices of the eigenvalues of A, A̅, B, and B̅; and Δ(Λ_M) is the Vandermonde determinant of matrix Λ_M. The main idea is as follows. The function ϕ(U)= [UΛ_A U^†Λ_A̅ ] can be thought of as a Hamiltonian function generating a U(1)^N action on U(N); the HCIZ integral localizes on the fixed points of this action. The integration is done over a coadjoint orbit 𝒪_Λ_A which has a natural symplectic structure. Alternatively, the integration domain can be reduced to U(N)/U(1)^N, where U(1)^N is the maximal torus commuting with Λ_A. For a generic pair of commuting matrices, the analogous funtion ψ(U)= [UΛ_A U^†Λ_A̅]+ [UΛ_B U^†Λ_B̅] still generates an action of the maximal torus on U(N), although the integration domain does not have a natural interpretation as a coadjoint orbit. Despite of this, one can still formally reduce the integration to the symplectic space U(N)/U(1)^N, with Λ_A,B being treated as elements of the Cartan subalgebra of 𝔲(N). Up to the assumption of non-degeneracy of fixed points, these are the necessary conditions for the Duistermaat-Heckman theorem. We will instead prove the formula using the heat kernel method outlined in <cit.>. The HCIZ integral arises from writing the kernel of a heat equation defined on a space of matrices in terms of eigenvalues. We start by reviewing the two matrix problem, which was studied in <cit.>. 
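Before doing so, we pause to verify the explicit SU(2) result numerically: the reduced θ-integral and the closed form above can be compared directly with a few lines of Python (the eigenvalues below are arbitrary real test values of our own).

```python
# Numerical check of the closed form for I_2: compare the reduced theta-integral
# against (e^{...} - e^{...}) / (4 * [(a1-a2)(ab1-ab2) + (b1-b2)(bb1-bb2)]).
# The eigenvalues are arbitrary test values.
import numpy as np
from scipy.integrate import quad

a1, a2   = 0.7, -0.3
ab1, ab2 = 0.5,  0.1
b1, b2   = -0.4, 0.6
bb1, bb2 = 0.2, -0.5

S = 0.5 * ((a1 + a2) * (ab1 + ab2) + (b1 + b2) * (bb1 + bb2))
D = 0.5 * ((a1 - a2) * (ab1 - ab2) + (b1 - b2) * (bb1 - bb2))

numeric, _ = quad(lambda th: np.sin(th) * np.exp(S + D * np.cos(th)) / 8.0, 0.0, np.pi)

denom  = 4.0 * ((a1 - a2) * (ab1 - ab2) + (b1 - b2) * (bb1 - bb2))
closed = (np.exp(a1*ab1 + a2*ab2 + b1*bb1 + b2*bb2)
          - np.exp(a1*ab2 + a2*ab1 + b1*bb2 + b2*bb1)) / denom

print(numeric, closed)   # both ~ 0.2746 for these values
```

The two numbers agree to quadrature precision, which is a useful sanity check on the measure and on the normalization conventions. We now return to the heat kernel construction.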
We let A and A̅ be two random Hermitian matrices of dimension N; then their joint density is <cit.>: p(A, A̅) = 1/𝒵_Nexp(-[V(A) - V(A̅) + (AA̅)]) where V(M), for a Hermitian matrix M, is a potential term defined as such: V(M) = 1/2 M^2 + ∑_j > 2g_j/N^j-1 M^2j. Here, g is fixed and real. The normalization 𝒵_N is given by the partition function <cit.>: 𝒵_N = ∫ dAdA̅exp([-V(A) - V(A̅) + AA̅]) We note that dAdA̅exp([-V(A) - V(A̅) + AA̅]) only depends on the eigenvalues of A and A̅; integrating over the angular variables yields <cit.>: 𝒵_N = (2π)^N(N-1)/(∏_j=1^N j!)^2∫∏_j=1^N da_jda̅_jΔ^2(Λ_A)Δ^2(Λ_A̅)exp[- V(Λ_A) - V(Λ_A̅)] ×∫ dUexp[(U Λ_A U^†Λ_A̅)], We see that the usual HCIZ integral arises as the Jacobian factor when reducing the matrix integral into an integral over eigenvalues. We now add two more matrices and seek to extend the HCIZ formula to the case of two pairs of commuting matrices. This is the equivalent of finding the joint density of four matrices, in which A and B are Hermitian matrices that commute with each other, and similarly with A̅ and B̅: p(A, A̅, B, B̅) = 1/𝒵exp([ - V(A) - V(A̅) + AA̅ - V(B) - V(B̅) + BB̅]) Then our partition function becomes: 𝒵 = ∫_[A,B]=[A̅,B̅]=0 dAdA̅dBdB̅exp([ - V(A) - V(A̅) + AA̅ - V(B) - V(B̅) + BB̅]) The fact that A and B implies that that they can be diagonalized simultaneously so that the angular variables of the matrices are correlated. A→ U Λ_A U^† B→ U Λ_B U^† Once we perform this change of variables and integrate over the angular variables, the partition function reduces to: 𝒵_N = (2π)^N(N-1)/(∏_j=1^N j!)^2∫∏_j=1^N da_jda̅_jdb_jdb̅_jΔ^2(Λ_A)Δ^2(Λ_A̅)Δ^2(Λ_B)Δ^2(Λ_B̅) ×exp[- V(Λ_A) - V(Λ_A̅) - V(Λ_B) - V(Λ_B̅)]∫ dUexp[(U Λ_A U^†Λ_A̅ + U Λ_B U^†Λ_B̅)] We now write down the heat equation. We start by defining 𝒟_M, a unitary invariant Laplacian operator on Hermitian matrices <cit.>: 𝒟_M ≡∑_i∂/∂ M^2_ii + 1/2∑_i<j[∂^2/(∂ReM_ij)^2 + ∂^2/(∂ImM_ij)^2], where we denote our matrices A, B … as M. The propagator for the diffusion process in the case of two matrices is <cit.>: f(t; A, A̅) = ⟨A|e^-tD_A/2|A̅⟩ = 1/(2π t)^N^2/2exp[-1/2t(A-A̅)^2] In the case of four matrices where B and B̅ are independent of A and A̅, our propagator becomes: f(t; A, A̅, B, B̅) = ⟨A|e^-tD_A/2|A̅⟩⟨B|e^-tD_B/2|B̅⟩ = 1/(2π t)^N^2exp[-1/2t(A-A̅)^2 - 1/2t(B-B̅)^2] Eq. (<ref>) is a solution for t positive of the following heat equation: (∂/∂ t - 1/2D_A - 1/2D_B)f(t;A,A̅,B,B̅) = 0, where D_A and D_B respectively denote the Laplacians for A and B. The solution f is a heat kernel which reduces to two Dirac delta functions with respect to the measures dΛ_A̅ and dΛ_B̅ when we take t→ 0. We let g(t,A) + g(t, B) be the solution to our heat equation for t > 0; at t = 0, g(t, A) coincides with g(A) and g(t, B) with g(B), which are invariant under unitary transformations, allowing us to write: g(t,Λ_A) = C∫ dU∫ dΛ_A̅Δ^2(Λ_A̅)f(t;Λ_A,UΛ_A̅U^†,Λ_B, UΛ_B̅U^†)g(Λ_A̅) g(t,Λ_B) = C∫ dU∫ dΛ_B̅Δ^2(Λ_B̅)f(t;Λ_A,UΛ_A̅U^†,Λ_B, UΛ_B̅U^†)g(Λ_B̅) It is known that integrating over invariant function f(A) ≡ f(UAU^†) yields <cit.>: ∫ dAf(A) = (2π)^N(N-1)/2/∏_j=1^N j!∫∏_j=1^N da_jΔ(a)^2f(Λ_A) Matching our results to the equation above, we find that our constant C must be: C = (2π)^N(N-1)/2∏_j=1^N j! Returning to eq. 
(<ref>), we multiply g(t,Λ_A) by Δ(Λ_A) and g(t,Λ_B) by Δ(Λ_B) and find: Δ(Λ_A)g(t,Λ_A) = ∫ dΛ_A̅K(t;Λ_A,Λ_A̅)[Δ(Λ_A̅)g(Λ_A̅)] Δ(Λ_B)g(t,Λ_B) = ∫ dΛ_B̅K(t;Λ_B, Λ_B̅)[Δ(Λ_B̅)g(Λ_B̅)] where we define K(t;Λ_M,Λ_M̅) as: K(t;Λ_M,Λ_M̅) = CΔ(Λ_M)Δ(Λ_M̅)∫ dUf(t;Λ_A,UΛ_A̅U^†,Λ_B, UΛ_B̅U^†) We see then that K(t;Λ_M,Λ_M̅) is the evolution kernel for antisymmetric equations that take the form: ξ(Λ_M) = Δ(Λ_M)g(Λ_M) We find that ξ(Λ_M) satisfies the following equation: dξ/dt = 1/21/Δ(Λ_M)∑_j=1^N∂/∂λ_jΔ^2(Λ_M)∂/∂λ_jξ/Δ(Λ_M) = 1/2∑_j(∂/∂λ_j + ∑_k≠ j1/λ_j-λ_k)(∂/∂λ_j + ∑_k≠ j1/λ_j-λ_k)ξ =1/2∑_j∂^2ξ/∂λ^2_j - ∑_k≠ l ≠ j1/λ_j-λ_k1/λ_j-λ_lξ Using the identity <cit.>: 1/(λ_1-λ_2)(λ_1-λ_3) + 1/(λ_2-λ_3)(λ_2-λ_1) + 1/(λ_3-λ_1)(λ_3-λ_2) = 0 we see that the last two terms vanish, leaving us with: dξ/dt = 1/2∑_j∂^2/∂λ^2_jξ We note that in our case, there is no interaction between Δ(Λ_A)g(t,Λ_A) and Δ(Λ_B)g(t,Λ_B). Thus we may simply add them together and define 𝒦(t;Λ_A,Λ_A̅,Λ_B, Λ_B̅) as the evolution kernel for antisymmetric equations that take the form: η(Λ_A, Λ_B) = Δ(Λ_A)g(Λ_A) + Δ(Λ_B)g(Λ_B) where: dη/dt = 1/2(∑_j∂^2/∂ a^2_j + ∑_j∂^2/∂ b^2_j)ξ and: 𝒦(t;Λ_A,Λ_A̅,Λ_B, Λ_B̅) ≡ K(t;Λ_A,Λ_A̅) + K(t;Λ_B, Λ_B̅) =C(Δ(Λ_A)Δ(Λ_A̅) + Δ(Λ_B)Δ(Λ_B̅))∫ dUf(t;Λ_A,UΛ_A̅U^†,Λ_B, UΛ_B̅U^†) Then our kernel 𝒦(t;Λ_A,Λ_A̅,Λ_B, Λ_B̅) is: 𝒦(t;Λ_A,Λ_A̅,Λ_B, Λ_B̅) = 1/(2π t)^N/21/N!∑_𝒫∈ S_N(-1)^𝒫exp[-1/2t∑_j(a_j - a̅_𝒫) -1/2t∑_j(b_j - b̅_𝒫)] = 1/(2π t)^N/21/N![exp[-1/2t(a_j-a̅_k)^2 -1/2t(b_j-b̅_k)^2]] We compare this to eq. (<ref>) and eq. (<ref>) and find: ∫ dUe^[-1/2t(Λ_A - UΛ_A̅U^†)^2 -1/2t(Λ_B - UΛ_B̅U^†)^2] = t^N(N-1)/2∏_j=1^N j![e^[-1/2t(a_j-a̅_k)^2 -1/2t(b_j-b̅_k)^2]]/Δ(Λ_A)Δ(Λ_A̅) + Δ(Λ_B)Δ(Λ_B̅) We see that this is simply another form of (<ref>). Thus we have found an extension of the HCIZ formula to the four matrix problem. We see that we may extend it to 2n matrices by adding pairs of random Hermitian matrices that commute; because these pairs of matrices are independent of one another, it becomes trivial to modify the joint density function, partition function, propagator, heat equation, and evolution kernel. In general we find the formula: I_N,n = ∫_U(N) dU e^[U M_1 U^†M̅_1+U M_2 U^†M̅_2 + ⋯ + U M_n U^†M̅_n] = 𝒞_N (e^λ_1,iλ̅_1,j +λ_2,iλ̅_2,j +⋯ +λ_n,iλ̅_n,j)/Δ(Λ_M_1)Δ(Λ_M̅_1) + Δ(Λ_M_2)Δ(Λ_M̅_2) + ⋯ + Δ(Λ_M_n)Δ(Λ_M̅_n), where keeping with the notation we used previously, we define λ_i,j and λ̅_i,j respectively as the jth eigenvalues of matrices M_i and M̅_i, and Λ_M_i and Λ_M̅_i respectively as the diagonal matrices of the eigenvalues of M_i and M̅_i. We leave the extension of this formula to the symplectic and orthogonal groups for future work. §.§ Discussion of the Character Expansion Proof While the heat kernel method of proving eq. (<ref>) allows us to generalize the HCIZ integral to multi-matrix models, a character expansion proof would allow us to apply this formula to multi-matrix coherent states, such as eq. (<ref>), and interpret the results in terms of giant gravitons, since column representations give the duals of sphere giants <cit.> and row representations give the duals of AdS giants <cit.>. A character expansion proof exists for the HCIZ integral, but extending it to the case of 2n matrices is less straightforward; the sum in the denominator of eq. (<ref>) is a glaring obstacle. In an attempt to gain further familiarity with this integral and shed light on proving eq. (<ref>) through a character expansion, we solve the integral I_3,2 in Appendix <ref>. 
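Independently of the character expansion route, the fixed point formula above can be tested numerically for small N and n. Since the constant 𝒞_N (and the normalization of the Haar measure) is convention dependent, the cleanest test compares ratios of the integral for two different eigenvalue sets, so that all constants cancel. A minimal sketch for N=2 and n=3 (the eigenvalues are illustrative choices of our own) is:

```python
# Monte Carlo ratio test of the conjectured fixed point formula for I_{N,n}.
# Ratios for two eigenvalue sets are compared, so the constant C_N and the
# Haar normalization drop out.  N = 2 and n = 3 below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def mc_integral(pairs, samples=200_000, rng=rng):
    """Haar average of exp(sum_k tr[U M_k U^+ Mbar_k]) for diagonal (M_k, Mbar_k)."""
    acc = 0.0
    for _ in range(samples):
        U = haar_unitary(pairs[0][0].shape[0], rng)
        Ud = U.conj().T
        s = sum(np.trace(U @ M @ Ud @ Mb).real for M, Mb in pairs)
        acc += np.exp(s)
    return acc / samples

def vandermonde(v):
    return np.prod([v[i] - v[j] for i in range(len(v)) for j in range(i + 1, len(v))])

def fixed_point(pairs):
    """det(exp(sum_k lam_{k,i} lamb_{k,j})) / sum_k Delta(lam_k) Delta(lamb_k), up to C_N."""
    lams  = [np.diag(M).real  for M, _  in pairs]
    lambs = [np.diag(Mb).real for _, Mb in pairs]
    expo  = sum(np.outer(l, lb) for l, lb in zip(lams, lambs))
    return np.linalg.det(np.exp(expo)) / sum(vandermonde(l) * vandermonde(lb)
                                             for l, lb in zip(lams, lambs))

def dpair(lam, lamb):
    return (np.diag(lam).astype(complex), np.diag(lamb).astype(complex))

pairs1 = [dpair([0.6, -0.2], [0.3, 0.5]), dpair([-0.4, 0.1], [0.2, -0.3]), dpair([0.5, 0.2], [-0.1, 0.4])]
pairs2 = [dpair([0.1, 0.8], [0.6, -0.2]), dpair([-0.3, 0.4], [0.2, 0.7]), dpair([0.5, -0.1], [-0.4, 0.3])]

print(mc_integral(pairs1) / mc_integral(pairs2),
      fixed_point(pairs1) / fixed_point(pairs2))   # should agree within the Monte Carlo error
```

Checks of this kind are cheap for N = 2, 3 and complement the heat kernel argument above; they say nothing, of course, about the value of 𝒞_N itself.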
We will reproduce the result directly below: I_3,2 = ∑_p,k≥0^∞∑_m,n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m∑_g=0^p+k∑_r+s+t+h=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j 0≤ h≤ k-m8π e^w_1Γ(2p+4k-2m-2g+2, 0, -w_2)/(k+1)(k+2)(p+4k-2m-2n+2)(-w_2)^2p+4k-2m-2g+2 ×(-1)^k-m-l-hw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)/2^2(k-m)+1(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!(k-m-l)!(k-m-h)! +∑_p,k≥0^∞∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 0∑_g=-p-m^∞∑_h-r-s-t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j h≥0i8π e^w_1Γ(2p+2k+2g+3, 0, -w_2)/(2k+3)(2k+5)(p+2k+m+2n+3)(-w_2)^2p+2k+2g+3 ×(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)+1/2^2(k-m)(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!Γ(k-m+3/2)Γ(k-m+3/2) It becomes very tedious to simplify this expression any further; regardless, we note that we have written the result as a sum of products of factorials, which is indicative of a relationship to Young diagrams and perhaps a class of modified Schur polynomials that have yet to be defined. This suggests that it should be possible to prove eq. (<ref>) through a character expansion. A thorough expansion of the determinant in eq. (<ref>) for N=3 and n=2 in the spirit of <cit.> and an explicit computation of the denominator up to the point we leave off in our calculations in Appendix <ref> may also lead to the desired character expansion proof, which may then be extended to the SO(N) and Sp(N) groups, thus giving a description of the correlators in terms of giant gravitons in those Lie groups. We leave this for future work. § CONNECTION WITH RESTRICTED SCHUR POLYNOMIALS A natural question to ask is: What sort of basis of operators do the coherent states (<ref>) actually generate? This is quite non-trivial, since there are in principle many different ways of orthogonalizing the two point function of 1/4-BPS operators at finite N. Recalling the definition of the restricted Schur polynomials χ_R,(r,s) αβ(X,Y)= [P_R,(r,s) αβ X^n ⊗ Y^m], where R is a Young diagram associated to a representation R of S_n+m, r is a Young diagram for the representation r of S_n and s is another Young diagram for a representation s of S_m, the object P_R,(r,s) αβ can be understood as follows. Starting with S_m× S_n⊂ S_m+n, we can find representations r× s sitting within R. Generically, the representation r× s can appear more than once inside of R, so one needs to keep track of how one embeds r× s into R. The matrix indices α, β keep track of this information. More formally, we can label each of the embeddings of r× s by an index γ, and consider the space R_γ⊂ R. The restricted Schur polynomial is then given by χ_R,R_γ(X,Y) = 1/m! n!∑_σ∈ S_n+m_R_γ[Γ_R(σ)] [σ X^n⊗ Y^m], where Γ_R(σ) is the matrix represetating σ <cit.>. The most complicated part of the restricted Schur polynomials is the evaluation of _R_γ[Γ_R(σ)], which involves building R_γ explicitly. By expanding the exponential and evaluating the unitary integrals, we obtain 1/Vol[U(N)]∫ dU exp(U X U^†Λ_X +UY U^†Λ_Y)= ∑_n,m1/m! n!∑_σ, τ∈ S_n+m[σΛ_X^n⊗Λ_Y^m][τ X^n⊗ Y^m] Wg(στ^-1, N) , where Wg(σ, N) is the Weingarten function. Explicit combinatorial formulas for Weingarten functions are well known from the work of Collins (see <cit.> for an elementary introduction), but before delving into specific details, we should contrast this with the situation where one of Λ_X,Y is zero. In that case, the resulting sum can be recast as a diagonal sum of products of unitary characters; right now, we have a complicated sum of traces. For a moment, let us consider the situation for a single matrix. 
The resulting sum is: 1/Vol[U(N)]∫ dU exp(U X U^†Λ_X) = ∑_n=0^∞1/ n!∑_σ, τ∈ S_n[σΛ_X^n][τ^-1 X^n] Wg(στ^-1, N) = ∑_n=0^∞1/ n!∑_σ, τ∈ S_n[σΛ_X^n][τ^-1 X^n] ∑_λ⊢ n1/n! f_λχ^λ(τ^-1σ) χ^λ(1) = ∑_n=0^∞∑_λ⊢ n1/f_λ s_λ(X) s_λ(Λ_X). The last line is obtained from the character expansion of the integral, which was computed in <cit.>. Then for two matrices, we have: 1/Vol[U(N)]∫ dU exp(U X U^†Λ_X +UY U^†Λ_Y) = ∑_n,m1/m! n!(n+m)!∑_λ⊢ n+m1/f^λ∑_σ, τ∈ S_n+mχ^λ(σ) χ^λ(τ)[σΛ_X^n⊗Λ_Y^m][τ X^n⊗ Y^m] . Clearly this has a similar structure to the definition of the restricted Schur polynomials (<ref>), but the restricted characters have been replaced with ordinary symmetric group characters instead. This discrepancy can be traced back to the fact that the sum over S_n+m has many redundancies owing to the fact that we can conjugate by an element of S_n× S_m while leaving the traces fixed. This is the statement that we can permute the n X's and m Y's among themselves while simultaneously permuting the Λ_X,Y's. As explained in <cit.>, there is an equivalence relation between elements of S_n+m in such a way that σ∼τ ⇔[σ A^n ⊗ B^m]=[τ A^n ⊗ B^m], which happens exactly when σ can be conjugated into τ by an element of S_n× S_m. In other words, the construction of restricted Schur polynomials is equivalent to constructing generalized class functions on restricted conjugacy classes. Unfortunately this means that the coherent state generating function (<ref>) cannot differentiate between different restricted Schur polynomials by itself for the simple reason that the Weingarten function is a class function. This means that if we want to replace the characters in (<ref>) with restricted characters, we must change the domain of integration. In any case, the coherent states still form an overcomplete basis of operators that can be used for computations, even if we do not currently know how to project into a particular primary state. One way of achieving this projection would be by integrating against pairs of Schur functions of Λ_X,Y as was done for 1/2-BPS operators in <cit.>; this would give a description of the restricted Schur polynomials in terms of half-BPS partons as advocated by <cit.>, but it is still unclear how one would be able to deal with possible multiplicities of the subduced representations (r,s). § DISCUSSION In this paper, we studied multi-matrix coherent states for bosonic matrices that generate 1/4 and 1/8 BPS states in 𝒩=4 SYM. We showed that the norm of these coherent states admits a fixed point formula generalizing the Harish-Chandra-Itzykson-Zuber formula. This gives in principle a way of generating expressions for BPS states for any value N in 𝒩=4 SYM. One technical obstacle we face is that our construction does not give an alternative construction of the so-called restricted Schur polynomial operators <cit.>. This is related to the expectation that there is a hidden symmetry under which different operators are charged. One idea is that determining the Casimir charges should be enough to differentiate between different operators, but this problem is quite non-trivial even in the 1/2 BPS sector <cit.>. It is also unclear how to implement this idea efficiently at large N since the number of Casimirs needed to distinguish between different operators grows with the complexity of the operators. Despite this obstacle, our results are important for computing correlators of 1/4 and 1/8 BPS operators dual to bound states of giant gravitons <cit.> and generic bubbling geometries <cit.>. 
Understanding the precise map between the overcomplete 'eigenvalue basis' of coherent states and specific orthogonal bases of operators remains an important problem. We conclude with a few more immediate directions for future work. §.§ 1/16 BPS States and Black Hole Microstate Operators One of the more interesting generalizations would be to the case of 1/16 BPS operators. By now, there is ample evidence that there exists a class of 1/16 BPS operators describing the microstates of supersymmetric black holes in AdS_5 × S^5 <cit.>. Recently, there have been some studies of these types of states for small values of N <cit.>; see <cit.> for a more general discussion. It would be nice to develop more systematic techniques to build these types of operators. In principle, there are no obstructions to generalizing our techniques to this setup, with the working assumption that finding states with vanishing one-loop anomalous dimension is enough <cit.>. The idea would essentially be to build a superfield coherent state <cit.>: ∫ dU exp{∫ d^3 θ∫ dz [UΨ U^†Φ] }|0⟩, where Ψ(z, θ) is the ℂ^2|3 superfield discussed in <cit.>, and Φ is an auxiliary superfield of coherent state parameters. The combined effect of the exponentiation and integration over the unitary matrices is to generate all possible gauge invariant tensor contractions. One should expect that the operators generated by this generating function are generalizations of the SU(2|3) restricted Schur polynomials constructed in <cit.>. Generically the terms in the expansion of (<ref>) will not be of multi-graviton form, so they are natural candidates for microstates of supersymmetric black holes. In practice, the main disadvantage of an expression like (<ref>) is that it might not be practically useful, in the sense that the expansion necesarily involves an infinite number of matrix fields associated to covariant derivatives acting on the fields. One way of avoiding this difficulty is to use generating functions such as the ones studied in <cit.>. Alternatively, one can view the auxiliary superfield Φ as a full-fledged dynamical collective coordinate. One would then hope that integrating out the SYM fields leads to an effective matrix quantum mechanics describing (near)-BPS black hole microstates, with the lightcone coordinate z acting as a time variable. §.§ Three Point Correlators, Bubbling Geometries, and Twisted Holography Although eventually we would like to study black holes, it is important to build intuition from simpler examples. One class of such examples is the BPS bubbling geometries <cit.> generalizing LLM geometries <cit.>. Although the droplet description of such states in supergravity is compelling, a precise mapping between the weak coupling BPS states is not fully developed[For instance, it is unclear whether the solutions found in <cit.> exhaust the set of all 1/4 and 1/8 BPS states.]. The coherent states (<ref>) have a more natural connection to such geometries<cit.>. A worthwhile exercise would be to study correlators of single trace chiral primaries in the background of heavy coherent states corresponding to both giant gravitons or bubbling geometries; see <cit.> for some finite N results. The holographic renomalization techniques of <cit.> are also applicable in these cases, but it would be interesting to develop more efficient computational techniques in supergravity along the lines of <cit.>. A good toy model for this would be to study these types of questions in Twisted Holography <cit.>. 
§.§ Special Functions We obtained a formula for I_3,2 as a sum of products of factorials in eq. (<ref>), which we know can be simplified into: I_3,2 = 𝒞_3 (e^a_ia̅_j + b_ib̅_j)/Δ(Λ_A)Δ(Λ_A̅) + Δ(Λ_B)Δ(Λ_B̅) The current form of eq. (<ref>) is difficult to untangle, but we know that it must simplify to a determinant of a matrix exponential over the sum of products of Vandermonde determinants. Since it features an unwieldy number of factorials in the denominator, as well as an incomplete gamma function and falling factorials in the numerator, it's likely that an expansion of eq. (<ref>) to match the results obtained in Appendix <ref> will offer some additional insights on identities of special functions, which may give rise to new ideas and applications in QFTs, as demonstrated in <cit.>. We would like to thank D. Berenstein for helpful discussions. SW's research was supported in part by the Department of Energy under grant DE-SC0019139. § THE FOUR-MATRIX MODEL IN SU(3) We now consider the following integral: I = ∫ dU(3)exp((A̅UA U^†) + (B̅UB U^†)), for A = [ a_1; a_2; a_3 ], A̅ = [ a̅_1 a̅_2 a̅_3 ], B = [ b_1; b_2; b_3 ], and B̅ = [ b̅_1 b̅_2 b̅_3 ]. We know that we can parameterize our U(3) matrix as: U = e^iλ_3αe^iλ_2βe^iλ_3σe^iλ_5θe^iλ_3 ae^iλ_2 be^iλ_3 ce^iλ_8ϕ, where λ_i denotes the ith generators of U(3). We list the relevant SU(3) generators below <cit.>: λ_2 = [ 0 -i 0; i 0 0; 0 0 0 ], λ_3 = [ 1 0 0; 0 -1 0; 0 0 0 ], λ_5 = [ 0 0 -i; 0 0 0; i 0 0 ], λ_8 = 1/√(3)[ 1 0 0; 0 1 0; 0 0 -2 ] Because U(3) = SU(3)× U(1), we multiply U by an additional phase e^iψ. This yields the parameterization of U(3) that we will use to compute the argument of the exponential in Eq. (<ref>). Simplifying and expanding, we arrive at: (A̅UAU^†+B̅UBU^†) = (a_3a̅_̅3̅ + b_3b̅_̅3̅) cos^2θ + (a_2a̅_̅2̅ + b_2b̅_̅2̅)cos^2βcos^2 b + (a_1a̅_̅1̅ + b_1b̅_̅1̅)cos^2βcos^2θcos^2 b + (a̅_̅1̅a_2 + b̅_̅1̅b_2)cos^2 bsin^2β + (a_1a̅_̅2̅ + b_1b̅_̅2̅)cos^2θcos^2 bsin^2β + (a̅_̅1̅a_3 + b̅_̅1̅b_3)cos^2βsin^2θ + (a_1a̅_̅3̅ + b_1b̅_̅3̅)cos^2 bsin^2θ + (a̅_̅2̅a_3 + b̅_̅2̅b_3)sin^2βsin^2θ + (a_1a̅_̅2̅ + b_1b̅_̅2̅)cos^2βsin^2 b + (a̅_̅1̅a_2 + b̅_̅1̅b_2)cos^2βcos^2θsin^2 b + (a_1a̅_̅1̅ + b_1b̅_̅1̅)sin^2βsin^2 b + (a_2a̅_̅2̅ + b_2b̅_̅2̅)cos^2θsin^2βsin^2 b + (a_2a̅_̅3̅ + b_2b̅_̅3̅)sin^2θsin^2 b -1/4(a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅) × e^-2i(σ + a)cosθsin2βsin2b +1/4(a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅) × e^2i(σ + a)cosθsin2βsin2b We plug this into the integral. We may use the method outlined in <cit.> to compute the Haar measure; we arrive at: dU = sin2βsin2θsin2bsin^2θ dα dβ dσ dθ d a d b d c dϕ dψ We cite the angle limits from <cit.>: 0 ≤α, σ, a, c, ψ < π 0 ≤β, b, θ < π/2 0 ≤ϕ < 2π We seek to integrate over σ and a first. We observe that: e^2i(σ + a) - e^-2i(σ + a) = 2isin(σ +a) Returning to our integral, we note that if we choose to integrate over σ, then the relevant integral is: I_σ = ∫_0^πexp(i/2(a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅)sin2βsin2bcosθsin(σ+a))dσ = π J_0(1/2(a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅)sin2βsin2bcosθ) + iπH_0(1/2(a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅)sin2βsin2bcosθ), where H_0 is the Struve function of order zero and J_0 is the Bessel function of the first kind of order zero. 
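This step rests on the standard integral representations ∫_0^π cos(c sinσ) dσ = π J_0(c) and ∫_0^π sin(c sinσ) dσ = π H_0(c); a quick numerical check of these representations (the a = 0 case of I_σ, with an arbitrary test value of the argument) is the following.

```python
# Check of the integral representations behind the sigma-integral:
#   int_0^pi cos(c sin s) ds = pi J_0(c),   int_0^pi sin(c sin s) ds = pi H_0(c).
# c is an arbitrary test value.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, struve

c = 1.7
re_part, _ = quad(lambda s: np.cos(c * np.sin(s)), 0.0, np.pi)
im_part, _ = quad(lambda s: np.sin(c * np.sin(s)), 0.0, np.pi)

print(re_part, np.pi * jv(0, c))       # real part      -> pi * J_0(c)
print(im_part, np.pi * struve(0, c))   # imaginary part -> pi * H_0(c)
```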
We can see that the integral over a then becomes trivial. We now seek to integrate over θ. First, we collect the relevant terms and group the coefficients for simplicity's sake. We have: v_1 = sin2βsin2b v_2 = (a_2a̅_̅2̅ + b_2b̅_̅2̅)cos^2βcos^2 b + (a̅_̅1̅a_2 + b̅_̅1̅b_2)cos^2 bsin^2β + (a_1a̅_̅2̅ + b_1b̅_̅2̅)cos^2βsin^2 b + (a_1a̅_̅1̅ + b_1b̅_̅1̅)sin^2βsin^2 b v_3 = (a_3a̅_̅3̅ + b_3b̅_̅3̅) + (a_1a̅_̅1̅ + b_1b̅_̅1̅)cos^2βcos^2 b + (a_1a̅_̅2̅ + b_1b̅_̅2̅)cos^2 bsin^2β + (a̅_̅1̅a_2 + b̅_̅1̅b_2)cos^2βsin^2 b + (a_2a̅_̅2̅ + b_2b̅_̅2̅)sin^2βsin^2 b v_4 = (a̅_̅1̅a_3 + b̅_̅1̅b_3)cos^2β + (a_1a̅_̅3̅ + b_1b̅_̅3̅)cos^2 b + (a̅_̅2̅a_3 + b̅_̅2̅b_3)sin^2β + (a_2a̅_̅3̅ + b_2b̅_̅3̅)sin^2 b v_5 = 1/2(a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅)sin2βsin2b Our integral over θ thus becomes: I_θ = π∫_0^π/2v_1e^v_2+v_3cos^2θ+v_4sin^2θ(J_0(v_5cosθ)+iH_0(v_5cosθ))sin2θsin^2θ dθ, where I_θ is the integral over θ in I from eq. (<ref>) and v_i denote the grouped coefficients. We now observe that we may rewrite sin2θ as 2sinθcosθ. Then our integral becomes: I_θ = -2π∫_0^π/2v_1e^v_2+v_4+(v_3-v_4)cos^2θ(J_0(v_5cosθ)+iH_0(v_5cosθ))cosθ(1-cos^2θ)dcosθ, Setting x=cosθ, our integral becomes: I_θ = 2π∫_0^1v_1e^v_2+v_4+(v_3-v_4)x^2(J_0(v_5x)+iH_0(v_5x))x(1-x^2)dx = 2π v_1e^v_2+v_4∫_0^1e^(v_3-v_4)x^2(J_0(v_5x)+iH_0(v_5x))x(1-x^2)dx We find the Taylor expansion of e^(v_3-v_4)x^2: e^(v_3-v_4)x^2 = ∑_k=0^∞(v_3-v_4)^kx^2k/k! Now, we expand the Bessel and Struve functions in series. We find that: J_0(v_5x)+iH_0(v_5x) = ∑_m=0^∞(-1)^m/m!Γ(m+1)(v_5x/2)^2m + i∑_m=0^∞(-1)^m/Γ(m+3/2)Γ(m+3/2)(v_5x/2)^2m+1 Putting everything together, we arrive at: I_θ = 2π v_1e^v_2+v_4∫_0^1(∑_k=0^∞(v_3-v_4)^kx^2k/k!)(∑_k=0^∞(-1)^k/k!Γ(k+1)(v_5x/2)^2k)x(1-x^2)dx + i2π v_1e^v_2+v_4∫_0^1(∑_k=0^∞(v_3-v_4)^kx^2k/k!)(∑_k=0^∞(-1)^k/Γ(k+3/2)Γ(k+3/2)(v_5x/2)^2k+1)x(1-x^2)dx = 2π v_1e^v_2+v_4∫_0^1∑_k=0^∞∑_m=0^k((v_3-v_4)^m/m!)((-1)^k-mv_5^2(k-m)/2^2(k-m)(k-m)!Γ(k-m+1))x^2k+1(1-x^2)dx + i2π v_1e^v_2+v_4∫_0^1∑_k=0^∞∑_m=0^k((v_3-v_4)^m/m!)((-1)^k-mv_5^2(k-m)+1/2^2(k-m)+1Γ(k-m+3/2)Γ(k-m+3/2))x^2k+2(1-x^2)dx = 2π v_1e^v_2+v_4∑_k=0^∞∑_m=0^k1/2(k+1)(k+2)((v_3-v_4)^m/m!)((-1)^k-mv_5^2(k-m)/2^2(k-m)(k-m)!Γ(k-m+1)) + i2π v_1e^v_2+v_4∑_k=0^∞∑_m=0^k2/(2k+3)(2k+5)((v_3-v_4)^m/m!)((-1)^k-mv_5^2(k-m)+1/2^2(k-m)+1Γ(k-m+3/2)Γ(k-m+3/2)) We now seek to integrate over β. Before we start, we first examine the combinations v_2+v_4 and v_3-v_4: v_2 + v_4 = (a_2a̅_̅2̅ + b_2b̅_̅2̅)cos^2βcos^2 b + (a̅_̅1̅a_2 + b̅_̅1̅b_2)cos^2 bsin^2β + (a_1a̅_̅2̅ + b_1b̅_̅2̅)cos^2βsin^2 b + (a_1a̅_̅1̅ + b_1b̅_̅1̅)sin^2βsin^2 b + (a̅_̅1̅a_3 + b̅_̅1̅b_3)cos^2β + (a_1a̅_̅3̅ + b_1b̅_̅3̅)cos^2 b + (a̅_̅2̅a_3 + b̅_̅2̅b_3)sin^2β + (a_2a̅_̅3̅ + b_2b̅_̅3̅)sin^2 b = ((a_2a̅_̅2̅ + b_2b̅_̅2̅)cos^2 b + (a_1a̅_̅2̅ + b_1b̅_̅2̅)sin^2 b + a̅_̅1̅a_3 + b̅_̅1̅b_3)cos^2β + ((a̅_̅1̅a_2 + b̅_̅1̅b_2)cos^2 b + (a_1a̅_̅1̅ + b_1b̅_̅1̅)sin^2 b + a̅_̅2̅a_3 + b̅_̅2̅b_3)sin^2β + (a_1a̅_̅3̅ + b_1b̅_̅3̅)cos^2 b + (a_2a̅_̅3̅ + b_2b̅_̅3̅)sin^2 b We note that v_1 = sin2βsin2b, which means we can repeat the process of rewriting sin2β as 2sinβcosβ, but absorbing cosβ behind the derivative instead. 
Then we rewrite the expression above as: v_2 + v_4 = ((a_2a̅_̅2̅ + b_2b̅_̅2̅)cos^2 b + (a_1a̅_̅2̅ + b_1b̅_̅2̅)sin^2 b + a̅_̅1̅a_3 + b̅_̅1̅b_3)(1-sin^2β) + ((a̅_̅1̅a_2 + b̅_̅1̅b_2)cos^2 b + (a_1a̅_̅1̅ + b_1b̅_̅1̅)sin^2 b + a̅_̅2̅a_3 + b̅_̅2̅b_3)sin^2β + (a_1a̅_̅3̅ + b_1b̅_̅3̅)cos^2 b + (a_2a̅_̅3̅ + b_2b̅_̅3̅)sin^2 b = a̅_̅1̅a_3 + b̅_̅1̅b_3 + (a_1a̅_̅3̅ + b_1b̅_̅3̅ + a_2a̅_̅2̅ + b_2b̅_̅2̅)cos^2 b + (a_2a̅_̅3̅ + b_2b̅_̅3̅ + a_1a̅_̅2̅ + b_1b̅_̅2̅)sin^2 b + (a̅_̅2̅a_3 + b̅_̅2̅b_3 + (a̅_̅1̅a_2 + b̅_̅1̅b_2 - a_2a̅_̅2̅ - b_2b̅_̅2̅)cos^2 b + (a_1a̅_̅1̅ + b_1b̅_̅1̅ - a_1a̅_̅2̅ - b_1b̅_̅2̅)sin^2 b)sin^2β We then turn to v_3-v_4: v_3 - v_4 = (a_3a̅_̅3̅ + b_3b̅_̅3̅) + (a_1a̅_̅1̅ + b_1b̅_̅1̅)cos^2βcos^2 b + (a_1a̅_̅2̅ + b_1b̅_̅2̅)cos^2 bsin^2β + (a̅_̅1̅a_2 + b̅_̅1̅b_2)cos^2βsin^2 b + (a_2a̅_̅2̅ + b_2b̅_̅2̅)sin^2βsin^2 b -(a̅_̅1̅a_3 + b̅_̅1̅b_3)cos^2β - (a_1a̅_̅3̅ + b_1b̅_̅3̅)cos^2 b - (a̅_̅2̅a_3 + b̅_̅2̅b_3)sin^2β - (a_2a̅_̅3̅ + b_2b̅_̅3̅)sin^2 b = (a_3a̅_̅3̅ + b_3b̅_̅3̅) - (a_1a̅_̅3̅ + b_1b̅_̅3̅)cos^2 b - (a_2a̅_̅3̅ + b_2b̅_̅3̅)sin^2 b +((a_1a̅_̅1̅ + b_1b̅_̅1̅)cos^2 b + (a̅_̅1̅a_2 + b̅_̅1̅b_2)sin^2 b -(a̅_̅1̅a_3 + b̅_̅1̅b_3))(1-sin^2β) + ((a_1a̅_̅2̅ + b_1b̅_̅2̅)cos^2 b + (a_2a̅_̅2̅ + b_2b̅_̅2̅)sin^2 b - (a̅_̅2̅a_3 + b̅_̅2̅b_3))sin^2β = a_3a̅_̅3̅ + b_3b̅_̅3̅ - a̅_̅1̅a_3 - b̅_̅1̅b_3 + (a_1a̅_̅1̅ + b_1b̅_̅1̅ - a_1a̅_̅3̅ - b_1b̅_̅3̅)cos^2 b + (a̅_̅1̅a_2 + b̅_̅1̅b_2 - a_2a̅_̅3̅ - b_2b̅_̅3̅)sin^2 b + ((a_1a̅_̅2̅ + b_1b̅_̅2̅ - a_1a̅_̅1̅ - b_1b̅_̅1̅)cos^2 b + (a_2a̅_̅2̅ + b_2b̅_̅2̅ - a̅_̅1̅a_2 - b̅_̅1̅b_2)sin^2 b)sin^2β + (a̅_̅1̅a_3 + b̅_̅1̅b_3 - a̅_̅2̅a_3 - b̅_̅2̅b_3)sin^2β We once again regroup and relabel our coefficients for ease of computation: u_1 = a̅_̅1̅a_3 + b̅_̅1̅b_3 + (a_1a̅_̅3̅ + b_1b̅_̅3̅ + a_2a̅_̅2̅ + b_2b̅_̅2̅)cos^2 b + (a_2a̅_̅3̅ + b_2b̅_̅3̅ + a_1a̅_̅2̅ + b_1b̅_̅2̅)sin^2 b u_2 = a̅_̅2̅a_3 + b̅_̅2̅b_3 + (a̅_̅1̅a_2 + b̅_̅1̅b_2 - a_2a̅_̅2̅ - b_2b̅_̅2̅)cos^2 b + (a_1a̅_̅1̅ + b_1b̅_̅1̅ - a_1a̅_̅2̅ - b_1b̅_̅2̅)sin^2 b u_3 = a_3a̅_̅3̅ + b_3b̅_̅3̅ - a̅_̅1̅a_3 - b̅_̅1̅b_3 + (a_1a̅_̅1̅ + b_1b̅_̅1̅ - a_1a̅_̅3̅ - b_1b̅_̅3̅)cos^2 b + (a̅_̅1̅a_2 + b̅_̅1̅b_2 - a_2a̅_̅3̅ - b_2b̅_̅3̅)sin^2 b u_4 = a̅_̅1̅a_3 + b̅_̅1̅b_3 - a̅_̅2̅a_3 - b̅_̅2̅b_3 + (a_1a̅_̅2̅ + b_1b̅_̅2̅ - a_1a̅_̅1̅ - b_1b̅_̅1̅)cos^2 b + (a_2a̅_̅2̅ + b_2b̅_̅2̅ - a̅_̅1̅a_2 - b̅_̅1̅b_2)sin^2 b u_5 = (a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅)sin2b We set y = sinβ. Then our integral over β becomes: I_β = 4πsin2b ∫_0^1 e^u_1+u_2y∑_k=0^∞∑_m=0^k1/2(k+1)(k+2)((u_3+u_4y^2)^m/m!)((-1)^k-m(u_5y√(1-y^2))^2(k-m)/2^2(k-m)(k-m)!Γ(k-m+1))ydy + i4πsin2b ∫_0^1 e^u_1+u_2y∑_k=0^∞∑_m=0^k2/(2k+3)(2k+5)((u_3+u_4y^2)^m/m!) ×((-1)^k-m(u_5y√(1-y^2))^2(k-m)+1/2^2(k-m)+1Γ(k-m+3/2)Γ(k-m+3/2))ydy We now examine (u_3+u_4y^2)^m. We know that we can use the binomial expansion to express it as: (u_3+u_4y^2)^m = ∑_j=0^mmju_3^j(u_4y^2)^m-j Then we have: (u_3+u_4y^2)^m/m! = ∑_j=0^m1/(m-j)!j!u_3^j(u_4y^2)^m-j We then examine (y√(1-y^2))^2(k-m) and (y√(1-y^2))^2(k-m)+1. First, we note that we can rewrite these expressions as (y^2-y^4)^k-m and (y^2-y^4)^2(k-m)+1/2. Then, using the binomial series, we find: (y^2-y^4)^k-m = ∑_l=0^k-mk-ml(-1)^k-m-ly^2ly^4(k-m-l) = ∑_l=0^k-mk-ml(-1)^k-m-ly^4k-4m-2l and (y^2-y^4)^2(k-m)+1/2 = y^2(k-m)+1∑_l=0^∞2(k-m)+1/2l(-1)^ly^2l = ∑_l=0^∞2(k-m)+1/2l(-1)^ly^2(k-m+l)+1, where k-m+1/2l denotes the generalized binomial coefficients: k-m+1/2l = 2(k-m)+1/2(2(k-m)+1/2-1)⋯(2(k-m)+1/2-l+1)/l! 
Then we have: (-1)^k-m(u_5y√(1-y^2))^2(k-m)/2^2(k-m)(k-m)!Γ(k-m+1) = (-1)^k-mu_5^2(k-m)∑_l=0^k-mk-ml(-1)^k-m-ly^4k-4m-2l/2^2(k-m)(k-m)!Γ(k-m+1) = ∑_l=0^k-m(-1)^2(k-m)-lu_5^2(k-m)y^4k-4m-2l/2^2(k-m)l!(k-m-l)!Γ(k-m+1) and (-1)^k-m(u_5y√(1-y^2))^2(k-m)+1/2^2(k-m)+1Γ(k-m+3/2)Γ(k-m+3/2) = (-1)^k-mu_5^2(k-m)+1∑_l=0^∞2(k-m)+1/2l(-1)^ly^2(k-m+l)+1/2^2(k-m)+1Γ(k-m+3/2)Γ(k-m+3/2) = ∑_l=0^∞(-1)^k-m+l(k-m+1/2)^lu_5^2(k-m)+1y^2(k-m+l)+1/2^2(k-m)+1l!Γ(k-m+3/2)Γ(k-m+3/2) We now expand: ∑_k=0^∞∑_m=0^k1/2(k+1)(k+2)(∑_j=0^mu_3^ju_4^m-j/(m-j)!j!y^2(m-j))(∑_l=0^k-m(-1)^2(k-m)-lu_5^2(k-m)y^4k-4m-2l/2^2(k-m)l!(k-m-l)!Γ(k-m+1)) =∑_k=0^∞∑_m=0^ky^4k-2m/2(k+1)(k+2)(∑_j=0^mu_3^ju_4^m-j/(m-j)!j!y^-2j)(∑_l=0^k-m(-1)^2(k-m)-lu_5^2(k-m)y^-2l/2^2(k-m)l!(k-m-l)!Γ(k-m+1)) =∑_k=0^∞∑_m=0^k∑_n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m1/2(k+1)(k+2)u_3^ju_4^m-j/(m-j)!j!(-1)^2(k-m)-lu_5^2(k-m)/2^2(k-m)(l)!(k-m-l)!Γ(k-m+1)y^4k-2m-2n Then we expand: ∑_k=0^∞∑_m=0^k2y^2k-m+1/(2k+3)(2k+5)(∑_j=0^mu_3^ju_4^m-j/(m-j)!j!y^2(m-j))(∑_l=0^∞(-1)^k-m+l(k-m+1/2)^lu_5^2(k-m)+1y^2l/2^2(k-m)+1l!Γ(k-m+3/2)Γ(k-m+3/2)) = ∑_k=0^∞∑_m=0^k∑_n=-m^∞(∑_l-j = n 0≤ j ≤ m l≥ 02y^2k+m+2n+1/(2k+3)(2k+5)u_3^ju_4^m-j/(m-j)!j!(-1)^k-m+l(k-m+1/2)^lu_5^2(k-m)+1/2^2(k-m)+1l!Γ(k-m+3/2)Γ(k-m+3/2)) Finally, we note that we can expand e^u_2y using the Taylor series: e^u_2y = ∑_p=0^∞u_2^py^p/p! Then the first half of I_β evaluates to: 4πsin2b e^u_1∫_0^1 ∑_p=0^∞u_2^py^p/p! ∑_k=0^∞∑_m=0^k∑_n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m1/2(k+1)(k+2)u_3^ju_4^m-j/(m-j)!j!(-1)^2(k-m)-lu_5^2(k-m)/2^2(k-m)(l)!(k-m-l)!Γ(k-m+1)y^4k-2m-2n+1dy =4πsin2b e^u_1∑_q=0^∞∑_p+4k=q p,k≥0∑_m=0^k∑_n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-mu_2^p/(k+1)(k+2)p!(q-2m-2n+2)u_3^ju_4^m-j/(m-j)!j! ×(-1)^2(k-m)-lu_5^2(k-m)/2^2(k-m)+1(l)!(k-m-l)!Γ(k-m+1) The second half evaluates to: i4πsin2b e^u_1∫_0^1 ∑_p=0^∞u_2^py^p/p! ×∑_k=0^∞∑_m=0^k∑_n=-m^∞(∑_l-j = n 0≤ j ≤ m l≥ 02y^2k+m+2n+2/(2k+3)(2k+5)u_3^ju_4^m-j/(m-j)!j!(-1)^k-m+l(k-m+1/2)^lu_5^2(k-m)+1/2^2(k-m)+1l!Γ(k-m+3/2)Γ(k-m+3/2))dy =i4πsin2b e^u_1∑_q = 0^∞∑_p+2k=q p,k≥0∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 0u_2^p/p!(2k+3)(2k+5)(q+m+2n+3)u_3^ju_4^m-j/(m-j)!j! ×(-1)^k-m+l(k-m+1/2)^lu_5^2(k-m)+1/2^2(k-m)l!Γ(k-m+3/2)Γ(k-m+3/2) Finally, we integrate over b. As before, we take sin2b and rewrite it as 2sin bcos b. Then we absorb cos b behind the derivative and set z = sin b. We now integrate over z from 0 to 1. 
We rewrite u_i to reflect this change: u_1 = a̅_̅1̅a_3 + b̅_̅1̅b_3 + a_1a̅_̅3̅ + b_1b̅_̅3̅ + a_2a̅_̅2̅ + b_2b̅_̅2̅ + (a_2a̅_̅3̅ + b_2b̅_̅3̅ + a_1a̅_̅2̅ + b_1b̅_̅2̅ -a_1a̅_̅3̅ - b_1b̅_̅3̅ - a_2a̅_̅2̅ - b_2b̅_̅2̅)sin^2 b u_2 = a̅_̅2̅a_3 + b̅_̅2̅b_3 + a̅_̅1̅a_2 + b̅_̅1̅b_2 - a_2a̅_̅2̅ - b_2b̅_̅2̅ + (a_1a̅_̅1̅ + b_1b̅_̅1̅ - a_1a̅_̅2̅ - b_1b̅_̅2̅ - a̅_̅1̅a_2 - b̅_̅1̅b_2 + a_2a̅_̅2̅ + b_2b̅_̅2̅)sin^2 b u_3 = a_3a̅_̅3̅ + b_3b̅_̅3̅ - a̅_̅1̅a_3 - b̅_̅1̅b_3 + a_1a̅_̅1̅ + b_1b̅_̅1̅ - a_1a̅_̅3̅ - b_1b̅_̅3̅ + (a̅_̅1̅a_2 + b̅_̅1̅b_2 - a_2a̅_̅3̅ - b_2b̅_̅3̅ - a_1a̅_̅1̅ - b_1b̅_̅1̅ + a_1a̅_̅3̅ + b_1b̅_̅3̅)sin^2 b u_4 = a̅_̅1̅a_3 + b̅_̅1̅b_3 - a̅_̅2̅a_3 - b̅_̅2̅b_3 + a_1a̅_̅2̅ + b_1b̅_̅2̅ - a_1a̅_̅1̅ - b_1b̅_̅1̅ + (a_2a̅_̅2̅ + b_2b̅_̅2̅ - a̅_̅1̅a_2 - b̅_̅1̅b_2 - a_1a̅_̅2̅ - b_1b̅_̅2̅ + a_1a̅_̅1̅ + b_1b̅_̅1̅)sin^2 b u_5 = (a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅)sin2b We set: w_1 = a̅_̅1̅a_3 + b̅_̅1̅b_3 + a_1a̅_̅3̅ + b_1b̅_̅3̅ + a_2a̅_̅2̅ + b_2b̅_̅2̅ w_2 = a_2a̅_̅3̅ + b_2b̅_̅3̅ + a_1a̅_̅2̅ + b_1b̅_̅2̅ -a_1a̅_̅3̅ - b_1b̅_̅3̅ - a_2a̅_̅2̅ - b_2b̅_̅2̅ w_3 = a̅_̅2̅a_3 + b̅_̅2̅b_3 + a̅_̅1̅a_2 + b̅_̅1̅b_2 - a_2a̅_̅2̅ - b_2b̅_̅2̅ w_4 = a_1a̅_̅1̅ + b_1b̅_̅1̅ - a_1a̅_̅2̅ - b_1b̅_̅2̅ - a̅_̅1̅a_2 - b̅_̅1̅b_2 + a_2a̅_̅2̅ + b_2b̅_̅2̅ w_5 = a_3a̅_̅3̅ + b_3b̅_̅3̅ - a̅_̅1̅a_3 - b̅_̅1̅b_3 + a_1a̅_̅1̅ + b_1b̅_̅1̅ - a_1a̅_̅3̅ - b_1b̅_̅3̅ w_6 = a̅_̅1̅a_2 + b̅_̅1̅b_2 - a_2a̅_̅3̅ - b_2b̅_̅3̅ - a_1a̅_̅1̅ - b_1b̅_̅1̅ + a_1a̅_̅3̅ + b_1b̅_̅3̅ w_7 = a̅_̅1̅a_3 + b̅_̅1̅b_3 - a̅_̅2̅a_3 - b̅_̅2̅b_3 + a_1a̅_̅2̅ + b_1b̅_̅2̅ - a_1a̅_̅1̅ - b_1b̅_̅1̅ w_8 = a_2a̅_̅2̅ + b_2b̅_̅2̅ - a̅_̅1̅a_2 - b̅_̅1̅b_2 - a_1a̅_̅2̅ - b_1b̅_̅2̅ + a_1a̅_̅1̅ + b_1b̅_̅1̅ w_9 = 2(a_1a̅_̅1̅ + a̅_̅1̅a_2 + a_1a̅_̅2̅ - a_2a̅_̅2̅ - b_1b̅_̅1̅ + b̅_̅1̅b_2 + b_1b̅_̅2̅ - b_2b̅_̅2̅) Then our integral becomes: I_b = 8π e^w_1∫_0^1 e^w_2z^2∑_q=0^∞∑_p+4k=q p,k≥0∑_m=0^k∑_n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m(w_3+w_4z^2)^p(w_5+w_6z^2)^j(w_7+w_8z^2)^m-j/(k+1)(k+2)(q-2m-2n+2)p!j!(m-j)! ×(-1)^2(k-m)-l(w_9z√(1-z^2))^2(k-m)/2^2(k-m)+1l!(k-m-l)!Γ(k-m+1)zdz +i8π e^w_1∫_0^1 e^w_2z^2∑_q = 0^∞∑_p+2k=q p,k≥0∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 0(w_3+w_4z^2)^p(w_5+w_6z^2)^j(w_7+w_8z^2)^m-j/(2k+3)(2k+5)(q+m+2n+3)p!j!(m-j)! ×(-1)^k-m+l(k-m+1/2)^l(w_9z√(1-z^2))^2(k-m)+1/2^2(k-m)l!Γ(k-m+3/2)Γ(k-m+3/2)zdz As before, we note that: (w_3+w_4z^2)^p/p! = ∑_r=0^p1/(p-r)!r!w_3^r(w_4z^2)^p-r and (w_5+w_6z^2)^j/j! = ∑_s=0^j1/(j-s)!s!w_5^s(w_6z^2)^j-s and (w_7+w_8z^2)^m-j/(m-j)! = ∑_t=0^m-j1/(m-j-t)!t!w_7^t(w_8z^2)^m-j-t We find that: (w_3+w_4z^2)^p(w_5+w_6z^2)^j(w_7+w_8z^2)^m-j/p!j!(m-j)! = (∑_r=0^pw_3^rw_4^p-r/(p-r)!r!z^2p-2r)(∑_s=0^jw_5^sw_6^j-s/(j-s)!s!z^2j-2s) ×(∑_t=0^m-jw_7^tw_8^m-j-t/(m-j-t)!t!z^2m-2j-2t) = ∑_g=0^p+m∑_r+s+t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-jw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-t/(p-r)!r!(j-s)!s!(m-j-t)!t!z^2(p+m-g) We also know that (z√(1-z^2))^2(k-m) and (z√(1-z^2))^2(k-m)+1 can be written as: (z√(1-z^2))^2(k-m) = ∑_h=0^k-mk-mh(-1)^k-m-hz^4k-4m-2h and (z√(1-z^2))^2(k-m)+1 = ∑_h=0^∞2(k-m)+1/2h(-1)^hz^2(k-m+h)+1, where k-m+1/2l denotes the generalized binomial coefficients. Then we have: (-1)^2(k-m)-l(w_9z√(1-z^2))^2(k-m)/2^2(k-m)+1l!(k-m-l)!Γ(k-m+1) = (-1)^2(k-m)-lw_9^2(k-m)∑_h=0^k-mk-mh(-1)^k-m-hz^4k-4m-2h/2^2(k-m)+1l!(k-m-l)!Γ(k-m+1) = ∑_h=0^k-m(-1)^k-m-l-hw_9^2(k-m)z^4k-4m-2h/2^2(k-m)+1l!h!(k-m-l)!(k-m-h)! 
and: (-1)^k-m+l(k-m+1/2)^l(w_9z√(1-z^2))^2(k-m)+1/2^2(k-m)l!Γ(k-m+3/2)Γ(k-m+3/2) = (-1)^k-m+l(k-m+1/2)^lw_9^2(k-m)+1∑_h=0^∞2(k-m)+1/2h(-1)^hz^2(k-m+h)+1/2^2(k-m)l!Γ(k-m+3/2)Γ(k-m+3/2) = ∑_h=0^∞(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_9^2(k-m)+1z^2(k-m+h)+1/2^2(k-m)l!h!Γ(k-m+3/2)Γ(k-m+3/2) Then we compute: ∑_q=0^∞∑_p+4k=q p,k≥0∑_m=0^k∑_n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m1/(k+1)(k+2)(q-2m-2n+2) ×(∑_g=0^p+m∑_r+s+t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-jw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-t/(p-r)!r!(j-s)!s!(m-j-t)!t!z^2(p+m-g)) ×(∑_h=0^k-m(-1)^k-m-l-hw_9^2(k-m)z^4k-4m-2h/2^2(k-m)+1l!h!(k-m-l)!(k-m-h)!) and arrive at: ∑_q=0^∞∑_p+4k=q p,k≥0∑_m=0^k∑_n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m1/(k+1)(k+2)(q-2m-2n+2) ×∑_g=0^p+k∑_r+s+t+h=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j 0≤ h≤ k-m(-1)^k-m-l-hw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)/2^2(k-m)+1(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!(k-m-l)!(k-m-h)!z^2(2k+p-m-g) We also compute: ∑_q = 0^∞∑_p+2k=q p,k≥0∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 01/(2k+3)(2k+5)(q+m+2n+3) ×(∑_g=0^p+m∑_r+s+t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-jw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-t/(p-r)!r!(j-s)!s!(m-j-t)!t!z^2(p+m-g)) ×(∑_h=0^∞(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_9^2(k-m)+1z^2(k-m+h)+1/2^2(k-m)l!h!Γ(k-m+3/2)Γ(k-m+3/2)) and arrive at: ∑_q = 0^∞∑_p+2k=q p,k≥0∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 01/(2k+3)(2k+5)(q+m+2n+3) ×∑_g=-p-m^∞∑_h-r-s-t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j h≥0w_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-t/(p-r)!r!(j-s)!s!(m-j-t)!t! ×(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_9^2(k-m)+1z^2(p+k+g)+1/2^2(k-m)l!h!Γ(k-m+3/2)Γ(k-m+3/2) Once again, we expand e^w_2z using the Taylor series: e^w_2z = ∑_f=0^∞w_2^fz^f/f! Then the first half of our integral becomes: 8π e^w_1∫_0^1 (∑_f=0^∞w_2^fz^f/f!)∑_q=0^∞∑_p+4k=q p,k≥0∑_m=0^k∑_n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m1/(k+1)(k+2)(q-2m-2n+2) ×∑_g=0^p+k∑_r+s+t+h=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j 0≤ h≤ k-m(-1)^k-m-l-hw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)/2^2(k-m)+1(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!(k-m-l)!(k-m-h)!z^2(2k+p-m-g)zdz This is a hideous series, and we would be forgiven for thinking that we should define a new index that matches 2(2k+p-m-g). But q does the job, if more subtly, and so we will retain q and rewrite 2(2k+p-m-g) as 2(q-2k-m-g). Then we can integrate over z and find: 8π e^w_1∫_0^1∑_d=0^∞∑_2q-4k+f=d p+4k=q q,k,p,f≥0∑_m=0^k∑_n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m1/(k+1)(k+2)(q-2m-2n+2) ×∑_g=0^p+k∑_r+s+t+h=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j 0≤ h≤ k-m(-1)^k-m-l-hw_2^fw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)/2^2(k-m)+1f!(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!(k-m-l)!(k-m-h)!z^d-2m-2g+1dz = 8π e^w_1∑_q=0^∞∑_2p+4k+f=q k,p,f≥0∑_m,n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m∑_g=0^p+k∑_r+s+t+h=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j 0≤ h≤ k-m1/(k+1)(k+2)(p+4k-2m-2n+2)(q-2m-2g+2) ×(-1)^k-m-l-hw_2^fw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)/2^2(k-m)+1f!(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!(k-m-l)!(k-m-h)! The second half of our integral becomes: i8π e^w_1∫_0^1 (∑_f=0^∞w_2^fz^f/f!)∑_q = 0^∞∑_p+2k=q p,k≥0∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 01/(2k+3)(2k+5)(q+m+2n+3) ×∑_g=-p-m^∞∑_h-r-s-t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j h≥0w_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-t/(p-r)!r!(j-s)!s!(m-j-t)!t! ×(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_9^2(k-m)+1z^2(p+k+g)+1/2^2(k-m)l!h!Γ(k-m+3/2)Γ(k-m+3/2)zdz Once again, we write the exponent 2(p+k+g)+1 as 2(q-k+g)+1. Then we integrate over z and arrive at: i8π e^w_1∫_0^1∑_d=0^∞∑_f+2q-2k=d p+2k=q p,k,q,f≥0∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 01/(2k+3)(2k+5)(q+m+2n+3) ×∑_g=-p-m^∞∑_h-r-s-t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j h≥0w_2^fw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-t/f!(p-r)!r!(j-s)!s!(m-j-t)!t! 
×(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_9^2(k-m)+1/2^2(k-m)l!h!Γ(k-m+3/2)Γ(k-m+3/2)z^d+2g+2dz =∑_q=0^∞∑_f+2p+2k=q p,k,f≥0∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 0∑_g=-p-m^∞∑_h-r-s-t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j h≥0i8π e^w_1/(2k+3)(2k+5)(p+2k+m+2n+3)(q+2g+3) ×w_2^fw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-t/f!(p-r)!r!(j-s)!s!(m-j-t)!t!(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_9^2(k-m)+1/2^2(k-m)l!h!Γ(k-m+3/2)Γ(k-m+3/2) Thus our integral evaluates to: I = ∑_q=0^∞∑_f+2p+4k=q f,p,k≥0∑_m,n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m∑_g=0^p+k∑_r+s+t+h=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j 0≤ h≤ k-m8π e^w_1/(k+1)(k+2)(p+4k-2m-2n+2)(q-2m-2g+2) ×(-1)^k-m-l-hw_2^fw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)/2^2(k-m)+1f!(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!(k-m-l)!(k-m-h)! +∑_q=0^∞∑_f+2p+2k=q f,p,k≥0∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 0∑_g=-p-m^∞∑_h-r-s-t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j h≥0i8π e^w_1/(2k+3)(2k+5)(p+2k+m+2n+3)(q+2g+3) ×(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_2^fw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)+1/2^2(k-m)f!(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!Γ(k-m+3/2)Γ(k-m+3/2) We now seek to simplify this answer. First, starting from q=0, we list the first few possible combinations for q, f, p, and k for the first term. Our table is as follows: q f p k [0.5ex] 0 0 0 0 1 1 0 0 2 0 1 0 2 2 0 0 3 1 1 0 3 3 0 0 4 0 0 1 4 0 2 0 4 2 1 0 q f p k [0.5ex] 4 4 0 0 5 1 0 1 5 1 2 0 5 3 1 0 5 5 0 0 6 0 1 1 6 0 3 0 6 2 2 0 6 4 1 0 q f p k [0.5ex] 6 6 0 0 7 1 1 1 7 1 3 0 7 3 2 0 7 5 1 0 7 7 0 0 8 0 0 2 8 0 2 1 8 0 4 0 q f p k [0.5ex] 8 2 1 1 8 2 3 0 8 4 0 1 8 4 2 0 8 6 1 0 8 8 0 0 9 1 0 2 9 1 2 1 9 1 4 0 We see that if we hold all the other indices constant and expand the sum over q and f, then we may extract a factor of 1/q-2m-2g+2w_2^f/f! from the terms associated with the set of fixed indices. Since q-f = 2p+4k, we may rewrite this factor as 1/f+2p+2k-2m-2g+2w_2^f/f!. Summing from zero to infinity, we find that: ∑_f=0^∞1/f+2p+4k-2m-2g+2ω_2^f/f! = (-w_2)^-2p-4k+2m+2g-2Γ(2p+4k-2m-2g+2, 0, -w_2) We may repeat this for the second term. We extract a factor of 1/f+2p+2k+2g+3w_2^f/f! and sum it over f from zero to infinity: ∑_f=0^∞1/f+2p+2k+2g+3ω_2^f/f! = (-w_2)^-2p-2k-2g-3Γ(2p+2k+2g+3, 0, -w_2) Then we have: I = ∑_p,k≥0^∞∑_m,n=0^k∑_j+l=n 0≤ j≤ m 0≤ l ≤ k-m∑_g=0^p+k∑_r+s+t+h=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j 0≤ h≤ k-m8π e^w_1Γ(2p+4k-2m-2g+2, 0, -w_2)/(k+1)(k+2)(p+4k-2m-2n+2)(-w_2)^2p+4k-2m-2g+2 ×(-1)^k-m-l-hw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)/2^2(k-m)+1(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!(k-m-l)!(k-m-h)! +∑_p,k≥0^∞∑_m=0^k∑_n=-m^∞∑_l-j = n 0≤ j ≤ m l≥ 0∑_g=-p-m^∞∑_h-r-s-t=g 0≤ r≤ p 0≤ s≤ j 0≤ t≤ m-j h≥0i8π e^w_1Γ(2p+2k+2g+3, 0, -w_2)/(2k+3)(2k+5)(p+2k+m+2n+3)(-w_2)^2p+2k+2g+3 ×(-1)^k-m+l+h(k-m+1/2)^l(k-m+1/2)^hw_3^rw_4^p-rw_5^sw_6^j-sw_7^tw_8^m-j-tw_9^2(k-m)+1/2^2(k-m)(p-r)!r!(j-s)!s!(m-j-t)!t!l!h!Γ(k-m+3/2)Γ(k-m+3/2) JHEP
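The resummations over f performed in the last step rely on the identity ∑_{f≥0} w^f/(f!(f+n)) = (-w)^{-n}Γ(n, 0, -w), with Γ(n, z_0, z_1) the generalized incomplete gamma function. A short numerical check (n and w are test values) is:

```python
# Check of the resummation identity used above:
#   sum_{f>=0} w^f / (f! (f+n)) = (-w)^(-n) * Gamma(n, 0, -w),
# with Gamma(n, z0, z1) = int_{z0}^{z1} t^(n-1) e^(-t) dt (mpmath.gammainc).
import mpmath as mp

mp.mp.dps = 30
n, w = 3, mp.mpf('-0.7')

lhs = mp.nsum(lambda f: w**int(f) / (mp.factorial(f) * (f + n)), [0, mp.inf])
rhs = (-w)**(-n) * mp.gammainc(n, 0, -w)

print(lhs, rhs)   # both ~ 0.19908 for these values
```

The identity follows from writing 1/(f+n) = ∫_0^1 t^{f+n-1} dt and exchanging sum and integral, which is also how the f-sums above were converted into the Γ(…, 0, -w_2) factors.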
BatGPT: A Bidirectional Autoregressive Talker from Generative Pre-trained Transformer
Zuchao Li, Shitou Zhang, Hai Zhao, Yifei Yang, Dongjie Yang (arXiv:2307.00360v1, cs.CL)
BatGPT is a large-scale language model designed and trained jointly by Wuhan University and Shanghai Jiao Tong University. It is capable of generating highly natural and fluent text in response to various types of input, including text prompts, images, and audio. At the modeling level, we employ a bidirectional autoregressive architecture that allows the model to efficiently capture the complex dependencies of natural language, making it highly effective in tasks such as language generation, dialog systems, and question answering. Moreover, the bidirectional autoregressive modeling not only operates from left to right but also from right to left, effectively reducing fixed memory effects and alleviating model hallucinations. On the training side, we propose a novel parameter expansion method for leveraging the pre-training of smaller models and employ reinforcement learning from both AI and human feedback, aimed at improving the model's alignment performance. Overall, these approaches significantly improve the effectiveness of BatGPT, and the model can be utilized for a wide range of natural language applications. § INTRODUCTION In recent years, natural language processing has made remarkable progress, especially in large language model (LLM) pre-training, leading to the development of various language models capable of generating text with high fluency and naturalness. Language models are becoming increasingly impactful and crucial, as they underpin many applications integral to our daily lives. From generating human-like text for content creation to providing recommendations based on contextual understanding, language models serve as the backbone of numerous systems that improve the efficiency and effectiveness of workflows for millions of users. The landscape of modern transformer-based language models has been continuously evolving. Among these models, the Generative Pre-trained Transformers (GPTs) <cit.> have emerged as some of the most prominent due to their ability to efficiently model complex patterns in natural language. GPT and its variants have brought about unprecedented capabilities in generating high-quality text, making strides in fluency, coherence, and generalization. Under the hood, the GPT-like models predominantly rely on learning representations with a causal language modeling objective. They are categorized as unidirectional models as they only utilize context from one direction. In contrast, bidirectional models <cit.> are trained with a denoising pre-training objective, such as masked language modeling, allowing them to utilize context from both directions for prediction. To better align LLMs trained with these objectives to a variety of downstream tasks, several key strategies have been implemented.
T5 <cit.> reformulates different NLP tasks into text-to-text problems, thus facilitating a unified solution for diverse tasks. Further, instruction tuning models like FLAN <cit.> and T0 <cit.> leverage task-specific instructions during pre-training to enhance performance across various tasks. These instructions, integrated directly into the input, enable models to better align with task-specific objectives, leading to improved task completion and cross-task generalization. To close the gap between NLP tasks and human needs, reinforcement learning from human feedback (RLHF) <cit.> has been employed. By providing models with feedback on their generated outputs, they can fine-tune their predictions to better align with human expectations. This iterative learning process enables the model to adapt its output based on the reward signal from the feedback, paving the way for more accurate and contextually relevant language generation. These advancements, in tandem, have contributed to the growing sophistication and capabilities of LLMs, pushing the boundaries of LLMs to unprecedented levels that are closer than ever to human-like performance. Recent iterations of these models have demonstrated exceptional proficiency across a wide range of language tasks, even surpassing human-level performance in several evaluations <cit.>. Despite these advancements, current models still face limitations and challenges. One prominent issue is the "limited memory" effect, where the model's ability to maintain context dwindles with increasing sequence length. Additionally, these models often suffer from hallucinations, where they generate outputs not aligned with the input context, and face difficulties in capturing complex dependencies efficiently. These challenges impose barriers on the utilization of these models for certain high-stakes applications. To overcome these limitations and build upon existing advancements, Wuhan University and Shanghai Jiao Tong University jointly developed and trained BatGPT, a large-scale language model that utilizes a bidirectional autoregressive architecture for language generation, dialog systems, and question answering. In this paper, we present the design, modeling, and training of BatGPT, highlighting our bidirectional autoregressive architecture, novel parameter expansion method, and reinforcement learning approach for improving the model's alignment performance. Our results demonstrate that BatGPT is highly effective and versatile, making it a valuable alternative for various natural language applications. § RELATED WORK A diverse spectrum of pre-trained language models has been developed over the years, categorized mainly into autoregressive <cit.>, autoencoding <cit.>, and encoder-decoder models <cit.>. These models, each with their unique design philosophy and strengths, have collectively pushed forward the boundary of natural language understanding and generation. T5 <cit.> was one of the early models that introduced the paradigm shift of formulating almost all NLP tasks as generation tasks. This approach was subsequently adopted and refined by instruction tuning models such as FLAN <cit.> and T0 <cit.>, which enhanced the performance across various tasks by enriching the pre-training phase with more task-specific instructions. Further, self-instruct <cit.> was proposed to address the limitations of human-written instructions in terms of quantity, diversity, and creativity by enabling models to generate their own instructions via a bootstrapping framework.
Recent models like Alpaca <cit.> and Guanaco [https://guanaco-model.github.io/], which were fine-tuned from LLaMA <cit.> using self-generated instructions, have shown promising performance, validating the effectiveness of self-instruct. Insights into the scaling laws of LLMs have also provided valuable guidance for balancing the trade-off between training cost and model performance. <cit.> showed that increasing model size, data size, and computational resources can lead to improved performance. This scaling law was further affirmed by models like GPT-3 <cit.> and Gopher <cit.>. While the Chinchilla study <cit.> showed that under the same computational budget, scaling down the model size yields uniformly and significantly better performance over models of larger sizes, indicating that most existing LLMs were far from adequately trained. This revised understanding of the scaling laws has guided the development of newer models such as BLOOM <cit.>, GLM-130B <cit.>, and etc. Another key effort to better align model capabilities with human needs is the integration of reinforcement learning from human feedback (RLHF) <cit.>, which enables models to generate more helpful and less harmful output by learning from human feedback. Recent work <cit.> have further replaced human feedback with AI feedback as the reward signal, making it possible to control AI behavior with far fewer human labels. BatGPT builds upon these advancements and aims at addressing some of the persistent limitations in the field. We introduce a bidirectional autoregressive architecture that not only enhances the model's capability in handling complex dependencies but also mitigates the limited memory issue. The novel parameter expansion method employed by BatGPT leverages the knowledge garnered during the pre-training of models of smaller sizes, thus facilitating a significant reduction in time and computational costs. Inspired by RLHF models, BatGPT's reinforcement learning approach further refines the alignment between model outputs and human expectations. In summary, BatGPT addresses the limitations of its predecessors while incorporating their strengths. We hope the unique approach and advanced features of BatGPT can set a new benchmark in the field, not only presenting the potential to contribute to a wide range of natural language applications, but also paving the way for future model development. § BATGPT §.§ Bidirectional Autoregressive Pre-training BatGPT is pretrained using a bidirectional autoregressive language modeling objective, a modification of the traditional autoregressive objective where the model learns to predict the next token based on all previously seen tokens in the sequence, from both the past and future directions. This makes the model capable of capturing dependencies in both the forward and backward context. Let x = (x_1, x_2, ..., x_T) denote a sequence of tokens of length T. The goal of BatGPT during pre-training is to maximize the joint probability of observing the entire sequence. More specifically, the model aims to predict each token x_t given the preceding tokens x_<t or the subsequent tokens x_>t. Given the sequence x, the pretrained BatGPT, denoted as π_ϕ^pretrain, parameterized by ϕ, outputs a distribution over possible tokens at each position from both ends of the sequence: π_ϕ^pretrain(x_t | x_<t) and π_ϕ^pretrain(x_t | x_>t). 
The objective for pre-training can be written as: J^pretrain(ϕ) = 𝔼_x ∼ D^pretrain[ ∑_t=1^Tlogπ_ϕ^pretrain(x_t | x_<t ) + ∑_t=1^Tlogπ_ϕ^pretrain(x_t | x_>t) ], where D^pretrain represents the distribution over the pre-training data and the expectation is taken over all sequences x sampled from D^pretrain. By maximizing J^pretrain, BatGPT learns to capture the intricate linguistic patterns, semantics, and structure inherent in the vast array of data it is trained on, thus yielding more coherent and fluent outputs.. §.§ Instruct Tuning Following the pre-training stage, BatGPT is further refined via instruction tuning. This process utilizes a wealth of prompted data in the form of ⟨ prompt, response ⟩ pairs. These pairs serve as contextualized cues for the model to generate appropriate responses, thereby facilitating the alignment of BatGPT's behavior with human instructions and expectations. Specifically, the prompted data set, denoted as D^inst, consists of sequences (x, y), where x stands for a prompt and y for the corresponding response. The instruction tuning objective then becomes to optimize the following likelihood function: J^inst(ϕ) = 𝔼_(x, y) ∼ D^inst[ logπ_ϕ^inst(y|x) ], where π_ϕ^inst(y|x) represents the probability of generating the response y given the prompt x according to the instruction tuning updated model π_ϕ^inst. In addition to this, BatGPT is further refined using multi-round dialogue data, which takes concatenating conversation history as input and last-round response as output. This focused training strategy is devised specifically to enhance BatGPT's capability in comprehending and maintaining lengthy chat threads. It optimizes BatGPT's ability to preserve conversational context over long conversations, allowing for more coherent, in-depth, and meaningful dialogues. The instruct tuning phase essentially tunes BatGPT's parameters ϕ to better generate responses that align with the given prompts, enabling BatGPT to effectively process and appropriately respond to diverse instructions. §.§ Reinforcement Learning from Human Feedback Reinforcement Learning from Human Feedback (RLHF) forms a crucial component of the BatGPT training pipeline. To make the aligning process more efficient and flexible, BatGPT learns not only from human feedback but also from feedback generated by other AI systems. The underlying objective of RLHF is to optimize a reward model R which is then used to train BatGPT through Proximal Policy Optimization (PPO) <cit.>. The reward model R is trained on collected preference data, which consists of pairs of model-generated responses (y, y') along with a preference d ∈{-1, 0, 1}. Here, d = -1 denotes preference for y, d = 1 denotes preference for y', and d = 0 represents indifference. The collection of human preference data is detailed below. Preference Data Collection In order to train BatGPT via RLHF, it was necessary to gather an ample amount of preference data based on human judgement. To expedite this process, we developed a preference data annotation platform. The front-end page design presented to the annotators is shown in <ref>. On this platform, the annotators are presented with two model outputs, denoted as 𝒜 and ℬ, generated in response to the same instruction. These outputs can either be both from BatGPT or one from BatGPT and one from another LLM. The platform provides the annotator with a predefined set of options to make the task of comparison more systematic and efficient. 
For evaluating the acceptability of the outputs, particularly focusing on potential harmfulness, the annotator is presented with four choices: "𝒜 is acceptable", "𝒜 is unacceptable", "ℬ is acceptable", and "ℬ is unacceptable". In order to gauge the helpfulness of the outputs, the annotator is given another set of four options: "𝒜 is better", "ℬ is better", "both 𝒜 and ℬ are good", and "both 𝒜 and ℬ are not good". The use of these predefined options streamlines the process of evaluating model outputs, reduces ambiguity, and increases the reliability of the feedback gathered, thus ensuring a more robust and effective training process for BatGPT. Following the collection of human feedback, BatGPT employs AI systems to amass additional preference data. This is achieved through specially designed prompt templates where the feedback options align with what is presented to human annotators. The consolidated feedback gathered from both humans and AI creates an extensive preference dataset, which further enhances the depth and diversity of the training pool. RLHF Training The reward model R, parameterized by θ, predicts rewards r(y) and r(y') for each (y, y', d) triplet in the preference dataset D^pref, where r(y) = R(x, y) and r(y') = R(x, y'). The reward model is trained by minimizing the following loss: L^R(θ) = 𝔼_(y, y', d) ∼ D^pref[ max(0, 1 - d · (r_θ(y') - r_θ(y))) ], so that the reward of the preferred response exceeds that of the other response by a margin. Once the reward model is trained, it is used to update the policy parameters ϕ via Proximal Policy Optimization (PPO). In order to maintain the capabilities obtained from the pre-training and instruct tuning stages, corresponding loss terms are incorporated into the objective, resulting in an expanded objective function. ϕ is trained by maximizing the following objective: J^RL(ϕ)= 𝔼_(x, y) ∼ D^RL[r_θ(x, y)-λ_IT log(π_ϕ^RL(y | x) / π_ϕ^inst(y | x))]+ λ_PT 𝔼_x ∼ D^pretrain [log(π_ϕ^RL(x))], where r_θ(x, y) is the reward for a generated response y given a prompt x under the RL-updated policy π_ϕ^RL, and λ_IT and λ_PT are regularization coefficients that keep the RL policy close to the instruct-tuning and pre-training distributions, respectively. This hybridized reinforcement learning approach enables BatGPT to leverage the nuanced understanding of humans and the robust consistency of AI systems. As such, BatGPT can generate more beneficial, better-aligned, and safer outputs, accommodating a wide range of applications. § PERFORMANCE CMMLU <cit.> is a comprehensive benchmark specifically designed to assess language model capability over a variety of Chinese-related subjects. This benchmark evaluates models across diverse categories, including humanities, social sciences, STEM (science, technology, engineering, and mathematics), and other areas. The evaluation results reported in CMMLU are presented in <ref>. BatGPT ranked second among all Chinese-oriented language models, exhibiting promising performance. BatGPT-15B achieved the highest average accuracy in both the STEM and Others categories, with scores of 33.49% and 42.14% respectively. The model demonstrated consistently high performance across the other categories as well: it scored 35.38% in Humanities, 36.31% in Social Science, and 37.00% in China-specific topics. For zero-shot accuracy on the CMMLU STEM subset and the full set, under the context of both direct answer (DA) and chain-of-thought (COT) prompts, BatGPT-15B achieved 33.74% and 34.47% respectively for the STEM category, and 38.48% and 35.91% respectively for the overall set.
Overall, BatGPT's robust performance across various topics and its ability to handle different types of prompts demonstrate its wide-ranging capabilities and its suitability for a broad spectrum of applications. § CONCLUSION In this paper, we introduce BatGPT, a large-scale language model by Wuhan University and Shanghai Jiao Tong University. By leveraging a bidirectional autoregressive architecture, BatGPT addresses the limitations of existing models, such as limited memory and hallucinations, allowing it to capture complex dependencies more efficiently. The training of BatGPT incorporates a novel parameter expansion method that leverages pre-training of smaller models and reinforcement learning from both AI and human feedback. This approach enhances the model's alignment performance and enables it to generate text that is highly fluent, coherent, and contextually relevant. However, it is important to acknowledge that challenges and limitations still exist. Ongoing research and development efforts are essential to further refine and improve the capabilities of language models like BatGPT. Ethical considerations regarding bias, fairness, and responsible use also need to be addressed to ensure the deployment of these models benefits society as a whole. As research and development continue, we can expect further breakthroughs in language understanding and generation, paving the way for even more sophisticated and impactful language models in the future.
http://arxiv.org/abs/2307.02833v1
20230706080256
Applying Process Mining on Scientific Workflows: a Case Study
[ "Zahra Sadeghibogar", "Alessandro Berti", "Marco Pegoraro", "Wil M. P. van der Aalst" ]
cs.DB
[ "cs.DB" ]
Zahra Sadeghibogar et al. Chair of Process and Data Science, RWTH Aachen University, Aachen, Germany {sadeghi, a.berti, pegoraro, wvdaalst}@pads.rwth-aachen.de Applying Process Mining on Scientific Workflows: a Case Study. The authors gratefully acknowledge the German Federal Ministry of Education and Research (BMBF) and the Ministry of Education and Research of North-Rhine Westphalia for supporting this work/project as part of the NHR funding. Also, we thank the Alexander von Humboldt (AvH) Stiftung for supporting our research. Zahra Sadeghibogar (0000-0002-6340-9669), Alessandro Berti (0000-0002-3279-4795), Marco Pegoraro (0000-0002-8997-7517), Wil M.P. van der Aalst (0000-0002-0955-6940) ==================================================================================================================================================================================================================================================================================================================================================================================== Computer-based scientific experiments are becoming increasingly data-intensive. High-Performance Computing (HPC) clusters are ideal for executing large scientific experiment workflows. Executing large scientific workflows in an HPC cluster leads to complex flows of data and control within the system, which are difficult to analyze. This paper presents a case study where process mining is applied to logs extracted from SLURM-based HPC clusters, in order to document the running workflows and find the performance bottlenecks. The challenge lies in correlating the jobs recorded in the system to enable the application of mainstream process mining techniques. Users may submit jobs with explicit or implicit interdependencies, leading to the consideration of different event correlation techniques. We present a log extraction technique for SLURM clusters, together with an experimental evaluation. § INTRODUCTION A workflow is a description and automation of a process, in which data is processed by different logical data processing activities according to a set of rules. A scientific workflow is an ensemble of scientific experiments, described in terms of scientific activities with data dependencies between them <cit.>. Scientific workflows allow scientists to model and express the entirety of data processing steps and their dependencies. Fig. <ref> shows an example of a scientific workflow depicted as a flow chart, where each task is associated with a command. Modern scientific experiments generate and consume a vast amount of data, making scientific workflows increasingly data-intensive. To process this massive data in a reasonable time, scientists need to use parallel processing methods in the cloud or on an HPC cluster <cit.>, which can provide the necessary computing power for heavy tasks. However, executing scientific workflows on HPC clusters is usually complex and time-consuming due to the large data involved <cit.>. This paper aims to employ process mining techniques to improve the understanding of the workflow execution process and identify optimization opportunities. Over the past decades, there has been a growing interest in the field of process mining <cit.>. Process mining aims to extract information about processes from event logs, i.e., execution histories. This paper applies process mining to existing scientific workflows with the following goals: * Documentation of scientific workflows: reporting which commands are executed and in which order.
We pursue this goal by using process discovery techniques, one of the main branches of process mining <cit.>: Process discovery techniques assume that every record in the event log contains at least: (i) a reference to the executed activity, (ii) a reference to the identifier that associates an event to a particular execution of the process, and (iii) the timestamp at which the event has occurred. * Detection of bottlenecks affecting the execution of scientific workflows. We enrich the process model discovered in the previous step with performance results obtained from the extracted event log. While the techniques proposed in this paper can be applied to any workflow system, to promote applicability we focus on the SLURM system governing the RWTH HPC cluster, one of the most widely used platforms in the field. The issue in examining the logs obtained from a given workflow system is the absence of a clearly defined case identifier that groups events associated with the same execution. In order to apply process mining on these logs, it is necessary to study the correlation between tasks that are running on the HPC cluster. Fig. <ref> shows an overall view of our approach. The RWTH HPC cluster is observed periodically, and an input log is generated; then, based on the way the user has executed their jobs on SLURM <cit.>, we propose two different approaches to assign case IDs to events. Finally, we obtain an event log on which process mining techniques can be applied. The remainder of this paper is organized as follows. Sec. <ref> reviews some related works. Sec. <ref> shows some technical notions on how the SLURM system is implemented and which information is available to eventually form an event log. Sec. <ref> explains our approach to apply process mining techniques to the scientific workflows running on SLURM-based HPC clusters. Sec. <ref> introduces some analyses of the event log extracted from the SLURM system of RWTH Aachen University. Finally, Sec. <ref> concludes the paper. § RELATED WORK Many studies have analyzed HPC behavior starting from data collected about the running jobs. In <cit.>, an extension of miniHPC is proposed, to enable job-level monitoring to interpret anomalous behaviors such as load imbalance, CPU and I/O anomalies, or memory leaks. A framework for monitoring, analyzing, and predicting jobs running on PBS-based job scheduler HPCs is defined in <cit.>. The monitoring module captures data about the topology of in-use nodes while a job is running. This provides a deeper understanding of how the job is distributed across the HPC's node network. In <cit.>, a software stack for center-wide and job-aware cluster monitoring to influence node performance is described. Process mining techniques have been used to analyze scientific/business workflow logs. In <cit.>, a technique to mine scientific workflows based on provenance (i.e., the source of information involved in manipulating artifacts) is proposed. In <cit.>, Scientific Workflow Mining as a Service (SWMaaS) is presented to support both intra-cloud and inter-cloud scientific workflow mining. A limitation of <cit.> is that they examine the jobs regardless of their interdependencies. Moreover, in <cit.>, it is assumed that the data source already contains all the necessary information to apply process mining, ignoring the situations in which no case notion is defined. This paper aims to introduce event correlation methods applicable to event data extracted from scientific workflows.
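As a small illustration of the minimum event log requirements listed above, the following sketch builds a toy event log in Python/pandas with the three mandatory attributes; the column names and records are assumptions for illustration and do not reflect the actual RWTH log schema.

```python
import pandas as pd

# Minimal event log: every event needs an activity, a case identifier,
# and a timestamp (requirements (i)-(iii) above).
events = pd.DataFrame(
    [
        {"case_id": "JID4321", "activity": "pre-processing", "timestamp": "2023-01-10 09:00"},
        {"case_id": "JID4321", "activity": "parallel-job1",  "timestamp": "2023-01-10 09:15"},
        {"case_id": "JID4321", "activity": "parallel-job2",  "timestamp": "2023-01-10 09:16"},
        {"case_id": "JID4321", "activity": "merge",          "timestamp": "2023-01-10 09:40"},
    ]
)
events["timestamp"] = pd.to_datetime(events["timestamp"])

# Ordering events per case yields the traces that discovery algorithms consume.
traces = (
    events.sort_values("timestamp")
    .groupby("case_id")["activity"]
    .apply(list)
)
print(traces)
```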
§ PRELIMINARIES We will focus on analyzing event data from the popular SLURM platform for HPC computing. Hence, in this section, we present some technical notions. To interact with SLURM, we have a set of possible commands. The most essential commands are listed here <cit.>: * srun: runs a single job. We need to create a script, which can then be submitted on SLURM for real-time execution. * sbatch: submits one or more commands for later execution on SLURM. * squeue: reports the states of the running jobs. This command helps us to extract a log for process mining purposes. §.§ Execution of a Single Job on SLURM Any script runs on SLURM as a job. As mentioned above, the execution of a job on SLURM can easily be done with srun and a script containing one single command. Understanding the sequential stages a job undergoes for execution, and the data that can be extracted for each job running in the SLURM queue, is valuable <cit.>. Typically, jobs pass through several states in the course of their execution. There are a total of 24 possible states for a job, of which three can be seen in Fig. <ref> <cit.>. * PENDING (PD): the job is waiting for resource allocation. * RUNNING (R): the job is currently allocated. * COMPLETING (CG): the job is in the process of completion. The SLURM scheduling queue contains all the information about running jobs. To view this information we use the squeue command. The most important features of the jobs that have been used in our study are listed in Table <ref>. These features can be extracted with the squeue command on the SLURM system. This command shows the list of jobs in the SLURM scheduling queue along with their account, job ID, declared dependency, executed command, status, and project ID information <cit.>. §.§ Execution of a Sequence of Jobs on SLURM To explain how to run a series of jobs (sequence of scripts) on the SLURM workflow system, we will go through an example. Consider a user who wants to run four scripts on SLURM, pre-processing as the first one, then parallel-job1 and parallel-job2, which can be executed in parallel but must be executed after the pre-processing script (because they need the output from pre-processing). Finally, the merge script needs the output of the two parallel jobs for its execution. The user can run the sequences of jobs on SLURM in two ways: either manually (without explicit interdependencies) or automatically (with interdependencies). Execution of a Sequence of Jobs without Explicit Interdependencies: In this case, the user runs the jobs manually, without declaring the inter-dependencies between jobs, and after submitting each job waits for its execution to be completed; then, executes the next job (Fig. <ref>). In this case, each job is executed independently, and only the user knows that some of these jobs are logically dependent on each other. Execution of a Sequence of Jobs with Explicit Interdependencies: In this scenario, the user uses the SLURM dependency management system and submits all jobs at once with correct inter-dependencies on the SLURM system, as shown in Fig. <ref>. Here, the user uses the sbatch command. This command is used to submit a job script for later execution, declaring dependencies with the --dependency option. The script typically contains one or more srun commands to launch parallel tasks. In this case, the user does not need to wait for the outputs of a single job, but can wait for the execution of all the tasks and retrieve the final results at completion (Fig. <ref>). Fig.
<ref> shows the output of the squeue command where the user runs jobs manually, and where the DEPENDENCY column in the PENDING state has no values. Fig. <ref> shows the output of the squeue command where the user has declared explicit interdependencies between jobs. As one may see, the DEPENDENCY column in the PENDING state has a non-empty value. § APPROACH The input of most process mining algorithms is an event log, which contains at least a case, an activity, and a timestamp as attributes for each event. The majority of algorithms presume that the event data is fully accessible and has a clearly defined case notion. However, we cannot assume that we know the complete historical log, because of privacy issues and the required administrative privileges on the target workflow system. Instead, we aim to observe it for a limited amount of time, avoiding the aforementioned issues, as described in Sec. <ref>. Moreover, since the SLURM log does not contain any explicit case notion, in Sec. <ref> we describe event correlation to assign a case to the different events and allow for process mining analyses. §.§ Register SLURM events In order to extract an event log from the system, we perform the following operations periodically (we refer to this as observing the system): * Connect to the access node of the SLURM system. * Observe the status (e.g., PENDING, RUNNING, COMPLETING) of the current jobs using the SLURM squeue command. * For each of the listed jobs (rows of the log file), one of the following situations occurs: * The JOBID is new: register an event related to the creation of the job. * The JOBID already exists, but the status has changed: register an event related to the status change. * The JOBID already exists, and the status has not changed: do nothing. All the features mentioned in Table <ref> are recorded for each job. Our log also has a TIMESTAMP column that marks the time of observation of the event, and the COMMAND values are mapped from the executed file path to the executed file name (filtering on the last part of the path). §.§ Event Correlation Let us now obtain case IDs from SLURM. We extract a case ID with different techniques, depending on whether the jobs were executed with or without explicit interdependencies. Case ID Extraction with Explicit Interdependencies: We utilize this technique when the user has specified the inter-dependencies among jobs. This declaration allows the inclusion of the DEPENDENCY column in the extracted log, indicating the jobs on which the current job depends. Note that the DEPENDENCY column for the job lists only the dependencies that have not been completed yet. Thus, the DEPENDENCY list would be naturally empty for a job that is in the RUNNING state. To implement this method, a Directed Acyclic Graph (DAG) is generated for each chain of connected jobs in the PENDING state by utilizing the JOBID and DEPENDENCY columns. The vertices are job IDs and the edges connect dependent job IDs; a unique case ID is then assigned to all of the connected job IDs, as shown in Fig. <ref>. As shown in the table of Fig. <ref>, JID2 and JID3 are dependent on JID1, and JID4 is dependent on JID2 and JID3. This connection is exploited to assign case ID JID4321 (Fig. <ref>, green column of Fig. <ref>). Different cases will be assigned to different discovered connected components. For instance, JID111098 is assigned to another execution of the same chain of commands as JID4321.
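A possible implementation of this correlation step is sketched below using networkx; the DataFrame layout and the case-naming scheme are assumptions for illustration, while the grouping of jobs into weakly connected components of the dependency DAG follows the description above.

```python
import networkx as nx
import pandas as pd

def assign_cases_with_dependencies(jobs: pd.DataFrame) -> pd.Series:
    """Map each JOBID to a case ID shared by all jobs connected via DEPENDENCY.

    `jobs` is assumed to have a 'JOBID' column and a 'DEPENDENCY' column
    holding a (possibly empty) list of job IDs the job depends on.
    """
    graph = nx.DiGraph()
    graph.add_nodes_from(jobs["JOBID"])
    for _, row in jobs.iterrows():
        for dep in row["DEPENDENCY"]:
            graph.add_edge(dep, row["JOBID"])  # edge: prerequisite -> dependent job

    case_of = {}
    # Each weakly connected component of the dependency DAG becomes one case.
    for component in nx.weakly_connected_components(graph):
        case_id = "CASE_" + "_".join(sorted(component))
        for job_id in component:
            case_of[job_id] = case_id
    return jobs["JOBID"].map(case_of)

# Toy example mirroring the figure: JID2 and JID3 depend on JID1, JID4 on both.
jobs = pd.DataFrame({
    "JOBID": ["JID1", "JID2", "JID3", "JID4"],
    "DEPENDENCY": [[], ["JID1"], ["JID1"], ["JID2", "JID3"]],
})
print(assign_cases_with_dependencies(jobs))
```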
Case ID Extraction without Explicit Interdependencies: In this case, we do not have explicitly defined job dependencies; therefore, we need to use the attributes at the event level to determine correlations and dependencies between the jobs. We use a combination of the following two attributes in order to define the case identifier: * The account executing the job: it is reported as ACCOUNT in SLURM. * The group of the given job: the project ID is intended to group the jobs belonging to the same project. The value is empty or set to a default if the job is not submitted with a project ID. The project ID is reported as GROUP in SLURM. So, whenever we execute the same scripts several times, the project ID is reported as the same GROUP in SLURM. We have a many-to-many relationship between accounts and groups. All the jobs executed by an account under a given group are therefore related to the same project. We can use ACCOUNT-GROUP as case ID; in this case, we are certain that the jobs executed by the same user under the same project are all collected. In this technique, we generate a unique case ID for each unique combination of ACCOUNT and GROUP, as shown in Fig. <ref>. The parallel relationship between Parallel-job1 and Parallel-job2 has been discovered based on their occurrence in rows 2, 4 and 8, 10, which show they can be executed in any order. In this method, we may consider only the account instead of considering the combination of group and account, but the advantage of considering also the group is that the control flow of different projects of the same account is not combined. This technique also has limitations, including the introduction of loops over commands, because we do not have a way to recognize that two consecutive executions of the same command are related to different experiments. As a result, the precision is significantly reduced, because many different behaviors and command sequences are allowed by the resulting model. § EXPERIMENTS The HPC Monitoring Cockpit was applied to the SLURM system of RWTH Aachen University repeatedly, and an event log was obtained[RWTH HPC cluster: <https://help.itc.rwth-aachen.de/service/rhr4fjjutttf/>. A sample event log can be downloaded at the address <https://www.ocpm.info/hpc_log.csv>.]. Some statistics about the considered event log are contained in Table <ref>. As the different accounts belong to different research areas (including physics, chemistry, biology, and computer science) and executed purpose-specific scripts, we could not produce models containing the flow of execution for all the accounts. Instead, we focus on the process models that we can extract for a single account. These process models show the scientific workflows executed by a single user/research group. Moreover, the process model is annotated with performance information on the arcs, allowing for the detection of paths with high execution time (bottlenecks), and therefore fulfilling the second goal of finding root causes of performance problems. To highlight different execution paradigms, we focus on two accounts: * jara0180: contains computations performed on a funded research project (Molecular dynamics simulations of P2X receptors). * thes1331: contains scientific experiments performed for an MSc thesis. The executions carried out by jara0180 take advantage of explicit interdependencies (since HPC expertise is involved). Therefore, for jara0180 we were able to develop a meaningful process model, as depicted in Fig. <ref>.
In this figure, we observe two distinct chains of commands. The first chain comprises commands from jobscript_0000 to jobscript_0005, while the second chain includes the remaining commands related to two different projects and corresponding to 30 distinct cases in the event log. We could still obtain a model from the data (contained in Fig. <ref>) without considering these interdependencies (considering the ACCOUNT and GROUP values leads to two distinct cases). However, this model is less precise because it relies solely on the temporal order of command execution, where every event belongs to the same case in the event log. The executions performed by thes1331 are defined without explicit interdependencies. The case extraction approach relying on explicit interdependencies therefore leads to the assignment of a unique case ID to each execution (seven distinct cases). Consequently, the model depicted in Fig. <ref> exhibits concurrency among all the executed commands, rendering it highly imprecise. For thes1331, it is more appropriate to focus on the models discovered without considering the explicit interdependencies (contained in Fig. <ref>), which show the temporal order of execution of the commands. The discovered models provide valuable insights to users by visualizing the control flow and execution frequency, enabling them to identify bottlenecks and make informed decisions for further improvements. Based on Table <ref>, a mere fraction of HPC users submits their jobs on HPC clusters with explicit interdependencies. Such interdependencies are crucial for identifying connected jobs. Without them, the resulting models are imprecise due to the following scenarios: either every event belongs to a different case, or every event belongs to the same case. § CONCLUSION In this paper, we propose an approach to extract and analyze process mining event logs of an HPC system (in particular, we focus on the SLURM system). While this is not the first application of process mining to HPC systems, existing techniques assume the case notion to be well-defined in the data source. This assumption is not satisfied by mainstream systems, and we propose two different case notions (using and not using explicit interdependencies). Moreover, we propose the HPC Monitoring Cockpit as a tool to connect to the HPC system, extract an event log, and perform a process mining analysis. The analyses allow us to document the execution of scientific workflows for different accounts or research groups utilizing process models that are annotated with performance information (allowing us to detect bottlenecks). Therefore, we can respond to our initial research questions by using process mining techniques. Our event logs are extracted from information that is publicly available in the SLURM system (including the command that is executed and the requested environment, i.e., the number of CPUs, RAM, and disk space required). However, we do not know the detailed content of the commands or have access to more advanced profiling options. This would require collaboration with the specific research groups operating in the HPC systems and availability to modify the execution of scientific workflows to accommodate more detailed process mining analyses. Our process mining analyses rely on a single account or research group.
Since the naming schema of the commands is quite arbitrary, we could not identify shared logical steps (e.g., pre-processing, training of ML model, testing of the model) between different accounts; therefore, we could not produce a generic process model. This is indeed a limitation that could not be tackled without properly naming the commands executed on SLURM and without having insights about the commands. Overall, our approach succeeds in extracting an event log for process mining purposes from the SLURM HPC system, and we can respond to our basic analytical goals. However, given the arbitrary execution styles and naming conventions, we could not produce more general analyses, which remain as a goal for future work.
http://arxiv.org/abs/2307.02637v1
20230705201301
Surge Routing: Event-informed Multiagent Reinforcement Learning for Autonomous Rideshare
[ "Daniel Garces", "Stephanie Gil" ]
cs.AI
[ "cs.AI", "cs.MA", "cs.RO" ]
Large events such as conferences, concerts, and sports games often cause surges in demand for ride services that are not captured in average demand patterns, posing unique challenges for routing algorithms. We propose a learning framework for an autonomous fleet of taxis that scrapes event data from the internet to predict and adapt to surges in demand and generates cooperative routing and pickup policies that service a higher number of requests than other routing protocols. We achieve this through a combination of (i) an event processing framework that scrapes the internet for event information and generates dense vector representations that can be used as input features for a neural network that predicts demand; (ii) a two-neural-network system that predicts hourly demand over the entire map, using these dense vector representations; (iii) a probabilistic approach that leverages locale occupancy schedules to map publicly available demand data over sectors to discretized street intersections; and finally, (iv) a scalable model-based reinforcement learning framework that uses the predicted demand over intersections to anticipate surges and route taxis using one-agent-at-a-time rollout with limited sampling certainty equivalence. We learn routing and pickup policies using real NYC ride share data for 2022 and information for more than 2000 events across 300 unique venues in Manhattan. We test our approach with a fleet of 100 taxis on a map with 38 different sectors (2235 street intersections). Our experimental results demonstrate that our method obtains routing policies that service 6 more requests on average per minute (around 360 more requests per hour) than other model-based RL frameworks and other classical algorithms in operations research when dealing with surge demand conditions. Large events such as conferences, concerts, and sports games often create surges in demand for rideshare and taxi services. The future availability of autonomous fleets of taxis to service such event-related demand surges over large urban environments is a compelling motivation both for industry and urban planning efforts <cit.>. However, the autonomous servicing of these requests would require a highly adaptive routing protocol that can accurately anticipate large swings in demand and preemptively coordinate the routes of entire fleets of taxis to service this demand. Unfortunately, most routing algorithms fail to adapt to these demand surges, either because they do not consider potential future requests and thus do not plan ahead <cit.>, or because they plan ahead using demand models that are based on averaged or short-term history and hence do not account for point disturbances associated with event-related surges <cit.>. During surge times, routing plans that use inaccurate demand information can be worse than using no planning at all (see Fig. <ref>), and even a slight degradation in performance, say failing to pick up 2 requests per minute, will lead to a large cumulative opportunity loss (120 requests per hour). The plethora of digital data freely available on the internet in the form of event reviews, event descriptions, and venue occupancy information holds promise for developing event-informed demand prediction mechanisms <cit.>.
Recent advances in Natural Language Processing allow for automatically scraping and parsing these large volumes of textual data, and recent advances in reinforcement learning allow for online planning for multiagent problems. [Figure: Motivating example.] Ideally, it would be possible to bring these advances to bear on this problem, using event-related information from the internet to inform a learning protocol that autonomously routes a fleet of taxis to service surge-time demand. However, major challenges to achieve this include: i) adequately capturing information from the internet such that it is aggregate, and can thus incorporate an arbitrary number of events and venues over a city, but also sufficiently expressive to capture surge demand swings, and ii) deriving a multiagent routing algorithm that is agile enough to adapt to big changes in demand and is scalable to large city-sized state spaces. In this paper, we address these challenges by deriving a multiagent routing framework that learns routes from event-informed hourly demand predictions. Our framework captures event information by leveraging sentence embeddings <cit.> generated from a pre-trained Masked Language Model (MLM) <cit.>. We build off spectral clustering <cit.> and graph summarization techniques <cit.> to aggregate sentence embeddings for events in an unsupervised manner and produce aggregate vector representations that can be used as input features to neural networks (NNs) that predict demand. Our choice of representation allows our system to obtain semantically representative embeddings that contain information about multiple events for each individual sector. This allows us to consider an arbitrary number of events, in contrast to previous work on demand prediction <cit.>. To map the demand predictions over sectors to discretized intersections that can be used in routing algorithms, we leverage occupancy schedules <cit.>. Finally, we address the multiagent routing problem over predicted surge demand by building off of one-agent-at-a-time rollout <cit.> with a limited sampling certainty equivalence <cit.> approximation. Here, a major challenge is to compute the expected cost over the future demand uncertainty. Taking into account the effect of this uncertainty over an entire city map is too expensive, and thus we derive a "limited sampling" certainty equivalence approach, whereby our method takes into account local area demand information, using a deterministic representation as with classical certainty equivalence <cit.> combined with Bernoulli random variables, to capture the demand distribution in areas with a smaller demand per minute (no events) than other parts of the map (with events). This leads to a high fidelity representation of both high and low demand areas of the map. Ultimately, this allows us to achieve feasible computation times while anticipating demand surges caused by events, and coordinating multiple taxi routes to successfully capture and service this demand. As a result, our framework learns routing policies that service 6 more requests per minute on average (360 more requests per hour on average) than other routing frameworks over mid and lower Manhattan. Our system considers an environment with 2235 intersections, 100 agents, and 400 to 1000 requests per hour. A graphical depiction of the overall architecture of our system is shown in Fig. <ref>.
The main contributions of this work are as follows: (1) We develop a novel event processing procedure that uses sentence embeddings from a pre-trained MLM<cit.>, spectral clustering <cit.>, and cluster averaging to generate representative event embeddings for each sector of the city, enabling our system to utilize event information on the demand prediction task. Our method can handle an arbitrary number of events and venues in the demand prediction task, addressing the limitations of previous work on demand prediction <cit.>. (2) We propose a novel two NN prediction system that predicts hourly demand for all sectors in the map by integrating event information in the prediction process. (3) We propose a novel probabilistic mechanism that uses occupancy schedules to map demand over sectors of the city to a fixed number of individual street intersections, enabling us to use publicly available data for ride services in our intersection level routing framework. (4) We propose an improved model-based RL routing framework that combines one-agent-at-a-time rollout <cit.>, limited sampling certainty equivalence <cit.>, and our predictive scheme to learn routing policies that can adapt to demand surges caused by events. (5) We empirically demonstrate that our method leads to 6 more requests being serviced per minute on average (360 more requests per hour on average) than other model-based RL frameworks <cit.> and classical algorithms in operations research (OR) <cit.>. We test our approach using real NYC's Taxi and Limousine High-Volume For-Hire-Vehicle (HV-FHV) data <cit.> for a region of Manhattan with 2235 intersections and a fleet of 100 vehicles. § RELATED WORKS Since our approach deals with the problem of dynamic vehicle routing by using text processing and clustering, combined with demand prediction and occupancy studies, we provide a short comparison of other methods <cit.> that tackle dynamic vehicle routing, but fail to anticipate demand surges. We also provide a comparison between demand prediction techniques in the literature <cit.> and our proposed demand prediction module. We introduce related literature that we built upon for text processing and occupancy studies. For this reason, we cover related works in four sections: Dynamic Vehicle Routing, text processing and clustering, demand prediction, and occupancy studies. Dynamic Vehicle Routing This problem has been widely studied in the literature. Earlier works tried tackling this problem using instantaneous assignment approaches <cit.>, and routing heuristics, such as 2-opt <cit.>, local search <cit.>, and genetic algorithms <cit.>. Instantaneous assignment approaches, however, produce myopic policies as they do not consider potential future requests. Sampling-based stochastic optimization methods <cit.> try to solve this issue, but at the expense of long computation times due to the multistep planning objective and the large state space. To improve computation times, several authors considered offline trained approximations, like approximate value iteration<cit.>, Deep Q-learning <cit.>, Policy approximation <cit.>, and transformer-based architectures <cit.>. Offline learning methods tend to be computationally faster at inference time, but they fail to generalize to unknown scenarios not represented in the training data, and they do not scale well as the state space becomes larger. This makes them infeasible for deployment in real urban environments, where the state space is very large and demand can suddenly change. 
To try to address this issue, other authors have considered online optimization methods, like Monte Carlo Tree Search (MCTS) <cit.>, DESPOT <cit.>, and Multiple Scenario Approach (MSA) <cit.>, and hybrid approaches <cit.> that combine offline trained methods with online optimization. These methods require an accurate estimation of the current demand for the service, which is not a trivial task. Our approach aims to solve the issue of estimating the current demand by using a predictive scheme based on environmental data and event information, and integrating it into an improved model-based RL multivehicle routing framework. Text processing and Unsupervised clustering for Summarization Sentences can be encoded into vectors using different techniques. Some authors proposed word embeddings, such as word2vec <cit.> and GloVe <cit.>, and their aggregated representations <cit.>. These methods, however, do not consider relations between words and therefore are not suitable for capturing complex semantic relations. Other authors have looked at how to obtain sentence embeddings that capture semantic and inter-word relations <cit.>. These word and sentence embeddings can then be used for unsupervised clustering and aggregation. Multiple works in the literature have considered transforming the latent space of the embeddings by constructing a graph based on similarity between embeddings <cit.>, using metrics like cosine similarity, euclidean distance, and radial basis functions. Text is then summarized by following leader-follower algorithms <cit.>, or analyzing the spectral properties of the graph <cit.> <cit.>. For our approach, we build on top of this work by considering a MLM <cit.> to generate dense sentence embeddings and then performing spectral clustering based on the algorithm proposed in <cit.>. We use a simple averaging technique for aggregation of the embeddings based on their cluster label, as this will allow us to combine multiple embeddings instead of choosing a single embedding from a cluster. Demand prediction Predicting demand for transportation services has been a widely studied topic in the literature. Some authors consider time series analysis techniques, including uncertainty analysis <cit.>, and autoregressive integrated moving average (ARIMA) <cit.>. These approaches, however, rely on the data being stationary or following predictable seasonal changes that can easily be removed by seasonal-differencing. Demand for transportation services, however, tends to be highly dynamic and it is affected by external events like concerts and traffic accidents. This situation prevents time series analysis methods from performing as expected when applied to taxi demand data. To address this shortcoming, various authors have considered learning approaches, including convolutional Neural Networks (CNNs) <cit.>, Recurrent Neural Networks (RNNs) <cit.>, and transformer-based architectures <cit.>, to predict demand based on spatial and temporal features (e.g. time of day, weather conditions, etc.). None of these methods use any kind of information about events or a measure of concentration of potential passengers, and hence fail to generalize when the demand changes radically because of an event (e.g. concert, festival, parade, etc). Other authors also look at predicting demand using a combination of spatial and temporal features, as well as information about events <cit.>. 
These methods, however, only consider a limited number of venues over non-overlapping regions of the map, which means that they fail to generalize to more realistic environments where there are multiple venues sharing the same region of the map. Moreover, these works either only look at predicting the location of potential hotspots for demand without explicitly predicting the number of requests that will enter the system <cit.>, or they only predict demand around a limited number of venues by casting the demand prediction as a time series forecasting task <cit.>. In this paper, we aim to address these limitations by considering an event-informed demand prediction system that predicts the potential number of requests that will enter the ride service system at a given hour, considering an arbitrary number of venues. [Figure: Our routing environment.] Occupancy studies Occupancy schedules are numerical descriptions of how occupied a building is at different hours of the day, for different days of the week. These types of schedules have been widely used to model energy consumption for different types of buildings <cit.>. There exist multiple schedules available online, including the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1 standard <cit.>, and the ones provided by COMNET <cit.>. These schedules can be empirically verified using Google's popular times data, which uses quasi-real-time data from GPS signals to determine the occupancy of a commercial building. We do not consider Google's popular times data as the source for occupancy information for this project, as accessing it in bulk constitutes a violation of the terms of service for Google Maps. Instead, we use COMNET's schedules, and create a modular interface and formulation that can be easily adapted to use Google's data (or any other real-time data) if it ever becomes available. As far as we know, our work is the first to use occupancy schedules to model concentrations of customers and probabilistically assign demand to a fixed number of intersections based on the estimated concentration of customers around those intersections. § PROBLEM FORMULATION In this section, we present the formulation for our multiagent taxicab routing problem, casting it as a discrete time, finite horizon, stochastic Dynamic Programming (DP) problem. We define the environment, requests, state and control space, and our problem of interest. Environment We assume that taxicabs are deployed in an urban environment with a fixed street topology (see Fig. <ref>). The environment is expressed as a graph G=(V, E), where V={ 1, …, n} corresponds to the set of intersections in the map numbered 1 through n, while E ⊆{ (i,j) | i, j ∈ V} corresponds to the set of directed streets that connect intersections i and j in the map. The set of adjacent intersections to intersection i is denoted as 𝒩_i = { j | j∈ V, (i,j) ∈ E}. We also assume that the environment is divided into sectors s_k ⊆ V, such that V = ⋃_k s_k and s_k ∩ s_h = ∅, ∀ k ≠ h. We denote the set of all sectors in the map as the set S, where S is then a set of sets. We define 𝒜(s_k) ⊆ S as the set of all the sectors adjacent to sector s_k. We define ℐ_s_k⊆ s_k as the set of intersections in sector s_k that can be used as pickup or drop-off locations, following local regulations. We denote the set of intersections that can be used for pickup or drop-off of requests over the entire map as ℐ_V = ⋃_s_k ∈ Sℐ_s_k.
Requests We define a request r for the ride service as a tuple r = < ρ_r, δ_r, t_r, ϕ_r >, where ρ_r ∈ℐ_V and δ_r ∈ℐ_V correspond to the nearest available intersections to the request's desired pickup and drop-off locations, respectively; t_r corresponds to the time at which the request entered the system; and ϕ_r ∈{ 0, 1} is an indicator, such that ϕ_r = 1 if the request has been picked up by a vehicle, ϕ_r = 0 otherwise. We model the number of new pickup requests for a specific sector s_k as a random variable η_s_k with an unknown underlying distribution p_η_s_k. Its estimation is denoted as p̃_η_s_k. We denote the realization of η_s_k at time t as η_s_k(t). We denote the set of new pickup requests at time t for a specific sector s_k as 𝐫_𝐬_𝐤, 𝐭 = { r | ρ_r ∈ℐ_s_k, t_r = t}, such that |𝐫_𝐬_𝐤, 𝐭| = η_s_k(t). We model the exact pickup intersection for an arbitrary request given that the request originated at sector s_k as the random variable ρ_s_k with support ℐ_s_k. Similarly, we model the exact drop-off intersection for an arbitrary request given that the request will be dropped off at sector s_k as the random variable δ_s_k with support ℐ_s_k. Both of these random variables have unknown underlying distributions that we denote as p_ρ_s_k and p_δ_s_k, respectively. We also denote their estimated probability distributions as p̃_ρ_s_k and p̃_δ_s_k, respectively. We model the drop-off sector for an arbitrary request given that the request has pickup sector s_k as the random variable β_s_k with an unknown probability distribution p_β_s_k. We denote its estimated probability distribution as p̃_β_s_k. We denote the demand model for a given sector s_k as the set of random variables D_s_k = {η_s_k, ρ_s_k, δ_s_k, β_s_k}. We denote the global demand model as 𝒟 = ⋃_s_k ∈ S D_s_k. We define 𝐫_𝐬_𝐤, 𝐭 = { r | r ∈𝐫_𝐬_𝐤, 𝐭', ϕ_r=0, t' = 1, … t } as the set of outstanding pickup requests for sector s_k that have not been picked up by any taxi till time t. We denote the set of all outstanding pickup requests at time t as 𝐫_𝐭 = ⋃_s_k ∈ S𝐫_(𝐬_𝐤, 𝐭). State representation and control space We assume there is a total of m agents and all agents can perfectly observe all available requests, and all other agents' locations and occupancy status. We represent the state of the system at time t as a tuple x_t = < ν⃗_t, τ⃗_t, 𝐫_𝐭>. ν⃗_t = [ν_t^1, …, ν_t^m] corresponds to the list of locations for all m agents at time t, where ν_t^ℓ∈ V corresponds to the closest intersection to the geographical position of agent ℓ. τ⃗_t = [τ_t^1, …, τ_t^m] is the list of time remaining in current trip for all m agents. If agent ℓ is available, then it has not picked up a request and hence τ_t^ℓ = 0, otherwise τ_t^ℓ∈ℕ^+. We denote the control space for agent ℓ at time t as 𝐔_t^ℓ(x_t). If the agent is available (i.e. τ_t^ℓ = 0), then 𝐔_t^ℓ(x_t) = {𝒩_ν_t^ℓ, ν_t^ℓ, ψ}, where ψ corresponds to a special pickup control that becomes available if there is a request r ∈𝐫_𝐭 at the location of agent ℓ such that ρ_r = ν_t^ℓ. If the agent is currently servicing a request r (i.e. τ_t^ℓ > 0), then 𝐔_t^ℓ(x_t) = {ζ}, where ζ corresponds to the next hop in Dijkstra's shortest path between agent ℓ's current location ν_t^ℓ and the destination of the request δ_r. We denote all possible controls for all m agents at state x_t as 𝐔_t (x_t) = 𝐔_t^1(x_t) ×…×𝐔_t^m(x_t).
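As an illustration of the control space defined above, the following sketch enumerates U_t^ℓ(x_t) for a single agent on a toy street graph using networkx; the graph, the request bookkeeping, and the function interface are illustrative assumptions rather than our actual implementation.

```python
import networkx as nx

def agent_controls(G, agent_node, busy_time, destination, pickup_nodes):
    """Enumerate the control set U_t^l(x_t) for one agent.

    G is a directed street graph; `pickup_nodes` is the set of intersections
    with an outstanding request; `destination` is the drop-off intersection of
    the request currently being serviced (None if the agent is available).
    """
    if busy_time > 0:
        # Occupied agent: single control, the next hop on the shortest path
        # to the request's drop-off intersection.
        path = nx.shortest_path(G, agent_node, destination)
        return [path[1] if len(path) > 1 else agent_node]

    # Available agent: move to a neighboring intersection, stay in place,
    # or pick up a request co-located with the agent.
    controls = list(G.successors(agent_node)) + [agent_node]
    if agent_node in pickup_nodes:
        controls.append("pickup")
    return controls

# Toy 4-intersection street graph.
G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (4, 1), (2, 1)])
print(agent_controls(G, agent_node=2, busy_time=0, destination=None, pickup_nodes={2}))
print(agent_controls(G, agent_node=1, busy_time=3, destination=4, pickup_nodes=set()))
```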
Problem We are interested in learning a cooperative pickup and routing policy that minimizes the number of outstanding requests over a finite time horizon of length N, leveraging event information to better address demand surges. We denote the state transition function as f, such that x_t+1 = f_t(x_t, u_t, 𝒟), where x_t+1 is the resulting state after control u_t ∈𝐔_t (x_t) has been applied from state x_t considering the realizations for all random variables in 𝒟. We define the stage cost g_t(x_t, u_t, 𝒟) = |𝐫_𝐭| as the number of outstanding requests over the entire map at time t. We define a policy π = {μ_1, …μ_N} as a set of functions that maps state x_t into control u_t = μ_t(x_t) ∈𝐔_t (x_t), with its cost given by J_π(x_1) = E [g_N(x_N) + ∑_t=1^N-1 g_t(x_t, μ_t(x_t), 𝒟) ], where g_N(x_N) = |𝐫_𝐍| is the terminal cost. We consider policy improvement schemes, such as rollout <cit.>, that allow us to obtain a lower cost policy by improving upon a base policy. We define a base policy π̅ = {μ̅_1, …μ̅_N } as an easy-to-compute heuristic that is given. Our objective then becomes the problem of finding an approximate policy π̃ = {μ̃_1, …μ̃_N }, such that given base policy π̅, we find the minimizing action for state x_t at time t by solving: μ̃_t(x_t) ∈ arg min_u_t ∈𝐔_t (x_t) E_𝒟[ g_t(x_t, u_t, 𝒟) + J̃_π̅,t(x_t+1) ], where J̃_π̅,t(x_t+1) = Ĵ_T + ∑_t' = (t+1)^T g_t'(x_t', μ̅_t'(x_t'), 𝒟) is a cost approximation derived from applying the base policy π̅ for H time steps from state x_t+1 with a terminal cost approximation Ĵ_T = |𝐫_𝐓|, where T=(t+1) + H. To integrate this formulation into our specific application, we must solve two problems. Problem 1: We wish to estimate the probability distributions of the random variables that specify the demand model 𝒟 to obtain 𝒟̃ = ⋃_s_k ∈ S{η̃_s_k, ρ̃_s_k, δ̃_s_k, β̃_s_k} with underlying distributions p̃_η_s_k, p̃_ρ_s_k, p̃_δ_s_k, and p̃_β_s_k that capture demand surges produced by events. Problem 2: We aim to reduce the computational cost of approximating the expectation in Eq. <ref>, for the multiagent case, using the surge data in 𝒟̃. § OUR APPROACH To solve Problem 1 in Sec. <ref>, we need to derive estimates p̃_η_s_k, p̃_ρ_s_k, p̃_δ_s_k, and p̃_β_s_k. To estimate p̃_η_s_k, we developed a novel event-informed demand estimation procedure that leverages sentence embeddings <cit.> and spectral clustering techniques <cit.> to generate vector representations for events. We propose a novel two-NN demand prediction scheme that predicts the number of requests that will enter the system at each sector s_k ∈ S using these vector representations. Finally, we build off of the idea of Certainty Equivalence <cit.> to propose a novel way of estimating p̃_η_s_k by uniformly distributing the predicted number of requests for sector s_k over the entire time horizon N. To estimate p̃_ρ_s_k and p̃_δ_s_k, we leverage occupancy schedules <cit.> to derive a novel probabilistic method that maps demand over a sector s_k to individual intersections j∈ℐ_s_k. To estimate p̃_β_s_k, we use the relative frequency of pickup and drop-off sectors in historical data conditioned on having pickup sector s_k. The combination of all these estimation procedures allows our method to account for demand surges and transfer that information to our routing framework. To solve Problem 2 in Sec.
<ref>, we build off of the idea of one-agent-at-a-time rollout <cit.> and Certainty Equivalence scenarios <cit.> to develop a novel rollout-based scalable routing framework that scales linearly with the number of agents and has a reduced sampling space that allows our system to decrease the number of simulations required to estimate the expectation in Eq. <ref>. The combination of these two components makes our system capable of handling large fleets of taxis over large maps without incurring prohibitively long computation times. We will cover the specifics of our scalable routing framework first, in order to motivate the need for the estimation of p̃_η_s_k, p̃_ρ_s_k, p̃_δ_s_k, and p̃_β_s_k. §.§ Rollout-based scalable routing framework [Figure: Example of one-agent-at-a-time rollout with two agents, where each agent only has two available actions.] The proposed rollout-based scalable routing framework is mostly composed of the model-based RL module shown in Fig. <ref>.4, which leverages one-agent-at-a-time rollout <cit.>, <cit.>, combined with a limited sampling formulation derived from the idea of scenarios in certainty equivalence <cit.> to obtain policy π̃. Under this framework, we replace the minimization (Eq. <ref>) given in the problem statement with a one-agent-at-a-time formulation. More specifically, agent ℓ's one-at-a-time rollout control at state x_t, given that agent ℓ is at an intersection j that is inside sector s_k, is then: ũ_t^ℓ∈ arg min_u^ℓ_t ∈𝐔^ℓ_k(x_t) E_𝒟̃_A[ g_t(x_t, u̅, 𝒟̃_A) + J̃_π̅, t^(A)(x_t+1)], where u̅=(ũ_t^1,…, ũ_t^ℓ-1, u^ℓ_t, μ̅_t^ℓ+1(x_t),…, μ̅_t^m(x_t)); 𝒟̃_A = {η̃_s_h, ρ̃_s_h, δ̃_s_h, β̃_s_h | ∀ s_h ∈𝒜(s_k) ⋃{ s_k}} corresponds to the local set of random variables of sectors adjacent to s_k, where each random variable has estimated probability distributions derived from the event-informed demand estimation procedure; and J̃_π̅, t^(A)(x_t+1) = Ĵ_T + ∑_t' = (t+1)^T g_t'(x_t', μ̅_t'(x_t'), 𝒟̃_A) corresponds to the cost approximation of executing the base policy for H steps, considering samples from the estimated demand distributions in 𝒟̃_A, and a terminal cost approximation Ĵ_T = |𝐫_𝐓| with T=(t+1)+H. A graphical representation of the proposed formulation is presented in Fig. <ref>. §.§ Event-informed demand estimation In this section, we explain how we estimate the underlying probability distributions p̃_η_s_k, p̃_ρ_s_k, p̃_δ_s_k, and p̃_β_s_k, which compose 𝒟̃_A and hence are needed to approximate the expectation in Eq. <ref>. §.§.§ Estimating the probability distribution for the number of requests p̃_η_s_k We estimate p̃_η_s_k by leveraging the event processing module, the demand prediction module (see Fig. <ref>.1-2), and a novel probabilistic approach that derives a semi-stochastic representation for p̃_η_s_k from the predicted demand. Event data and sentence embeddings We need a list of events that includes the date of the event, the title of the event, a small description of the event, as well as the event's venue. Such a list of events can be scraped from the internet using queries <cit.> or it can be obtained from an online database for events <cit.>. Events' attendance depends on multiple factors, including the public's attitude towards such an event. For this reason, we try to estimate a proxy of the public's attitude towards an event by considering reviews for event-venue pairs. We collect K reviews for each event-venue pair.
For each review's textual excerpt, we generate a sentence embedding using a pre-trained MLM, specifically RoBERTa large v1 <cit.>. Sentence embeddings tend to capture semantic information and inter-word relations <cit.>, and hence are useful to capture the semantic content of reviews in a dense vector representation. We denote the k-th sentence embedding for a specific event as e⃗_k ∈ℝ^1024, and we denote the set of all K embeddings as ℰ. Spectral clustering For each event, we consider its associated K review embeddings. We want to obtain semantically relevant representations that aggregate the reviews so we can use such summarized representations as inputs for an NN that predicts demand. To do this, we consider spectral clustering on the latent space of the reviews. We consider a Gaussian radial basis function as the similarity score s(e⃗_k, e⃗_h) = exp(-γ· ||e⃗_k - e⃗_h||_2^2) with γ=1. To execute spectral clustering, we construct a similarity graph G_C = (V_C, E_C), where each vertex v_k corresponds to the review embedding e⃗_k. We consider a fully connected graph, where all vertices are connected to all other vertices, and the weight of each edge (v_k, v_h) is given by the similarity score s(e⃗_k, e⃗_h). Using this complete graph, we then apply spectral clustering for b=3 clusters using the algorithm proposed in <cit.>. We choose 3 clusters to balance the need to better capture the semantic diversity of the reviews and the need to maintain a small number of clusters to produce input features that are still usable as inputs for a simple NN. We denote the cluster label assigned to embedding e⃗_k as b̂(e⃗_k) ∈{ 1, …, 3}. We denote the set of embeddings that have cluster label a for a ∈{ 1, …, 3} as ℬ_a = {e⃗_k | b̂(e⃗_k)=a, e⃗_k ∈ℰ}. Cluster averaging Once all reviews for an event are clustered, we average the embeddings for each assigned cluster to obtain a semantically relevant averaged embedding ê_a = ∑_e⃗_k∈ℬ_ae⃗_k/|ℬ_a| for each cluster label a ∈{1, …, 3}. We then stack the resulting 3 embeddings to obtain a dense representation R_z = [ê_1^⊤, ê_2^⊤, ê_3^⊤]^⊤ for an arbitrary event z. We denote the set of events in sector s_k as z_s_k. We denote the average dense representation for events in sector s_k as R̂_s_k = ∑_z ∈ z_s_k R_z/|z_s_k|. For each event title and description, we generate an additional sentence embedding using the same pre-trained MLM. This embedding provides additional context to the NN to differentiate between events that share the same venue and hence might have some of the same reviews. For an arbitrary event z, we denote this embedding as q⃗_z ∈ℝ^1024. We denote the average title embedding for events in sector s_k as q̂_s_k = ∑_z ∈ z_s_kq⃗_z/|z_s_k|. We define the final unified sector feature for an arbitrary sector s_k as F_s_k = [q̂_s_k^⊤, R̂_s_k^⊤]^⊤∈ℝ^4096. Demand prediction module The demand prediction module is composed of a temporal (day of the week, month, and hour) and spatial (weather) data collection pipeline followed by a two-NN prediction mechanism (see Fig. <ref>.2). If there are no events happening in sector s_k, the system considers an NN that takes as input only temporal and spatial data in the form of a vector w⃗. If there is an event happening in sector s_k, the system enhances w⃗ with the unified sector feature F_s_k from the event processing module to create input feature F_s_k^+ = [w⃗^⊤, F_s_k^⊤]^⊤. For simplicity, we denote the predicted demand, irrespective of the NN that produced it, as ŷ_s_k.
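A compact sketch of this event processing pipeline is given below; the sentence-transformers model name approximates the RoBERTa-large sentence encoder used here and, together with the toy reviews, is an assumption for illustration, while the RBF similarity with γ=1, the b=3 spectral clusters, the cluster averaging, and the 4096-dimensional feature mirror the description above.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import SpectralClustering

# Embed the K review excerpts of one event (model name is an assumption).
encoder = SentenceTransformer("all-roberta-large-v1")
reviews = [
    "Great venue, easy to get a cab afterwards.",
    "The concert was packed, long lines outside.",
    "Sound quality was poor but the crowd was huge.",
    "Parking nearby is impossible on event nights.",
    "Loved the show, would come back next year.",
    "Staff were friendly and entry was quick.",
]
E = encoder.encode(reviews)                      # shape (K, 1024)

# Spectral clustering on a fully connected RBF similarity graph (gamma = 1),
# mirroring s(e_k, e_h) = exp(-||e_k - e_h||^2).
b = 3
labels = SpectralClustering(n_clusters=b, affinity="rbf", gamma=1.0,
                            assign_labels="kmeans").fit_predict(E)

# Cluster averaging: one mean embedding per cluster, stacked into R_z.
R_z = np.concatenate([E[labels == a].mean(axis=0) for a in range(b)])

# Title/description embedding q_z, then the event feature [q_z, R_z].
q_z = encoder.encode("Indie rock concert at a mid-town venue")
event_feature = np.concatenate([q_z, R_z])       # length 4 * 1024 = 4096
print(event_feature.shape)
```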
Deriving minute demand distribution from hourly predicted demand Following the concept of Certainty Equivalence presented in <cit.>, we derive a semi-stochastic representation for p̃_η_s_k by evenly distributing the predicted hourly number of requests for sector s_k over the time horizon N=60 to obtain the number of requests that will enter the system at each minute. In this sense, we obtain a single descriptor η̃_s_k^(f) for all realizations η̃_s_k(t), 1≤ t ≤ N. More formally, We define η̃_s_k^(f) = ŷ_s_k/N. If η̃_s_k^(f) > 1, we round this value to obtain a deterministic quantity η̅_s_k that will be used as the number of requests that enter sector s_k at each minute t. If η̃_s_k^(f) < 1, then we define a Bernoulli random variable η̃_s_k^b ∼ Bern(η̃_s_k^(f)), and its realization at time t as η̃_s_k^b(t), such that η̃_s_k^b(t)=1 with probability η̃_s_k^(f). We set η̅_s_k = η̃_s_k^b. For each minute t in time horizon N=60, we instantiate η_s_k^b(t) to determine if a request appeared in sector s_k at minute t. In contrast to vanilla certainty equivalence <cit.>, our approach does not fully remove the stochasticity for the random variable η̃_s_k to avoid underestimating demand. §.§.§ Estimating probability distributions for the pickup and drop-off intersections, p̃_ρ_s_k and p̃_δ_s_k, and the conditional matching p̃_β_s_k Here, we tackle the problem of going from sector level demand to intersection-level demand that can be used as input to the routing optimization in Eq.<ref>. To achieve this, the probability distributions for pickups and drop-offs over intersections, p̃_ρ_s_k and p̃_δ_s_k, are estimated by leveraging the demand assignment module (see Fig. <ref>.3), which given a set of intersections ℐ_s_k for sector s_k, returns a probability distribution over ℐ_s_k based on the number of locales of interest (i.e. restaurants, bars, cafes, etc) near that intersection, their respective occupancy schedule <cit.>, and an estimate of their maximum occupancy. [13]r0.4 < g r a p h i c s > Mapping probability distributions over sectors to intersections for pickup and drop-off. The potential number of riders at each intersection is estimated using occupancy schedules and the size of locale A graphical representation of this process is shown in Fig. <ref>. More formally, let's consider sector s_k and an intersection j such that j ∈ℐ_s_k. We denote n_λ(j), as the number of locales for a specific locale type λ∈Λ near intersection j, where Λ is the set of all locale types. We denote o_λ and p_λ, as the estimated maximum occupancy and the occupancy schedule for locale type λ, respectively. For a given hour h_t, we denote the percentage occupancy of locale type λ as p_λ(h_t). We denote the estimate of the number of potential customers at node j given an hour h_t as O(j, h_t) = ∑_λ∈Λ n_λ(j) · (p_λ(h_t) · o_λ). To obtain an estimate of p̃_ρ_s_k and p̃_δ_s_k at a given hour h_t, we normalize O(j, h_t) for each node j by dividing it by ∑_i ∈ℐ_s_k O(i, h_t). To estimate p̃_β_s_k, the probability distribution of having a request being dropped off at any sector given that it was picked up at sector s_k, we consider historical demand data. We estimate the conditional probability distribution by looking at the relative frequency of pickup-drop-off sector pairs in the data given a particular pickup sector. § EXPERIMENTAL EVALUATION To evaluate our approach, we consider ride-share trip data from 2022 obtained from NYC's HV-FHV datasets <cit.>. 
We consider a section of the map with 38 sectors (2235 nodes and 4566 edges) across mid and lower Manhattan (see Fig. <ref>). We collect data for more than 2,000 different events during 2022 hosted on 300 unique venues across all 38 sectors. We consider that each time step in the system corresponds to 1 minute, and the time horizon N=60 represents an hour. §.§ Implementation setup To implement the system, we need to deal with two main areas. (1) Review data and spectral clustering: We obtain a list of events for a specific date using the PredictHQ API <cit.>. We consider seven event categories: conferences, expos, concerts, festivals, performing arts, community gatherings, and sports events. For each of these events, we scrape Google Maps using SerpAPI <cit.> to obtain reviews related to an event and its venue. We set K=100 to collect a maximum of 100 reviews. (2) Demand prediction, demand assignment and model-based RL routing modules: All the weather related data is obtained using Open-Meteo Historical Weather Data API<cit.>, which uses ERA5<cit.> and ERA5-Land weather data<cit.>. We train both NNs for 100 epochs using the first 8 months of data for 2022. The remaining 4 months were held out as a test set. For both NNs, we select mean squared error (MSE) as the loss function, we use Adam with a learning rate of 1× 10^-4 and a weight decay of 1× 10^-6 as an optimizer, we choose a batch size of 64, and we shuffle the batches after every epoch. The NN that predicts demand using w⃗ as input has 2 fully-connected hidden layers, each with 256 neurons. The NN that predicts demand using F_s_k^+ has 2 fully-connected hidden layers with 4096 neurons each. Architectural parameters for both NNs were selected using a hyper-parameter search, and a simple feed forward architecture was chosen as it was the fastest architecture to train, while still obtaining comparable results to other more complex architectures. All training was done on a single NVIDIA RTX A6000, and it took around 2 hours for each NN. All evaluation results are calculated using the predictions for the 4 months of data in the test set. For the demand assignment module, we retrieve all locales of interest for each intersection j ∈ℐ_V using Google Maps' Nearby Search API <cit.>. We consider four locale types: retail spaces, restaurants, hotels, and hospitals. We retrieve occupancy schedules for each locale type from COMNET <cit.>. For the rollout-based routing module, we set H = 10. We use 100 Monte-Carlo simulations to estimate the expectation for the one-agent-at-a-time online minimization, and choose π̅ to be the fastest computing heuristic, the greedy policy. §.§ Baselines Our main results compare our approach against the following three baselines: (1) Greedy policy: available taxis are routed to their nearest outstanding request as given by Dijkstra's shortest path algorithm without coordination among them. (2) Instantaneous assignment: uses a variation of broadcast of local eligibility (BLE) <cit.> for iterative task assignment, performing a deterministic instantaneous matching of available taxis to request currently in the system. (3) <cit.>: in its scalable implementation, this methods uses a one-agent-a-time rollout with instantaneous assignment as the base policy. It estimates the current demand using the demand of the previous hour of operation of the system. 
This method uses a vanilla application of certainty equivalence, utilizing fully deterministic substitutes for η̃_s_h, ρ̃_s_h, δ̃_s_h, but preserving the stochasticity for β̃_s_h and the order in which requests arrive by independently sampling from a pre-computed pool of pickups and drop-offs. §.§ Main results In this section, we present a comparative study of our proposed approach compared to the three other baselines described in Sec. <ref>. We evaluate all policies using the held out test dataset described in Sec. <ref>. We use the real requests in the ride service data <cit.> as the ground truth requests entering the system. First, we are interested in comparing run-times for all methods. We calculate the average execution time for a single time step, by considering the execution time for the entire time horizon and then dividing it by the length of the time horizon. Results for the run-time comparison are shown in Table <ref>. As evidenced in the table, our method is able to learn policies in approximately a third of the time compared to <cit.>. Since our method is able to reach a routing decision for the next minute using less than a minute of computation, our approach could be used for online planning in real-time. [13]r0.6 < g r a p h i c s > Average number of outstanding requests per minute for the randomly chosen date Second, we are interested in the performance of our approach in routing scenarios. We are interested in comparing how many outstanding requests there are in the system per minute for all four routing algorithms. For this reason, we define the average number of outstanding requests per minute for an arbitrary trajectory as ∑_t=1^60|𝐫_𝐭|/60. To present aggregate results and account for the stochasticity of the Monte-Carlo simulations we present results averaged across 25 trajectories with random initial states for a randomly chosen date (in this case, November 17, 2022). For more results on additional dates refer to the Appendix. Table <ref> shows that our approach obtains the lowest average number of outstanding requests per time step compared to the other methods. Our approach results in policies that have 2 to 10 fewer outstanding requests per minute than all the other methods, which means that our system is able to service more requests per minute than all other methods. While <cit.> only outperforms the greedy policy and instantaneous assignment for 3pm and 5pm, our method consistently outperforms all policies throughout the presented time-window. In this case, <cit.> performs worse than methods that do not consider future potential requests for 7pm and 9pm, since it uses the last hour of operation of the system as a proxy for the current demand for the ride-service and hence fails to capture the large surges in demand caused by events. This to an online optimization that samples from a demand model that is not actually representative of the real, current demand. Our method, on the other hand, uses event information to better inform routing, which combined with our improved routing framework allows our system to maintain a competitive advantage over all the other methods. A graphical representation of these results is shown in Fig. 
<ref> §.§ Ablation studies To better understand the effect of prediction errors, we isolate the demand prediction system and compare it against a NN that only uses temporal and spatial data as input (we call this standard NN), and using the previous hour of operation of the system as a proxy for the current demand as proposed in <cit.>. Results for the prediction errors of all three methods are shown in Fig. <ref>. The figure presents prediction errors for a sector s_k for each hour of the day averaged over all days in November 2022, where we compute average percent error PE_s_k = 1/|X|∑_X|y_s_k - ŷ_s_k|/y_s_k, where X is the number of data points in the test set for sector s_k at the hour of interest, ŷ_s_k is the predicted number of requests entering sector s_k, and y_s_k is the actual number of requests entering sector s_k. The selected sector has two main surges caused by theater shows (usually starting at 5pm), concerts, and sports events (usually starting at 7-9pm). As we see, our prediction scheme outperforms all other methods, obtaining 3%-10% improvement on average percent error. [16]r0.4 < g r a p h i c s > PE_h over sectors Second, to better understand the advantage of using event information, we compare percent error for a NN that only considers temporal and spatial data as input and our method. For these results, we look at averaged percent error for each sector, meaning that for each sector s_h we compute PE_h = 1/|X_h|∑_X_h|y_s_h - ŷ_s_h|/y_s_h, where X_h is the number of data points in the test set for sector s_h over all hours, ŷ_s_k is the predicted number of requests entering sector s_k, and y_s_k is the actual number of requests entering sector s_k. Results are shown in Fig. <ref>. As we can see, our method results in an 5-10% improvement on prediction accuracy over the standard NN for sectors where events are concentrated. §.§ Limitations and future work Since we consider a fixed fleet size, larger demand surges still lead to larger number of outstanding requests compared to hours where there are less events. As future work, we want to consider a system where there are taxis stored at warehouses throughout the map, and we can dynamically change the fleet size to address this rise in demand. Another important limitation of our approach is that it relies on sampling to estimate the expected cost of each action, and hence there is still a small, but non-zero, probability that the samples obtained might not be representative of the real demand, leading to a degrade in performance. As shown in the simulations, this probability is very small and performance degradation happens very infrequently. § CONCLUSION In this paper we presented an event-informed multi-agent RL routing framework that leverages event data from the internet to predict demand surges and route taxis accordingly. Our method learns policies that serve 6 more requests on average per minute than other methods.
http://arxiv.org/abs/2307.01071v1
20230703145355
Two-Dimensional Strain Mapping with Scanning Precession Electron Diffraction: An Investigation into Data Analysis Routines
[ "Phillip Crout", "Dipanwita Chatterjee", "Ingeborg Nævra Prestholdt", "Tor Inge Thorsen", "P. A. Midgley", "Antonius T. J. van Helvoort" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
§ INTRODUCTION Strain is an important feature of materials in a wide range of applications as it can have pronounced effects on the electrical, optical and mechanical properties of a sample. Once understood, information about the strain responses of a materials can be exploited to engineer desirable properties in a process known as `strain engineering'. This approach is widely used to improve the performance of micro-electronic devices <cit.>, tune the opto-electronic properties of semiconductors <cit.> and modify the strength and hardness in metal alloy-precipitate systems <cit.>. For these methods it is essential to have accurate measurements of the strain fields present in a sample. X-Ray diffraction <cit.> and micro-Raman spectroscopy <cit.> are common mapping choices as they can provide highly accurate quantitative strain measurement. However the spatial resolution of these techniques is limited and in many sample the strain fields will vary at length scales on the order of nanometers. In this regime methods based on transmission electron microscopy (TEM) stand out. Several TEM-based methods for strain analysis are available, each with their own strengths and weakness - for a detailed review see <cit.>. Here we will focus on approaches that collect complete diffraction patterns at many points distributed across a sample[Naming conventions differ and this technique has been described as SEND, NBED and SED in the literature but it is our belief that the similarities between these methods far exceed their differences.]. The advantage of these approaches include a wide field of view, simple set up and transparent analysis pipelines. Unfortunately, these advantages comes at the cost of lowered precision compared to other approaches. One way to improve precision is to include precession alongside scanning, to form scanning precession electron diffraction (SPED). In SPED an electron beam is rastered across a sample while also undergoing a conical rocking motion (precession) to improve the sampling of reciprocal space and reduce the effects of dynamical diffraction in the final pattern <cit.>. By `derocking' the beam below the sample a diffraction pattern that is qualitatively similar to one that would be produced by kinematical diffraction is observed <cit.>. For other application the exact nature/interpretation of the intensities under `quasi-kinematical' conditions may be of interest, but for strain mapping applications it's the increased number of Bragg peaks as well as the improved uniformity of diffraction disks that proves most beneficial <cit.>. As scanning electron diffraction experiments create rich 4D datasets a large number of analytical approaches are possible with strain mapping being one of the most regularly applied. Recently the 4D STEM approaches have benefited from vastly improved detector quality with the advent and adoption of direct electron detectors allowing for the design of routines that require subpixel spot location <cit.>. This development has happened in parallel with an explosion of academic open source software designed to handle and process such datasets <cit.>. These projects allow for the transparent ingestion and analysis of TEM data and have brought new ideas to characterization workflows. 
While many researchers work with a converged beam (an approach sometimes called scanning CBED) our interest here is in the `quasi parallel beam' setting in which disks are well separated (see <cit.> for discussion of `quasi parallel beams') as this offers a simplified experimental protocol and analysis regime, while maintaining the flexibility to conduct separate investigations on the same data (eg. orientation mapping). Although different approaches have been demonstrated for analysing S(P)ED maps comparative work is limited and primarily qualitative in nature <cit.>. A quantitative comparison requires analysis is done on the same strain field with multiple approaches. This manuscript thus sets out to investigate the importance of the inputs provided when producing a strain map. By performing the entire mapping pipeline on a GaAs nanowire with a strained insert we seek to elucidate which elements of the process do (and don`t) have a significant effect on the final results. We describe the conditions under which three peak position finding algorithms (here: cross correlation, center of mass and curve fitting) perform well and provide the code used to generate those finding <cit.>. § MATERIALS AND METHODS §.§ Sample Preparation and Microscopy In this work we use nanowires composed of Zinc Blende (ZB) and Wurtzite (WZ) GaAs with GaAs/GaAsSb axial heterostructure. The nanowires were grown by self-catalyzation on a Si (111) substrate with the vapor-liquid-solid technique using molecular beam epitaxy. The growth and structure of these nanowires are detailed elsewhere <cit.>. The sample was scratched under isopropanol with a sharp diamond scraper and the droplet with detached nanowires wetted a TEM grid with an electron transparent C-film. We display the sample such that the top/left of the nanowire is the Wurtzite phase (not studied) followed by the insert and then the Zinc Blende GaAs. A contrast image demonstrating this is presented in Figure <ref>. The nanowire under investigation is about 4 μm long with a width of approximately 150 nm. The GaAsSb insert is of the length ∼ 130 nm. The growth direction was [111]/[0001] for the ZB/WZ respectively. A JEOL JEM-2100F operated at an acceleration voltage of 200 kV for both the conventional TEM characterization of the specimen (which we used for verification and do not show here) and the subsequent SPED experiments. For the SPED work, the probe was used in nano beam diffraction (NBD) mode with a condenser aperture of 10 mm size and a spot size of 1 nm. Beam scanning was produced by a NanoMEGAS scanner and the precession alignment was conducted using the DigiSTAR software provided by the NanoMEGAS. A precession angle of 0.58^∘, an exposure time of 10 ms and a nominal step size of 1 nm was used in all the scans. The diffraction data was acquired using a Quantum Detectors Merlin direct electron detector with a sensor size of 256x256 pixels. Although multiple scans were collected and investigated, all results are presented from the scan described as `2' in Table 1 as this scan provided the best results. Strain measurements and other S(P)ED data handling were done using Python code with the open-source library Pyxem <cit.>. The full processing pipeline can be found in the notebooks available at reference <cit.>. §.§ Image Processing A suitable image processing pipeline was found via a trial-and-error approach. To begin with we deal with full patterns, blocking the direct beam before cropping the relevant spots out. 
Each spot is then processed by passing it through a minimum filter, and retaining the largest connected region. This helps deal with the direct beam bloom in the corners of the spot images. No further image processing is applied. §.§ Center Finding Algorithms Under idealized parallel illumination conditions, a Bragg peak is infinitely sharp on a detector and provides perfect information about an associated inter-planar spacing. However, in a scanning set up, the electron beam must be converged, causing peaks to spread into disks. Each point on these disks is then subject to the noise properties of the recording system. Finding the center of these resulting disks is a crucial step in calculating strain and as such several routines have been designed and described for so called `disk finding algorithms' <cit.>. In this work we compare three such methods. The `center of mass' approach is described first and was chosen for its intuitiveness. Following this we move onto cross correlation methods, which are commonly used and extensively described in the S(P)ED strain mapping literature <cit.>. As such they provide an excellent baseline, as well as offering the interested reader a sensible starting point for considering variations with other parameters (detector setup, optical setup, post processing routine). Finally, we present a curve fitting method for finding spot centers <cit.>. §.§.§ Center of mass The centre of mass (CoM) of an object can intuitively be understood as the point from which mass `appears to act'. By analogy we can treat the intensity of a diffracting signal as its `mass' and apply a similar calculation approach. In general the center of mass/intensity for an object made of point particle is given by: R_CoM = ∑_ir_i m_i/∑_i m_i where R_CoM is the location of CoM, and r_i and m_i are the location and weight of the constituent elements (point particles/pixels). To implement this method with an individual diffraction pattern we proceed as follows: Firstly a square region containing the peak is selected. The intensity at each pixel within this square is then known and equation <ref> can be applied. Using this method the position of the CoM for the same disc is calculated for all the diffraction patterns in the SPED dataset. §.§.§ Cross correlation To perform cross correlation a small reference image (usually a simulated diffraction disk of known center) is `cross correlated' with a spot from a sample. This cross correlation involves computing the value of F(x,y) = g_1(x,y) ∗ g_2 where g_1 and g_2 are reference and experimental images and ∗ is the convolution operator for potential displacements (x,y) and selecting the maximum value of F(x,y). This is most easily done via a Fourier transform although we leave the internals to established Python libraries designed for this purpose <cit.>. To obtain subpixel precision both the reference and the sample spot can be up-sampled. §.§.§ Curve Fitting Spots from `quasi parallel' diffraction patterns show fairly consistent intensity drop off from spot center to edge. This has motivated other workers to fit models that incorporate this information. One common approach is to fit Gaussian's directly to the spot centers. The model fitting is simply least squares where the parameters are the solution to: min∑_i | r_i - M(i) |^2 in which i are the independent observations characterised by some known variable and M is a model with unknown parameters. In our work we will model the intensity as following I(r) = I_0 e^|r-μ|/σ^2. 
for I(r) an intensity as a function of position, I_0 a fitting constant, μ an optimised center and σ a fit width. In implementation it turns out to be far more computationally efficient to independently fit two directions and then combine the results. This is because 2D approaches require the recalculation of the values of the displacement at each step. Fitting two independent models (in which the distances can be found from a lookup table) is far quicker. Bearing this in mind, our final algorithm is: * Cut out a square in which the spot is located (as in previous methods). * Locate the highest intensity pixel in said square. * Fit a 1D Gaussian to the physically relevant region of the row containing the highest intensity pixel. * Fit a 1D Gaussian to the column that contains the center of our previous fit. * Return the results as an (c_x,c_y) pair. with implementation code available at <cit.> and an example in Figure <ref>. An alternative functional fit, that of a top-hat function was also tried as these may better capture the intensity in our disks. The process is exactly as above except the function for I(r) becomes: I(r) = I_0× T(|r-μ|/w). where T(x) is 1 for x < 1 and 0 otherwise, w is a fit width parameters and then other notation is as in (<ref>). §.§ Strain Calculations Strain (ϵ) is a dimensionless tensor that measures the fractional displacement of atoms in a crystal lattice. In this, as with most TEM work, we will ignore the direction parallel to the beam, leaving us with a two dimensional problem. However even with this simplifications a number of options are available to us for converting our high precision locations of Bragg spots into a strain tensor. This work adopts an algorithm that has been a staple workflow for mapping strain using SPED data for some time <cit.>. To begin with an unstrained region of the sample is found, and appropriate Bragg spots selected (these spots must be present in all patterns under consideration). The centers of these spots are then refined to subpixel precision at all relevant probe positions. The outputs are then related by a series of equations: L^i = T L^i_0 T=RU=[ cosθ sinθ; -sinθ cosθ ][ U_11 U_12; U_12 U_22 ] where L^i are measured spot positions in the sample, L^i_0 are measured spot positions in the undeformed region, T is the mapping between the two (which varies from pixel to pixel), θ is the right handed lattice rotation about an axis pointing into the diffraction pattern and U is the deformation tensor. From this pair of equations we can directly extract θ. The strain fields (ϵ) are then found by evaluating ϵ^* = [ ϵ^*_11 ϵ^*_12; ϵ^*_12 ϵ^*_22 ] = U - I where ^* indicates a value in reciprocal space. The relationship between strains in reciprocal space and real space is then given by: ϵ = 1/(1+ϵ^*) -1 ≈ - ϵ^* = I - U where we have included the small strain approximation (ϵ^*≪ 1) for comparison with other sources <cit.>. §.§ Strain extraction from peak positions There are now a number of ways to take our peak positions and convert them into strain values even given the constraints of the equations in section <ref>. This is because while T is a 2×2 matrix L^i need not be meaning that equation <ref> is over-determined. This `excess' of data allows for some flexibility of approach (and generally precludes the existence of a unique solution). We note that Mahr et al. 
were able to exploit this situation to produce a simultaneous fitting approach with success <cit.>, but for clarity we choose here to separated our peak finding and strain mapping stages. One simple approach to the problem is to sum the measured Bragg vectors into two orthogonal proxy vectors (potentially resolving vectors into components before summing). We would expect the error of such approach to be given by the standard formula for the summation of independent measurements with errors: e_T = √(∑_i e_i^2) for e_T the absolute error in the (fictional) spot position and e_i the real errors in our measurements. However, because strain is a fractional relation, the error in the strain instead is: Δϵ = √(∑_i e_i^2)/L where L is the unstrained length of the spots, we are treating the measurements in the unstrained region as errorless and the strains as small. If each spot measurement has the same error distribution this reduces to: Δϵ = √(N)e_i/L. As L is not fixed but in fact a function of the spots chosen we can rewrite this as Δϵ = √(N(c))e_i/L(c) where (c) denote a value is a function of the spots chosen. For most use cases (including this study) equation <ref> should suffice for estimating the error in a strain map, however if spots have markedly different error characteristics (eg. high angle spots have poorer signal-to-noise) one may wish to resort to the more involved approach of equation <ref>. § RESULTS AND DISCUSSION After a suitable image processing routine was found and applied all three methods under consideration produced maps that were in qualitative agreement with one another (Figure <ref>). Functional fitting (using either a Gaussian or a top-hat) had significantly worse noise properties but still provided a clear indication of lattice expansion at the insert. Within the insert we see strains on the order of 2%. The apparent compression above the insert is fictitious and due to our choice to align the microscope to the insert and be ignored as the insert causes a kink in the nanowire and the area rotates off zone. The other distinctive feature of these images are the blob shaped features which are caused by carbon contamination. During this investigation we found that spot quality made a significant difference to output strain maps. In Figure <ref> we presented ϵ_yy maps, as these spots are consistent throughout the scan region. In Figure <ref> we illustrate why this was necessary as the ϵ_xx component offers a significantly less clear story. The ϵ_xx map in Figure <ref> required several re-selections of spots to generate, which are in Figure <ref>. So while this post-collection approach did eventually succeed it would have been significantly simpler to work with our sample had our alignments allowed us to capture four orthogonal spots within the entire scan area. In what follows we focus on the more reliable ϵ_yy direction (due to the quality of spots in this direction) to facilitate our discussion image processing choices. As described in section <ref> our approach contained four main steps (block the direct beam, cut the spots out, apply a minimum filter and then retain connected regions). While the first two steps did prove essential, they also show very limited sensitivity to the parameters selected and we will not consider them further. Keeping only connected pixels also helped to reduce erroneous results but as it contains no free parameters we suggest researcher simply toggle it on/off as they see fit. 
This leaves the intensity filter value as the key user defined parameter influencing the final strain map quality. In Figure <ref> we provide derived maps for three values of the filtering ratio and see that a fairly small change (± 15%) induces significant degradation in the quality of the final image, which in this case appears primarily as salt and pepper noise. Beyond the map quality shown in in Figure <ref> there are several other considerations when selecting a center finding method. The center of mass method tended to slightly underestimate strain, especially when the direct beam was included. It also produced unphysically smooth maps with noisy data which risks lulling the inattentive researcher into a false sense of security. By inspection of the lower part of Figure <ref> we also see that the noise properties in unstrained regions is worse than that offered by cross correlations. These deficiencies aside, the center of mass is by far the easiest to implement as well as being the cheapest (in computational time) option, allowing for rapid iterations (eg. when tuning image processing parameters). In contrast, both curve fitting approaches produced deeply unpleasing maps, with significant noise. Furthermore, convergence of the underlying fitting function isn't guaranteed meaning further work is needed design and implement approaches to capturing errors. These `non-convergence timeouts' also cause the code to run far slower than the center of mass approach despite being only slightly more involved. This is disappointing as in theory a curve fitting approach most accurately captures the physics of a detector. The well-established cross correlation method provided excellent results but took the most effort to coax correct results. In Figure <ref> we demonstrate how a small change in the image processing routine can have a dramatic impact on the quality of result obtained. While the actual algorithm takes very few parameters, all but the most aggressively cleaned spots would return complete nonsense. Due to the more involved nature of the algorithm it also proves difficult to assess a spot for defects that might have been the cause of errors. This is in stark contrast to the other methods, for which even the most cursory inspection would tend to indicate what was at fault. Work in this field often focuses on algorithmic or technical developments at the expense of comprehensively detailing the collection and processing regimes used. Our experience with this dataset suggest that these image processing routines - as well as the initial microscope configuration and alignment - are in fact far more important. To improve the quality of our output maps we made use of several processes that are generally not well reported in the literature on strain mapping. Our choice to work only with cropped spots led to a significant decrease in required computing resources allowing for more extensive manual checks to be conducted at almost no risk of reduced accuracy. Furthermore wherever possible we used pairs of spots located at ±g to form our measurements vectors. This prevents artefacts associated with direct beam shift <cit.>. We also sought to use as many spots as possible to improve the SNR in the final map (see section <ref>). § CONCLUSIONS We have provided a detailed introduction to strain mapping with scanning precession electron diffraction by applying several methods to map strain in an axial heterostructured GaAs/GaASb/GaAs nanowire. 
This has allowed for a comparison of three popular spot finding approaches. Although it is our opinion that other components of the investigation (specifically the trio of data collection routine, image processing approach and spot selection) are greater determinant of outcome than the specific spot finding method, we can still provide some conclusions about each peak finding approach. Curve fitting approaches (with either Gaussian or top-hat fits) produced comparatively poor maps. This result was surprising, as such approaches might be expected to most accurately capture the physics involved in a SPED experiment. While the center of mass method did tend to slightly underestimate strain it was by far the easiest method to implement as well producing results extremely quickly. In contrast the well-established cross correlation method provided excellent results but was slow and fiddly to work with. All three methods required some fine user interventions and we have provided complete implementations for users to tune as they see fit <cit.>. We believe that our key finding is that a greater emphasis on the data collection component of an investigation will be productive for the vast majority of researchers. We would suggest using large precession angles with samples put as close to zone-axis as possible. Alongside this having two pairs of orthogonal spots produces significantly better results. One potential difficulty is sample tilt, but if this is not too severe through the region of interest we believe any correct implementation of the spot finding methods detailed in this paper will produce good results. Conceptualization, I.N.P, D.C, T.I.T and A.T.J.H; methodology, all listed authors; software and formal analysis P.C; data curation, D.C, I.N.P and T.I.T; writing—original draft preparation, P.C and D.C; writing—review and editing, P.C and A.T.J.H; visualization, P.C and I.N.P.; supervision, project administration and funding acquisition: A.T.J.H and P.A.M. All authors have read and agreed to the published version of the manuscript. This project formed a component of P.C's doctoral research which was funded by the EPSRC and NPL. All authors acknowledge the Research Council of Norway for their support through the Norwegian Center for Transmission Electron Microscopy: NORTEM (Grant no. 197405). The raw data associated with this manuscript can be found at <cit.>. The Python scripts used to create many of the results are freely available at <cit.>. no -0cm References
http://arxiv.org/abs/2307.01399v1
20230703233408
Minimax rates of convergence for nonparametric location-scale models
[ "Bingxin Zhao", "Yuhong Yang" ]
math.ST
[ "math.ST", "stat.TH" ]
@nat@width>@nat@width kframe @end@of@kframe @end@of@kframe ##1totalleftmargin -##1- --totalleftmargin -totalleftmargin@ setminipage @end@of@kframe theoremTheorem 1.5 condition conditionCondition definition definitionDefinition remark remarkRemark corollary corollaryCorollary lemma lemmaLemma proposition propositionProposition assumption assumptionAssumption comment commentComment acknowledgments acknowledgmentsAcknowledgments upquote.sty a]Bingxin Zhao a]Yuhong Yang [a]School of Statistics, University of Minnesota, 313 Ford Hall, 224 Church St SE, Minneapolis, USA Minimax rates of convergence for nonparametric location-scale models [ ==================================================================== This paper studies minimax rates of convergence for nonparametric location-scale models, which include mean, quantile and expectile regression settings. Under Hellinger differentiability on the error distribution and other mild conditions, we show that the minimax rate of convergence for estimating the regression function under the squared L_2 loss is determined by the metric entropy of the nonparametric function class. Different error distributions, including asymmetric Laplace distribution, asymmetric connected double truncated gamma distribution, connected normal-Laplace distribution, Cauchy distribution and asymmetric normal distribution are studied as examples. Applications on low order interaction models and multiple index models are also given. Keywords: quantile regression, expectile regression, minimax, metric entropy. § INTRODUCTION Nonparametric regression is a widely used tool to characterize relationship between a response variable and the predictors. In many economic and related applications, the random errors often exhibit heavy tail behaviors and regression methods under normal or other light-tail error distributions may lead to overly optimistic conclusions via the poorly estimated regression relationship. For instance, quantile and expectile regressions are frequently applied for estimation of Value-at-risk (VaR) and expected shortfall (ES), which are standard risk measures for financial institutions (see, e.g., <cit.>, <cit.>, <cit.>, and <cit.>). By definition, VaR and ES attempt to quantify extreme losses (in somewhat different ways). As such, they can be very sensitive to the tail behaviors of the error distribution. Obviously, overly optimistic VaR and ES estimates do not serve the intended purpose of preventing financial disasters. In this background, it is important to study nonparametric regression under heavy-tailed error distributions. Our goal in this work is to provide a theoretical understanding on intrinsically how well one can estimate a target regression function in a general nonparametric location-scale framework. Below, we briefly review quantile and expectile regressions, and discuss the lack of a general understanding in the existing literature of nonparametric regression from perspectives of minimax rates of convergence especially when the regression error has a heavy-tail distribution (e.g., Cauchy), which motivates our work. §.§ Quantile regression Quantile regression (QR), initiated by <cit.>, is naturally associated with an asymmetric absolute loss function ρ_τ(y)=y(τ-I(y<0)) (also known as check loss), where 0<τ<1 is the quantile level. With the quantile level ranging from 0 to 1, QR allows one to make inference on the entire conditional distribution, which goes much beyond the conditional mean/median regression. 
Linear QR has been researched extensively. <cit.>, <cit.> and <cit.> considered the asymptotic behaviors of linear estimators. Penalized linear QR has also been investigated by <cit.> and <cit.>. To alleviate restrictiveness of linear modeling, semiparamtric and nonparametric models have been studied in the literature. <cit.> and <cit.> studied the partially linear additive model and investigated the asymptotic properties of the proposed estimators. <cit.> considered nonconvex penalized estimators with the oracle property under high dimensional partially linear additive QR. Moreover, high dimensional additive QR has been examined by <cit.>, <cit.> and <cit.>. Specifically, <cit.> proposed a nonparametric additive estimator with the oracle property. <cit.> proposed a two-fold LASSO-type regularized learning approach in sparse additive QR in a reproducing kernel Hilbert space (RKHS) framework. <cit.> analyzed the group LASSO estimator under a high dimensional additive setting. <cit.> studied the single-index QR, in which the unknown link function is estimated by local linear QR. <cit.> advocated model combination for QR and proposed the adaptive quantile regression by mixing (AQRM) to average parametric and/or nonparametric QR methods so as to achieve their best performance. §.§ Expectile regression Expectile regression (ER), initiated by <cit.> and further developed by <cit.>, also offers an approach to understanding the full range of conditional distribution of the response given the covariates. ER is based on minimizing asymmetrically weighted square residuals, using the asymmetric square loss l_ϕ(y)=y^2|ϕ-I(y<0)| at the expectile level ϕ∈(0,1). There are strengths and weaknesses of ER compared with QR as discussed in the literature. Since the asymmetric square loss (ASL) is differentiable everywhere while the check loss is nondifferentiable at zero, ER with the ASL can be computationally more advantageous than QR based on the check loss (<cit.>). On the other hand, the ASL is sensitive to outliers, while the check loss is more robust. Also, expectiles lack an intuitive interpretation. Linear ER has been well studied. <cit.> and <cit.> pioneered the investigation of asymmetric least squares (ALS). <cit.> and <cit.> studied penalized linear ER. Without the restriction of linearity of the regression function, <cit.>, <cit.>, <cit.>, <cit.> and <cit.> investigated semiparametric or nonparametric ER. <cit.> adopted the regression tree based gradient boosting estimator by <cit.>. <cit.> discussed the nonparametric multiple regression in the RKHS framework and proposed an estimator based on majorization-minimization (MM) algorithm. <cit.> extended the single index model (<cit.>) to ER and proposed an algorithm to estimate the nonlinear function and covariates jointly. <cit.> used the exponential weighting scheme to aggregate the different ER estimators and derived oracle inequalities for the aggregated estimator. <cit.> developed the expectile regression neural network (ERNN), which is estimated through standard gradient based optimization algorithms. §.§ Existing minimax results on QR, ER and related problems The aforementioned asymptotic results on quantile and expectile regressions in the literature are mostly on risk upper bounds or pointwise asymptotic performance for specific QR and ER methods. Indeed, very little has been done on minimax QR and ER. 
<cit.> solved an "ill-posed" inverse problem of nonparametric instrumental variables estimator of QR, and derived an upper bound of optimal convergence rate in probability, which depends on the severity of the "ill-posed" inverse problem and the complexity of nonparametric estimator. <cit.> considered functional QR, where the dependent variable is a scalar and the covariate is a function. He considered an estimator for the slope function by the principal component analysis, and derived an upper bound of minimax rate of convergence under some smoothness assumptions on the covariance kernel of the covariate and the slope function. <cit.> considered an "ill-posed" inverse problem under both nonparametric indirect regression (NPIR) and nonparametric instrumental variables regression (NPIV). They established an informative minimax lower bound under some approximation number and the link conditions. <cit.> derived an upper bound for the convergence rate of a penalized sieve minimum distance (PSMD) estimator under the nonparametric additive quantile instrumental variable regression model, and showed the resulting convergence rate coincides with the known optimal rate. <cit.> proposed a two-fold LASSO-type regularized learning approach for sparse additive QR in a high dimensional RKHS framework. They derived an upper bound of convergence rate of the proposed estimator, which attains the minimax lower bound (up to a logarithmic factor in the ambient dimension) for sparse additive mean regression in <cit.>. <cit.> explored the use of kernel-based regularized empirical risk minimization methods for ER. They considered the Gaussian kernels and ALS loss function, and derived an upper bound of convergence rate. Also, the established rate is minimax optimal up to a polynomial order of logarithmic factor of the sample size n. In summary, while various interesting results have been obtained, there are no minimax results that hold for general quantile or expectile function classes. In fact, to our knowledge, no formal risk lower bounds have been established for ER and QR under a general non-normal error distribution. In particular, it is unknown if a severely heavy tailed distribution such as Cauchy would lead to a slower minimax rate of convergence for the estimation of a quantile function. The objective of this paper is to offer such a theory. §.§ Contribution and organization of this paper This work studies the minimax rate of convergence for nonparametric regression in a location-scale framework in terms of the metric entropy of the class of regression functions of interest. Metric entropy is known to be the key quality that determines the minimax rate of convergence in mean regression, see, e.g, <cit.> for key results and references. The error distribution of QR and ER models typically does not follow normality as considered in, e.g, <cit.> and <cit.>, and may have much heavier and asymmetric tails. The main contribution of our paper is that we derive the minimax rates of convergence precisely under the squared L_2 distance for a general class of regression functions, under mild conditions. In particular, our results are applicable for QR and ER with asymmetric Laplace distribution, asymmetric connected double truncated gamma distribution, connected normal-Laplace distribution, Cauchy distribution (for QR) and asymmetric normal distribution. 
Since metric entropy orders are known (or can be derived) for most of familiar function classes, our results readily enable the determination of minimax rates of convergence for QR and ER in the nonparametric location-scale framework. It is helpful to note that in previous work on convergence of QR and ER methods, minimax lower rates for mean regression under normal errors are typically used to justify optimality of the proposed methods (sometimes with gaps between upper and lower rates in logarithmic factors in the sample size). This does not seem to be a generally rigorous argument, especially for heavy-tail error distributions such as Cauchy that does not even have the first moment. Our work provides optimal minimax lower rates that are valid for general error distributions under minor conditions. Interestingly, it turns out that heavy-tailedness of the regression error does not damage the minimax rate of convergence for regression estimation. For example, for estimating a quantile function in a Lipschitz class on [0,1]^d with smoothness parameter α >0, the minimax rate of convergence under squared L_2 loss is n^-2α/(2α +d), whether the error is Gaussian, double-exponential or Cauchy. Therefore, while the long tail of Cauchy is certainly likely to produce a significant number of outliers, they do not affect the rate of convergence if the regression function is estimated properly (see <Ref> for more discussions). The rest of this article is organized as follows. In <ref>, we set up the problem with some useful definitions. In <ref>, we present the general results on minimax rates for location-scale models including nonparametric QR and ER. <Ref> examines the examples of the asymmetric Laplace error distribution, the asymmetric connected double truncated gamma error distribution, the Cauchy distribution and the connected normal-Laplace error distribution. <Ref> considers the asymmetric normal error distribution for ER. In <ref>, we provide applications on low order interaction models and multiple index models. <Ref> discusses several important issues and considers a feasible estimator with (near) minimax optimality. <Ref> concludes our paper. The proofs of the main results are given in the <Ref> and Supplementary Materials. § PROBLEM SETUP In this paper, in a nonparametric location-scale regression framework, we unify mean, quantile, expectile regressions and their generalizations. In the following content, when comparing nonnegative sequences, we will use the symbols ≽ and ≍, where k_n≽ s_n is the same as s_n=O(k_n) and k_n≍ s_n means k_n≽ s_n and k_n≼ s_n. §.§ Location-scale models We assume the regression model: Y_i=μ(X_i)+σ(X_i)ϵ_i, i=1,2,⋯,n, where Y_i is a real valued response random variable, μ(x) is the regression function, σ (x) is a scale function that, together with ϵ, determine the heterogeneous random noise. Here ϵ_i has a distribution with known density f with respect to the Lebesgue measure, which may have a heavy tail (e.g., without a finite mean), and the covariate X_i and ϵ_i are independent. The observations (X_i,Y_i)_i=1^n are i.i.d. from a distribution with a generic copy (X, Y), and the explanatory variable X∈X has distribution P with density h belonging to a density class ℋ. The regression function μ is assumed to be in a nonparametric function class 𝒰, where 𝒰 satisfies μ∈𝒰sup‖μ‖_∞<∞. The scale function σ is assumed to be in some function class Σ with lower and upper bounds: there exists a fixed σ>1 and σ=σ^-1 such that σ≤σ(x)≤σ for all x∈X. 
For example, Σ = {exp{ζ (x; ν)}: ν∈Ψ}, where ζ (x; ν) is of a parametric form (e.g., linear in the covariates) and Ψ is a compact parameter space. We call (<ref>) a location-scale regression model. It specializes to different types of regression depending on the property of f: e.g., if f has mean 0, then (<ref>) is a usual mean regression model; if f has τ-th quantile equal 0, then (<ref>) corresponds to a QR model at level τ; if f has τ-th expectile equal 0, then (<ref>) is an ER model. More details are given below. §.§ Mean regression If the error ϵ has mean 0, the location-scale model (<ref>) corresponds to a mean regression model with heteroscedastic error. Here the regression function is the conditional mean of Y given X=x: μ(x)=𝔼(Y|X=x). Note that μ(x) is well defined if 𝔼(|Y||X=x) exists almost surely. However, if an estimator is obtained by least squares, a finite second moment of Y given X=x is typically required. §.§ Quantile regression If the error ϵ has a distribution with the τ-th quantile being 0, (<ref>) is just a QR model. The regression function μ (x)=μ_τ (x) is the τ-th conditional quantile of Y given X=x, which is defined as μ(x)=F_Y^-1(τ|x)=inf{ y:F_Y(y|X=x)≥τ}, where F_Y(y|x) denotes the cumulative distribution function (CDF) of Y given X=x. Note that μ(x) is defined based on the inverse of the CDF, without any requirement of moment conditions of Y|X=x. However, when using the check loss ρ_τ(y)=y(τ-I(y<0)) to obtain a QR estimator, the existence of E(|Y| |X=x) is required. §.§ Expectile regression If the error ϵ has a distribution with the ϕ-th expectile equal 0 (0<ϕ<1), (<ref>) corresponds to an ER model. The expectile regression function is the ϕ-th conditional expectile of Y given X=x: μ(x)={z:∫_-∞^z(z-y)dF_Y(y|X=x)=ϕ∫_-∞^∞| z-y| dF_Y(y|X=x)}. When ϕ=1/2, expectile regression reduces to the mean regression. Note that μ(x) is well defined if 𝔼(|Y||X=x)<∞. However, a finite second moment of Y given X=x is required in order to obtain the ER estimator using asymmetric least squares (ALS) based on the asymmetric square loss l_ϕ(y)=y^2|ϕ-I(y<0)|. §.§ Momentile and ψ-tile regressions Expectile regression can be generalized to momentile regression. Let k≥ 1 be an integer and assume the k-th moment of the conditional distribution of Y given X=x exists for x∈X. Then the k-th momentile regression function is the ϕ-th conditional momentile of Y given X=x as defined by μ(x)= {z:∫_-∞^z(z-y)^kdF_Y(y|X=x)=ϕ∫_-∞^∞| z-y|^k dF_Y(y|X=x)}. For k-momentile regression at level ϕ, the error ϵ in (<ref>) is assumed to have the ϕ-th k-momentile equal 0. Similar to the case of expectile, when existing, there is a unique choice of z that satisfies the integral equation above. The concept can be further generalized to facilitate e.g., robust regression under the Huber loss (to be discussed soon). Let ψ: [0, ∞ )→ [0, ∞ ) be a non-negative function satisfying ∫_-∞^∞ψ(|t-a|)f(t)dt<∞ for all a∈ℝ. Then the ψ-tile regression function is the ϕ-th conditional ψ-tile of Y given X=x as defined by μ_ϕ (x;ψ)=inf{z:∫_-∞^zψ(z-y)dF_Y(y|X=x)=ϕ∫_-∞^∞ψ(| z-y|) dF_Y(y|X=x)}. For ψ-tile regression at level ϕ, the error ϵ is assumed to have the ϕ-th ψ-tile equal 0. Note that when ψ=1, ψ-tile regression reduces to the quantile regression. Similar to the case of quantile, the set of z satisfying the above integral equality may not be unique and we choose their infimum for ψ-tile to be well-defined. 
For convenience, we will use ψ-tile regression as a general term to include quantile regression, mean regression, expectile regression, or regression under another sensible choice of ψ. A non-standard choice of ψ than those for the familiar quantile and expectile regressions can be useful. For instance, let ψ_H(x)=2min (x,1), which corresponds to the (differentiable) Huber loss function x^2I_{0≤ x≤ 1}+(2x-1)I_{x>1}. Then the resulting Huber-tile regression emphasizes robustness to outliers compared to the expectile regression while maintaining the computational advantage of expectile over quantile regression. If the distribution of ϵ is symmetric about 0, then the 1/2-th Huber-tile of Y given x is simply the same as the median (and mean, when existing) regression function. But Huber-tile regression is oriented towards robustness to outliers. The ψ-tile regression at level ϕ corresponds to the loss function L(y,ŷ)=J(y-ŷ), where J(z)=ϕ∫_0^z ψ(t)dt I_{z≥ 0} +(1-ϕ) ∫_0^|z|ψ(t)dt I_{z< 0}. It yields the check loss and the asymmetric square loss with ψ=1 and ψ(x)=x respectively. For Huber-tile ψ_H, we have J(z)=ϕ(z^2I_{0≤ z≤ 1}+(2z-1)I_{z>1}))+(1-ϕ)(z^2I_{-1≤ z< 0}-(2z+1)I_{z<-1}). Based on model (<ref>) with a general error distribution of density f, the conditional ψ-tile of Y given X=x, when existing, is of the form μ(x)+σ(x) t_ϕ, ψ(f), where t_ϕ, ψ(f) denotes ϕ-th ψ-tile of the error distribution with density f. Therefore, in the framework of location-scale model, the models for quantile, expectile, momentile, or ψ-tile, when existing, are closely related in terms of a simple shifting. For instance, if we assume model (<ref>) with τ-th quantile of the distribution of ϵ being zero, then the τ-th conditional quantile function of Y given X=x is just μ(x), but the ϕ-th conditional ψ-tile is μ(x)+σ(x) t_ϕ, ψ(f). §.§ Minimax risk and metric entropy Let μ-υ^2_2,h=∫ (μ(x)-υ(x))^2h(x)dx be the squared L_2(h) distance with respect to the measure induced by X. To assess estimation performance, we use the squared L_2 loss. (Minimax risk) The minimax risk of estimating μ∈𝒰 for the location-scale model under the squared L_2 loss is R_n=μ̂infμ∈𝒰,σ∈Σ, h∈ℋsup𝔼‖μ-μ̂‖_2,h^2, where μ̂ is over all possible estimators of μ based on the data. It is well-known that metric entropy plays a fundamental role in determining the minimax rate of convergence in statistical models (e.g, <cit.>, <cit.>, <cit.> and <cit.> for results and more historical references). Consider a totally bounded function class 𝒰 under the L_2(h) distance. (Packing ε-entropy) A finite set N_ε⊂𝒰 is said to be an ε-packing set in 𝒰 with separation ε>0, if for any functions μ,μ^'∈ N_ε, μμ^', we have μ-μ^'_2,h>ε. The logarithm of the maximum cardinality of ε-packing sets is called the packing ε-entropy of 𝒰 in the L_2(h) distance, and is denoted M(ε;𝒰). In this paper, we are interested in nonparametric regression under location-scale model. For a nonparametric class 𝒰, it usually satisfies the following richness property. (Richness) A function class 𝒰 has the richness property, if for some 0<α<1, it satisfies limε→ 0infM(αε)/M(ε)>1, where M(ε) is a presumed known order of M(ε;𝒰). Note that for nonparametric function classes, the metric entropy order is typically (1/ε)^c_1(log1/ε)^c_2 with c_1>0 and c_2∈ℝ. It clearly satisfied the richness condition. 
(Order regularity) A function class is (metric entropy) order regular if for every constant c>0, M(ε) and M(cε) are of the same order, i.e., M(ε) ≍ M(cε), as ε→ 0, where M(ε) is a presumed known order of M(ε;𝒰). Recall the definitions of Kullback-Leibler (K-L) divergence and Hellinger distance between two probability densities q_1(x) and q_2(x) with respect to a σ-finite measure ν: D(q_1||q_2)=∫ q_1(x)logq_1(x)/q_2(x)dν and d_H(q_1,q_2)= (∫(√(q_1(x))-√(q_2(x)))^2dν)^1/2, respectively. § MAIN RESULTS Recall f(y) is the probability density function of the error ϵ_i with respect to the Lebesgue measure in the location-scale model (<ref>). Clearly, conditions on f are needed to derive the minimax rate of convergence for estimating μ. Consider the location-scale family {1/σf(y-η/σ),η∈ℝ, σ∈ (0,+∞)}, and let f_η,σ denote the density. We assume D(f_0,1||f_η,σ)<∞ for η∈ℝ and σ∈ (0,∞) and it is continuous in (η,σ). The K-L divergence between f_0,1 and f_η,σ is locally upper bounded in order by η^2+(1-σ)^2 for (η,σ) near (0,1). That is, D(f_0,1||f_η,σ)= O(η^2+(1-σ)^2), for (η,σ) near (0,1). <Ref> is rather mild and it is satisfied by Gaussian, double-exponential, Student's t and many other distributions, see <cit.>, <cit.> and <cit.>. Note that if D(f_0,1||f_η,σ)<∞ is twice differentiable at (η,σ)=(0,1), then <Ref> is readily seen to be satisfied. Hellinger differentiability plays an important role in our derivation of sensible minimax upper bounds. Let ℱ={f_θ:θ∈Θ} be a subset of nonnegative functions that are integrable, indexed by θ in a subset Θ of ℝ^2. In our case, it is said to be Hellinger differentiable (see <cit.>, <cit.> and <cit.>) at a point θ_0 of Θ if the map θ→ξ_θ(y):=√(f_θ(y)) is differentiable in L_2 norm at θ_0. That is, if there exists a vector ξ̇_θ_0(y) of square integrable functions, called the Hellinger derivative at θ_0, such that ξ_θ(y)=ξ_θ_0(y)+(θ-θ_0)^'ξ̇_θ_0(y)+r_θ(y), where r_θ=√(∫ r_θ^2(y)dy)=o(|θ-θ_0 |) near θ_0 and |θ-θ_0 | is the Euclidean distance between θ and θ_0. In particular, we say the location-scale family (<ref>) is Hellinger differentiable at (η,σ)=(0,1) if the map (η,σ)→ξ_η,σ(y):=√(f_η,σ(y)) satisfies (<ref>) at (η,σ)=(0,1). The location-scale family (<ref>) is Hellinger differentiable at (η,σ)=(0,1) and the components of the Hellinger derivative are not linearly dependent. Besides conditions on the error distribution, we also need some constraint on the density function class ℋ. There exists a density h_0∈ℋ and a constant C<∞ such that for all h∈ℋ, we have sup_x∈𝒳h(x)/h_0(x)≤ C. Let M_h_0(ε;𝒰) and M_h_0(ε;Σ) denote the metric entropy of 𝒰 and Σ under the L_2(h_0) distance, respectively. M_h_0(ε;𝒰) is rich and regular. In addition, M_h_0(ε;Σ)=O(M_h_0(ε;𝒰)) as ε→ 0. Condition <ref> is sensible. In real applications, often parametric families of Σ are considered. For instance, for the family Σ = {exp{ζ (x; ν)}: ν∈Ψ}, where ζ (x; ν) is of a parametric form and Ψ is a compact parameter space, the metric entropy of Σ is of order log (1/ε), which is dominated by the metric entropy order of a nonparametric class 𝒰. See <Ref> for more discussion on the requirement M_h_0(ε;Σ)=O(M_h_0(ε;𝒰)). For the location-scale model, under Conditions 1-4, the minimax rate of convergence of estimating the regression function μ∈𝒰 is μ̂inf μ∈𝒰,σ∈Σ, h∈ℋsup𝔼μ-μ̂_2,h^2 ≍ε_n^2, where ε_n is determined by M_h_0(ε_n;𝒰)=nε_n^2. 
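Before turning to a discussion of the theorem, the following small sketch (illustrative only; the entropy order ε^-d/α anticipates the smoothness classes treated later, and the values of α, d and n are arbitrary) shows how the equation M_h_0(ε_n;𝒰)=nε_n^2 pins down the rate: solving it numerically and comparing ε_n^2 with the closed-form rate n^-2α/(2α+d).

```python
import numpy as np
from scipy.optimize import brentq

def solve_entropy_equation(n, M, lo=1e-10, hi=1.0):
    """Solve M(eps) = n * eps**2 for eps; by the theorem the minimax risk is of order eps**2."""
    return brentq(lambda e: M(e) - n * e**2, lo, hi)

alpha, d, n = 2.0, 3.0, 10_000
M = lambda e: e ** (-d / alpha)          # entropy order of an alpha-smooth class on [0,1]^d
eps_n = solve_entropy_equation(n, M)
print(eps_n**2, n ** (-2 * alpha / (2 * alpha + d)))   # both are of order n^{-2*alpha/(2*alpha+d)}
```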
In the derivation of the minimax rate of convergence, as seen in the proof of the theorem, matching (in order) minimax upper and lower bounds are obtained, both in terms of the metric entropy of the regression class. In this paper, we focus on the nonparametric location-scale model. Many results on parametric quantile/expectile regression have been obtained, as mentioned in the introduction, with the familiar parametric rate of convergence. To show such a rate is minimax optimal, a local entropy argument is needed in the derivation of the minimax lower bounds via Fano's inequality (see <cit.> and <cit.>). In the above theorem, we assume the density f of ϵ is known. If f is unknown but known to lie in a finite set of densities that satisfy Conditions 1-2, then the minimax rate of convergence stays the same. It is interesting to observe that in the literature, upper bound results are often obtained with moment-type conditions on the error distribution (e.g., <cit.>). While the density f is not required to be known, such results work with specific smoothness function classes to facilitate the construction of an optimal-rate estimator. In contrast, our approach captures the essential role of the metric entropy of the regression function class at the expense of restricting f to be known. See <Ref> for the proof of <Ref>. Note that for the commonly encountered smoothness function classes (e.g., Sobolev and Besov classes), the metric entropy orders are well known (see, e.g., <cit.>; <cit.> for examples). Therefore the above theorem immediately characterizes the optimal rate of convergence for estimating a quantile/expectile (or ψ-tile) regression function in such a class. It is also important to point out that the conditions required for the theorem do not force the existence of the second or even the first moment of the error distribution, as is often required for risk upper bounds on regression estimation. To be fair, previous results in the literature may have the advantage of allowing general distributions of possibly dependent errors (e.g., <cit.>). It seems that in the literature, minimax lower bounds for regression are typically derived under normality. Therefore the lower bounding part of Theorem <ref> provides a much needed understanding of the fastest possible rate of convergence, which is the right target when examining specific regression methods for their optimality in estimating the regression function when normality is inappropriate. For example, consider the Lipschitz class Lip(α; C; [0,1]^d) that consists of all functions on [0,1]^d whose partial derivatives g of order ⌊α⌋ all satisfy sup_x_1,x_2∈ [0,1]^d |g(x_1)-g(x_2)|≤ C||x_1-x_2||^α-⌊α⌋. Then the packing entropy of the class under the L_2 distance is known to be of order ϵ ^-d/α as ϵ→ 0. Besides the normal distribution, as will be seen in the next sections, asymmetric Laplace distributions and Cauchy distributions all satisfy the conditions for the theorem. Therefore, solving the simple equation ϵ ^-d/α=nϵ^2, which gives the solution ϵ_n=n^-α/(2α+d), we know by Theorem 1 that the minimax rate of convergence under the squared L_2 loss is n^-2α/(2α+d) for the aforementioned error distributions, regardless of the degree of heavy-tailedness.

§ ERROR DISTRIBUTION EXAMPLES OF QUANTILE REGRESSION

§.§ Asymmetric Laplace error distribution

The asymmetric Laplace distribution (ALD) has been widely used in economics, finance and related applications (<cit.>). More recently, <cit.> and <cit.> investigated Value-at-Risk (VaR) and expected shortfall (ES) estimation based on the ALD.
It has the following probability density function: f(y|η,σ,τ)=τ(1-τ)/σexp[-ρ_τ(y-η/σ)],-∞<y<∞, where 0<τ<1 is the skewness parameter, σ>0 is the scale parameter, -∞<η<∞ is the location parameter, and ρ_τ is the check loss. Also, the τ-th quantile of the above ALD equals 0 if the location parameter η=0. §.§ Asymmetric connected double truncated gamma error distribution Although ALD has been widely used in asymmetric distribution scenarios, the peaky behavior at the origin is not desirable in many applications. To eliminate this behavior, <cit.> constructed an asymmetric connected double truncated gamma (ACDTG) distribution (see <cit.> for related properties of asymmetric densities). For t>0 and k∈ℝ, define the upper incomplete gamma function by Γ(t,k)=∫_k^+∞x^t-1e^-xdx. The ACDTG distribution has the following probability density function: f(y|η,σ,α,τ)=τ(1-τ)/σΓ(α+1,α)[α+ρ_τ(y-η/σ)]^αexp{-[α+ρ_τ(y-η/σ)]},-∞<y<∞, where α≥ 0 is the shape parameter, 0<τ<1 is the skewness parameter, σ is the scale parameter, and -∞<η<∞ is the location parameter. Also, the τ-th quantile of ACDTG is 0 if the location parameter η=0. §.§ Connected normal-Laplace error distribution The previous asymmetric distributions have the same form in the density expression but different slope parameters on the two sides of η. It is conceivable that the asymmetry can go beyond. Here, we consider an example where the shapes of the density are different on the left and right sides. Specifically, the left side follows a normal distribution and the right side follows a Laplace distribution. We name it connected normal-Laplace (CNL) error distribution. The probability density function f(y|η,σ,τ,p) is 2τ/√(2π)σexp[-(y-η)^2/2σ^2]I(y≤η)+1-τ/βσexp[-1/βσ(y-η)]I(y>η),-∞<y<∞, where σ>0 is the global scale parameter, β>0 is the additional scale parameter for the right side of the distribution, -∞<η<∞ is the location parameter and 0<τ<1 is the proportion parameter. Also, we have the constraint 1-τ/β=2τ/√(2π), which guarantees that <Ref> is satisfied. This distribution has τ-th quantile being 0 if the location parameter η=0. §.§ Cauchy distribution The above distributions have sub-exponential behaviors. However, heavier tails could be more proper in some applications. Thus, we consider an example with heavier tails than the previous distributions, which is the Cauchy distribution. It has the probability density function: f(y|η, σ) = 1/πσ[1+(y-η/σ)^2], -∞ < y < ∞, where -∞ < η <∞ is the location parameter and σ>0 is the scale parameter. Since the mean of Cauchy distribution does not exist, the estimation of the quantile function cannot be realized through the use of the check loss. However, the quantile regression function can still be estimated optimally by other methods. §.§ Minimax rates under the above error distributions Under Conditions 3-4, if the error of quantile regression follows one of the distributions presented in the previous subsections, the minimax rate of convergence of estimating the quantile regression function μ∈𝒰 under the squared L_2 distance is μ̂inf μ∈𝒰,σ∈Σ,h∈ℋsup𝔼μ-μ̂_2,h^2 ≍ε_n^2, where ε_n is determined by M_h_0(ε_n;𝒰)=nε_n^2. § AN ERROR DISTRIBUTION EXAMPLE OF EXPECTILE REGRESSION §.§ Asymmetric normal distribution Let us study a well-known distribution called asymmetric normal distribution, see <cit.> and <cit.>. Some existing results on this distribution and the application on economics data can be found in <cit.>. 
It has probability density function f(y|η,σ,ϕ): 2/√(π)σ√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ)exp[-ϕ/σ^2(y-η)^2I(y≥η)-1-ϕ/σ^2(y-η)^2I(y<η)], -∞<y<∞, where 0<ϕ<1 is the skewness parameter, σ>0 is the scale parameter, and -∞ < η<∞ is the location parameter. Given the location parameter η=0, the ϕ-th expectile equals 0. §.§ Minimax rate under the asymmetric normal distribution Under Conditions 3-4, if the error of expectile regression follows the asymmetric normal distribution, the minimax rate of convergence of estimating the expectile regression function μ∈𝒰 under the squared L_2 distance is μ̂inf μ∈𝒰,σ∈Σ,h∈ℋsup𝔼μ-μ̂_2,h^2 ≍ε_n^2, where ε_n is determined by M_h_0(ε_n;𝒰)=nε_n^2. The expectiles (when existing) and quantiles are related to each other. In the location-scale model, <cit.> gave a mapping between quantiles and expectiles. Hence, under the same distribution of ϵ, the τ-th quantile is equivalent to the ϕ-th expectile at a corresponding level ϕ that depends on τ and the distribution of ϵ. Therefore, except the Cauchy distribution, the error distributions in Section 4 can be used under ER and the error distribution in Section 5 can also be used under QR. § MINIMAX RATES OF CONVERGENCE FOR SOME DIMENSION REDUCTION MODELS In this section, we apply the main theorem to derive minimax rates for some dimension reduction models. In the following two subsections, we consider x∈ [0,1]^d and let ℋ be the class of densities h on [0,1]^d that satisfied 1/C≤ h(x)≤C for a constant C>1. Suppose Σ={exp(∑_i=1^dc_ix_i):1≤ i≤ dmax| c_i|≤ C} for some known C>0. Let the error distribution be any of the examples considered in the previous sections. §.§ Additive and low order interaction models We first consider the regression functions in Sobolev classes with different interaction orders and smoothness. For r≥ 1, let z=(z_1,⋯,z_r)∈[0,1]^r and k=(k_1,⋯,k_r) with integer k_i≥ 0, i=1,⋯,r. Also, let D^k denote the differentiation operator D^k=∂^‖k‖_1/∂ z_1^k_1⋯∂ z_r^k_r. For integer α, define the r-dimensional Sobolev norm ‖ g‖_𝒢_2^α,r=‖ g‖_2+( ∑_‖k‖_1=α∫_[0,1]^r| D^kg|^2dz)^1/2. Let 𝒢_2^α,r(C) denote the set of all functions g on [0,1]^r with ‖ g‖_𝒢_2^α,r≤ C for some known C>0. Then, we consider the following function classes on [0,1]^d: T_1(α;C)={∑_i=1^dg_i(x_i):g_i∈𝒢_2^α,1(C), 1≤ i≤ d}, T_2(α;C)={∑_1≤ i<j≤ dg_i,j(x_i,x_j):g_i,j∈𝒢_2^α,2(C), 1≤ i<j≤ d}, ⋮ T_d(α;C)=𝒢_2^α,d(C), where α≥ 1 and C>0. Note that T_1(α;C) contains additive functions and T_r(α;C),1<r≤ d have higher order interactions. The L_2 metric entropies of these classes are of the same orders as 𝒢_2^α,1(C),𝒢_2^α,2(C),⋯,𝒢_2^α,d(C), respectively, which are ϵ^-1/α,ϵ^-2/α,⋯,ϵ^-d/α, respectively (see, e.g, <cit.>). By <Ref>, we have the following result. For the nonparametric regression function over class T_r(α;C), the minimax rate of convergence under the squared L_2 loss for estimating the regression function is n^-2α/2α+r for all α≥ 1 and 1≤ r≤ d. From the result above, we see that the minimax rate of convergence is much slower when the interaction order is high. Also, for the additive model, the rate of convergence is just decided by the marginal smoothness, which can significantly increase the estimation accuracy compared with the models that contain high-order interactions. Instead of Sobolev classes considered above, similar results can be readily stated for Hölder classes, in which the smoothness parameter α is no longer restricted to be a positive integer, but can be any positive number. 
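To make the effect of the interaction order concrete, a small sketch (with arbitrary illustrative values of n, α and d) tabulates the rate n^-2α/(2α+r) from the result above for the additive case r=1, a second-order interaction model r=2, and the fully d-variate case r=d.

```python
def rate(n, alpha, r):
    """Minimax rate n^{-2*alpha/(2*alpha + r)} for an r-th order interaction model."""
    return n ** (-2.0 * alpha / (2.0 * alpha + r))

n, alpha, d = 100_000, 2.0, 10
for r in (1, 2, d):
    print(f"r = {r:2d}:  rate = n^(-{2 * alpha / (2 * alpha + r):.3f}) = {rate(n, alpha, r):.2e}")
# r = 1 (additive) versus r = d (full interaction) makes the curse of dimensionality explicit
```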
§.§ Multiple Index Models Differently from the additive or low interaction order modeling for the regression function, we may consider multiple index models instead in the location-scale model setting. For recent research on multiple index models, see e.g, <cit.> and <cit.>. We here focus on estimation of the whole regression function instead of the linear space parameters or the one dimensional functions. For p≥ 1, the regression function is of the form λ_1(β_(1)^T x)+λ_2(β_(2)^T x)+⋯ +λ_p(β_(p)^T x), where x∈ [0,1]^d and λ_i,1≤ i≤ p are non-constant one dimensional functions (the details will be given later). Moreover, β_(i)∈ℝ^d,1≤ i≤ p are distinct d-dimensional vectors satisfying ‖β_(i)‖_2=1,1≤ i≤ p. When p=1, it corresponds to the single index model. Since we are interested in the convergence rate of minimax estimation of the whole regression function, we are not concerned with the identifiability issues and do not specify the identifiability conditions for the model we study. Assume λ_i,1≤ i≤ p, are non-constant and belong to a Hölder class Λ^s,γ(L) including all of the functions satisfying γ-Hölder continuous condition on the s-th derivative of λ: |λ^(s)(z_1)-λ^(s)(z_2)|≤ L| z_1-z_2|^γ, where z_1,z_2∈ℝ, L is a constant, s is a nonnegative integer and 0<γ≤ 1, and assume for each k, 0≤ k≤ s, the k-th derivative of λ is uniformly upper bounded. Let α=s+γ. We know that the metric entropy of Λ^s,γ(L) is M_d(ε;Λ)≍ε^-1/α under the L_2 distance as shown in <cit.>. If the regression function is assumed to belong to the multiple index model over the Hölder classes Λ^s,γ(L), the minimax rate of convergence under the squared L_2 loss for estimating the regression function is n^-2α/2α+1 for all α> 0. See <Ref> for the proof of <Ref>. Through this example, we see that for the multiple index model with fixed p, the minimax rate of convergence is just decided by the overall smoothness parameters α. This can significantly improve the convergence speed compared with models with d-variate smooth function λ^*(x_1,⋯,x_d) with overall smoothness α, in which case the convergence rate is only n^-2α/2α+d as seen in the previous subsection. This reflects the well-known curse of dimensionality. § DISCUSSIONS In this section, we discuss on relation of our work to the literature, our assumption on size of the scale function class relative to that of the target regression function class, and feasible estimation to achieve the minimax rate of convergence. §.§ Minimax lower rates under non-normal error distributions As pointed out in the introduction, in the literature, for regression problems, estimators have been developed for the estimation of the ψ-tile (mean, quantile or expectile) and shown to archive a certain rate of convergence. For claiming the estimators are minimax-rate optimal, typically the argument that the established upper rate matches the known minimax lower rate under the Gaussian assumption on the regression error is used. Such a claim is certainly valid in the case of the error distribution assumed in a class that includes the normal distribution. It is helpful to emphasize here that the worst-case performance in the minimax evaluation includes that under the normal error. However, this type of results have two limitations. First, it leaves the problem unresolved if the lower bound can actually be improved in rate under a class of error distributions that do not contain the normal distribution. 
For instance, consider a light-tailed error distribution f(y)=c_1exp(-c_2y^4), -∞ <y< ∞, where c_1 and c_2 are positive constants. Clearly, its tail is much lighter than that of the normal distribution. The lower bounding results in the literature, to our knowledge, do not answer the question of whether the minimax rate of convergence for expectile or quantile regression under this error distribution is faster than under the normal distribution. <Ref> in this paper is applicable and shows that the rate of convergence is identical to that in the normal regression case. Second, it is not known whether a severely heavy-tailed error distribution would damage the minimax rate of convergence. In particular, to the best of our knowledge, it is unknown whether the minimax rate of mean/expectile regression (or quantile regression) under an asymmetric Laplace distribution (or a Cauchy distribution for quantile regression) stays the same as under the normal error. Our results in this paper fill the aforementioned gaps in the literature and offer a general understanding of optimal ψ-tile regression in the location-scale framework. Another interesting point is the following. While our results show that a heavy tail of the error distribution does not affect the minimax rate of convergence for quantile estimation, this should not be interpreted as meaning that the error distribution does not really matter for optimal-rate convergence. It actually does! For instance, suppose the analyst knows that the error distribution is symmetric about 0 and wants to estimate the median (and the mean, if it exists). If the error is Cauchy but the analyst mistakenly treats it as normal and consequently applies a normal mean regression tool to estimate the median function, then the estimators (e.g., histogram/kernel regression) typically may not converge at all, due to the simple fact that the sample mean of Cauchy random variables has the same Cauchy distribution and thus local averaging of the responses does not even produce a consistent estimator. In contrast, an estimator that properly targets the median instead of the non-existent mean (such as our estimator in the derivation of the minimax upper bound, albeit difficult to implement) achieves the minimax rate of convergence. Therefore, in the presence of heavy-tailed random errors, unlike in the light-tailed situation, it is critically important to choose the right regression method to attain the optimal rate of convergence.

§.§ Massiveness of the scale functions relative to that of the regression functions

For our main results, <Ref> is needed, which requires that M_h_0(ε;Σ)=O(M_h_0(ε;𝒰)) as ε→ 0. This means that the class of scale functions is no larger than the class of regression functions in terms of the metric entropy order. In our view, the condition is sensible for ψ-tile regression for the following two reasons. First, for mean, expectile or quantile regression, when the errors are suspected to be heteroskedastic, a common approach is to model the variance function parametrically for feasibility and interpretability, and this approach has been applied successfully in various applications (see, e.g., <cit.>). It is also natural because, in ψ-tile regression, the regression function is of primary interest, while the error standard deviation (or scale) function is a nuisance for which simple modeling is preferred. With a parametric family for the scale functions, M_h_0(ε;Σ)=O(M_h_0(ε;𝒰)) certainly holds.
Second, when one is interested in ψ-tile regression at multiple τ levels, the condition M_h_0(ε;Σ)=O(M_h_0(ε;𝒰)) automatically holds. Given model (<ref>) with an error distribution with density f, the conditional quantile, expectile, momentile and ψ-tile at arbitrarily level τ∈ (0, 1) are all of the form of μ̃(x)=μ(x)+cσ(x) for some constant c that depends on τ, f and ψ. Unless c=0, which can happen for at most a single τ value, the metric entropy of the associated class of the regression function μ̃(x) is of order max(M_h_0(ε; U), M_h_0(ε; Σ)) as ε→ 0. Therefore, as long as one is interested in at least two τ levels, the regression function classes are automatically as large as the class of the scale functions. It should be pointed out that the case M_h_0(ε;Σ) is much larger than O(M_h_0(ε;𝒰)) can still be of interest. As a reviewer pointed out, parametric regression with independent errors with standard deviation σ (x) uniformly bounded enjoys the usual parametric rate of convergence for estimation of the regression function. In this case, clearly the class of scale functions is really large (in fact with infinite metric entropy for a small enough packing radius). It is intriguing that the tremendous complexity of the class of scale functions actually does not damage the rate of convergence for the estimation of the regression function. On the other hand, as soon as one intends to estimate another ψ-tile, even if it is of the same nature but at a slightly different τ level, the rate of convergence is completely different. For example, under model (<ref>) with the strong assumption of normal error and Eε =0, suppose U is a parametric function class and Σ consists of all functions σ≤σ (x)≤σ for some positive constants σ < σ. Then for mean regression, the parametric rate 1/n is achieved by standard methods such as least squares. However, for expectile or quantile regression with τ≠ 1/2, the regression function is u(x)+cσ (x) for some constant c≠ 0. Clearly, the resulting regression function class is as complex as Σ in terms of metric entropy order and consequently we know there cannot exist any convergent estimator of the regression function in the uniform sense no matter how τ is close to 1/2. In this example, the estimand of the parametric mean function is an “outlier” in the grand scheme of expectile or quantile (or more generally ψ-tile) regression that intends to offer a broader understanding than a single sliced view at a fixed and unusually lucky τ. The above said, the study of minimax estimation at an “outlier” τ is certainly of interest on its own, which we do not pursue in this work. §.§ Estimators with minimax optimality In this paper, our focus is on developing a general theoretical understanding on unifying minimax mean, expectile, and quantile regressions in the location-scale nonparametric framework. As such, we highlight the most essential aspects in their generality on the determination of the minimax rate of convergence of regression estimation. In particular, as far as the nature of the regression function class is concerned, it turns out that its metric entropy order alone is enough and other characteristics of the function class are irrelevant as far as the minimax rate of convergence is concerned. For establishing the minimax rate of convergence, we need to develop estimators of the regression function whose worst-case risks match the minimax lower bounds in order. 
To this end, given the generality of our results, which do not require knowledge of the regression function class beyond the metric entropy order, the construction of the estimators is of a theoretical nature that is in tune with the abstract metric entropy characterization of the class 𝒰. Therefore, not surprisingly, these theory-oriented ϵ-net based estimators are not easy to implement in practice. In a specific application, one typically works with a concrete function class (or classes) with known helpful characterizations of the regression functions (such as smoothness, monotonicity, additivity, etc.). Then, while our theory provides a precise understanding of the minimax rate of convergence, the knowledge of the specific function classes can be taken into account naturally in deriving practical estimators that achieve the minimax rate of convergence. An example is given below to illustrate this point. Consider expectile regression under the asymmetric Laplace error distribution, with the expectile regression function assumed to be bounded and to belong to the Besov class B_2,∞^α on [-1,1]^d, whose metric entropy order is ϵ^-d/α, where α >0 is the smoothness parameter (see, e.g., <cit.>). Based on <Ref> and <Ref> in this paper, we know the minimax rate of convergence for estimating the regression function is n^-2α/(2α+d) (and the same rate also holds under the other error distributions studied in <Ref>). As pointed out earlier, the ϵ-net based estimator used in deriving the minimax result is not practical in its implementation. <cit.> proposed a support vector machine (SVM) method for estimation. Given a regularization parameter λ>0, an expectile level τ∈ (0,1) and a reproducing kernel Hilbert space H on [-1,1]^d with a bounded, measurable kernel k: [-1,1]^d× [-1,1]^d→ℝ, the SVM method builds an estimate u_λ by solving an optimization problem of the form u_λ=u∈ Hmin ( λ‖ u‖_H^2 + 1/n∑_i=1^nL (y_i,u(x_i))), where L is the asymmetric square (ALS) loss for ER. They obtained an algorithm for solving (<ref>) using Gaussian radial basis kernels and established the learning rate n^-2α/(2α+d)log n for the above estimator under any error distribution with exponentially decaying tails. Therefore, their practically feasible estimator achieves the minimax rate of convergence up to an extra logarithmic factor. It remains to be seen how to come up with a computationally feasible algorithm that achieves the minimax rate of convergence exactly, especially when α is unknown.

§ CONCLUSION

Location-scale models cover the familiar mean, quantile and expectile regression problems that are widely used in econometric and statistical applications. While a general understanding of the minimax rate for normal mean regression is available in the literature, prior to our work little was known about the minimax optimal rates for estimating a conditional quantile or expectile function in a general function class under non-Gaussian errors. In this paper, in the location-scale model framework, we have derived the minimax rates of convergence for regression learning under the squared L_2 loss for nonparametric function classes. Not surprisingly, metric entropy continues to play a central role in the determination of the optimal rate of convergence. The results are readily applicable to various function classes that are commonly considered in data science, even when the error distribution does not have a finite mean. A limitation of our work is that it deals with i.i.d. observations. <cit.> showed that in some settings similar regression results hold under weakly dependent random errors.
Under Gaussianity, effects of short- and long-range dependences of the errors are well understood for minimax nonparametric regression with homoscedastic errors (see, e.g., <cit.>, <cit.>). It remains open to understand if a general error distribution with heteroscedasticity can be handled in the same spirit that short-range dependences do not damage the minimax rate of convergence for ψ-tile regression but long-range dependence of the errors, when strong enough relative to the massiveness of the regression function class, can substantially slow down the rate of convergence as shown in the aforementioned articles. Another limitation of our work is that unlike results in <cit.> and <cit.>, our framework does not deal with endogeneity in the regression relationship. It remains to be seen how to modify our minimax upper and lower bounding techniques for identifying the minimax rate of convergence in a generality similar to the present work when instrumental variables are included. equationsection § APPENDIX §.§ A lemma on Kullback-Leibler divergence, Hellinger distance and L_2 distance We need a lemma for the proof of <Ref>. For the following result, μ is taken to be a constant and f_μ,σ (y)=1/σf(y-μ/σ). For the error distribution family, under <Ref> and <Ref> respectively, we have the following characterizations of the K-L divergence and Hellinger distance. * Under <Ref>, for any J>0, σ>1 and σ=σ^-1, we can find a constant C>0 (which may depend on f, J and σ) such that d^2_H(f_μ_1,σ_1,f_μ_2,σ_2)≥C(μ_1-μ_2)^2 holds for all μ_1,μ_2∈ [-J,J] and σ_1,σ_2∈ [σ,σ]. * Under <Ref>, for any J>0, σ>1 and σ=σ^-1, there exists a constant C>0 (which may depend on f, J and σ) such that D(f_μ_1,σ_1||f_μ_2,σ_2)≤C[(μ_1-μ_2)^2+(σ_1-σ_2)^2] holds for all μ_1,μ_2∈ [-J,J] and σ_1,σ_2∈ [σ,σ]. We first prove part 1. Suppose the conclusion does not hold. Then, there exists a sequence C_n→ 0, and we can find μ_1n,μ_2n in [-J,J] and σ_1n,σ_2n∈ [σ,σ] such that d^2_H(f_μ_1n,σ_1n,f_μ_2n,σ_2n)< C_n(μ_1n-μ_2n)^2. Since Hellinger distance is location and scale invariant, we have d^2_H(f_μ_1n,σ_1n,f_μ_2n,σ_2n)=d^2_H(f_0,1,f_κ_n,ϖ_n), where κ_n=μ_2n-μ_1n/σ_1n and ϖ_n=σ_2n/σ_1n. Since {(κ_n,ϖ_n), n≥ 1} are bounded, we must have a convergent subsequence n_k such that κ_n_k→κ and ϖ_n_k→ϖ for some κ∈ [-2J/σ,2J/σ] and ϖ∈ [σ/σ,σ/σ]. If κ_n_k→κ≠ 0 or ϖ_n_k→ϖ≠ 1, since our location-scale family is continuous under the Hellinger distance, together with that our location-scale family is identifiable, we have n→∞limd^2_H(f_0,1,f_κ_n_k,ϖ_n_k)=d^2_H(f_0,1,f_κ,ϖ)>0, which contradicts (<ref>) since C_n→ 0. Thus we must have κ_n_k→κ=0 and ϖ_n_k→ϖ=1. In the following, we denote √(f_κ,ϖ) as ξ_κ,ϖ. Based on <Ref>, taking Hellinger derivative of ξ_κ_n_k,ϖ_n_k at ξ_0,1 we have ξ_κ_n_k,ϖ_n_k=ξ_0,1+(κ_n_k,ϖ_n_k-1)ξ̇_0,1+r_κ_n_k,ϖ_n_k, where ξ̇_0,1=(ξ̇_(0,1)_κ,ξ̇_(0,1)_ϖ)^T is a two dimensional vector, r_κ_n_k,ϖ_n_k/√(κ_n_k^2+(ϖ_n_k-1)^2)=o(1), ‖ξ̇_(0,1)_κ‖>0 and ‖ξ̇_(0,1)_ϖ‖>0. Then we obtain d_H^2(f_0,1,f_κ_n_k,ϖ_n_k) =∫(ξ_0,1-ξ_κ_n_k,ϖ_n_k)^2dy =∫(κ_n_kξ̇_(0,1)_κ+(ϖ_n_k-1)ξ̇_(0,1)_ϖ+r_κ_n_k,ϖ_n_k)^2dy =∫(κ_n_kξ̇_(0,1)_κ+(ϖ_n_k-1)ξ̇_(0,1)_ϖ)^2dy+∫(r_κ_n_k,ϖ_n_k(y))^2dy     +2∫(κ_n_kξ̇_(0,1)_κ+(ϖ_n_k-1)ξ̇_(0,1)_ϖ)r_κ_n_k,ϖ_n_kdy. Since ξ̇_0,1_κ(y) and ξ̇_0,1_ϖ(y) are not linear related, as k →∞, we get ∫(κ_n_kξ̇_(0,1)_κ+(ϖ_n_k-1)ξ̇_(0,1)_ϖ)^2dy ≍κ_n_k^2+(ϖ_n_k-1)^2 ≽κ_n_k^2. 
By Cauchy-Schwarz inequality, we have 2∫(κ_n_kξ̇_(0,1)_κ+(ϖ_n_k-1)ξ̇_(0,1)_ϖ)r_κ_n_k,ϖ_n_kdy ≤ 2 ‖κ_n_kξ̇_(0,1)_κ+(ϖ_n_k-1)ξ̇_(0,1)_ϖ‖‖ r_κ_n_k,ϖ_n_k‖ =O(√(κ_n_k^2+(ϖ_n_k-1)^2))*o((√(κ_n_k^2+(ϖ_n_k-1)^2))) =o(κ_n_k^2+(ϖ_n_k-1)^2). Therefore, d_H^2(f_0,1,f_κ_n_k,ϖ_n_k) is at least of order κ_n_k^2, which contradicts <ref>. This proves the first part of the lemma. Now, we prove the second part. Suppose the conclusion does not hold. Then, there exists a sequence C_n→∞, and we can find μ_1n,μ_2n in [-J,J] and σ_1n,σ_2n∈ [σ,σ] such that D(f_μ_1n,σ_1n||f_μ_2n,σ_2n)> C_n[(μ_1n-μ_2n)^2+(σ_1n-σ_2n)^2]. We also have D(f_μ_1n,σ_1n||f_μ_2n,σ_2n)=D(f_0,1||f_κ_n,ϖ_n), where κ_n=μ_2n-μ_1n/σ_1n and ϖ_n=σ_2n/σ_1n. Since {(κ_n,ϖ_n), n≥ 1} are bounded, we must have a convergent subsequence n_l such that D(f_0,1||f_κ_n_l,ϖ_n_l)>C_n_l[κ_n_l^2+(1-ϖ_n_l^2)], which contradicts <Ref>, based on the earlier arguments. §.§ Proof of <Ref> To obtain the minimax rate of convergence, we shall derive lower and upper bounds that match in order under the squared L_2 distance. Let V_K(ε) denote the covering ε-entropy of {p_μ,σ,h:μ∈𝒰,σ∈Σ} under the square root of K-L divergence (recall that the covering ε-entropy is the logarithm of the minimum cardinality of ε-nets), where p_μ,σ,h is the joint density of (X,Y) with marginal density h(x) for X, regression function μ(x) and scale function σ(x). Let M_h_0(ε;𝒰) be the ε-packing entropy of 𝒰 under the L_2(h_0) distance. Based on part 2 of <Ref>, under the assumptions that the regression functions in the class 𝒰 are uniformly bounded and the scale functions in the class Σ are uniformly upper and lower bounded, we know D(p_μ_1,σ_1,h|| p_μ_2,σ_2,h) =D(h(x)f_μ_1,σ_1(y)|| h(x)f_μ_2,σ_2(y)) ≤ c(μ_1-μ_2^2_2,h+σ_1-σ_2^2_2,h) ≤ cC(μ_1-μ_2^2_2,h_0+σ_1-σ_2^2_2,h_0), where the constant c depends on the density f, constant J=μ∈𝒰sup‖μ‖_∞ and σ (recall 1/σ≤σ(x)≤σ for all σ∈Σ), and C is the constant in <Ref>. Therefore, we know V_K(ε) is upper bounded by order of M_h_0(c̃ε;𝒰)+M_h_0(c̃ε;Σ) for some constant c̃>0, which has the same order as M_h_0(ε;𝒰) by <Ref>. In addition, we can construct a covering set for {𝒰,Σ} under h_0. Let ε_n and ε_n be determined by V_K(ε_n)=nε_n^2, M_h_0(ε_n;𝒰)=4nε_n^2+2log 2. Below we first give a risk upper bound of relevant estimation in terms of the Hellinger distance and then show the squared L_2(h) risk on estimation of μ is actually upper bounded by a multiple of the Hellinger risk. We connect the joint density estimation with regression. Suppose we have a density estimator p̂_n(x,y) of the joint density of ( X, Y), which is of the form h(x)q̂_n(y|x). Let (μ̂,σ̂) be obtained by minimizing d_H(q̂_n(y|x),1/σ̃(x)f(y-μ̃(x)/σ̃(x))) over σ≤σ̃(x)≤σ and -J≤μ̃(x)≤ J at each x∈ X. Then because d_H(p_μ,σ,h,p̂_n)≥ d_H(p_μ̂,σ̂, h,p̂_n), we have d_H(p_μ,σ,h,p_μ̂,σ̂, h)≤ d_H(p_μ,σ,h,p̂_n)+d_H(p_μ̂,σ̂,h,p̂_n) ≤ 2d_H(p_μ,σ,h,p̂_n). We can construct an estimator p̂_n satisfying sup_μ∈𝒰,σ∈Σ,h∈ℋ𝔼D(p_μ,σ,h||p̂_n)≤ 2ε_n^2. To be more specific on how to construct the estimator p̂_n, let A_n and B_n be (c^*ε_n)-nets of 𝒰 and Σ, respectively, under ‖·‖_2,h_0, where c^* is a constant such that for any (μ,σ)∈𝒰×Σ, there exists μ̃∈ A_n and σ̃∈ B_n s.t. D(p_μ,σ,h||p_μ̃,σ̃,h)≤ε^2_n based on (<ref>). Then, G_ε_n=A_n× B_n is an ε_n-net of {𝒰,Σ} under the square root of K-L divergence. For each pair of (μ,σ)∈𝒰×Σ, there exists p_j:=p_μ_j,σ_j,h with (μ_j,σ_j)∈ G_ε_n such that D(p_μ,σ,h||p_j)≤ε_n^2. 
Let p^n:=p(x_1,y_1,x_2,y_2,⋯,x_n,y_n)=1/| G_ε_n|∑_i=1^| G_ε_n| p_i(x_1,y_1)p_i(x_2,y_2)⋯ p_i(x_n,y_n), we have D(p^n_μ,σ,h||p^n) =∫ p_μ,σ,h(x_1,y_1)⋯ p_μ,σ,h(x_n,y_n)logp_μ,σ,h(x_1,y_1)⋯ p_μ,σ,h(x_n,y_n)/1/| G_ε_n|∑_i=1^| G_ε_n| p_i(x_1,y_1)⋯ p_i(x_n,y_n)dx_1dy_1⋯ dx_ndy_n ≤∫ p_μ,σ,h(x_1,y_1)⋯ p_μ,σ,h(x_n,y_n)logp_μ,σ,h(x_1,y_1)⋯ p_μ,σ,h(x_n,y_n)/1/| G_ε_n| p_j(x_1,y_1)⋯ p_j(x_n,y_n)dx_1dy_1⋯ dx_ndy_n =log| G_ε_n|     +∫ p_μ,σ,h(x_1,y_1)⋯ p_μ,σ,h(x_n,y_n)logp_μ,σ,h(x_1,y_1)⋯ p_μ,σ,h(x_n,y_n)/p_j(x_1,y_1)⋯ p_j(x_n,y_n)dx_1dy_1⋯ dx_ndy_n =log| G_ε_n|+nD(p_μ,σ,h||p_j) ≤ V_K(ε_n)+nε_n^2. Note p(x_1,y_1,⋯,x_n,y_n)=p̃_1(x_1,y_1)p̃_2(x_2,y_2|x_1,y_1)⋯p̃_n(x_n,y_n|x^n-1,y^n-1), where x^k-1=(x_1,⋯,x_k-1) and y^k-1=(y_1,⋯,y_k-1) for 2≤ k≤ n. Let p̂_n=1/n[p̃_1(x,y)+p̃_2(x,y|x_1,y_1)+⋯ +p̃_n(x,y|x^n-1,y^n-1)], we have 𝔼D(p_μ,σ,h||p̂_n) =𝔼∫ p_μ,σ,hlogp_μ,σ,h/p̂_ndxdy ≤𝔼1/n∑_i=1^n∫ p_μ,σ,h(x_i,y_i)logp_μ,σ,h(x_i,y_i)/p̃_i(x_i,y_i|x^i-1,y^i-1)dx_idy_i =1/n∑_i=1^n𝔼logp_μ,σ,h(x_i,y_i)/p̃_i(x_i,y_i|x^i-1,y^i-1) =1/n𝔼logp_μ,σ,h(x_1,y_1)⋯ p_μ,σ,h(x_n,y_n)/p̃_1(x_1,y_1)p̃_2(x_2,y_2|x_1,y_1)⋯p̃_n(x_n,y_n|x^n-1,y^n-1)) =1/nD(p^n_μ,σ,h||p^n) ≤V_K(ε_n)/n+ε_n^2 =2ε_n^2. It is important to note that p̂_n is of the form h(x)q̂_n(y|x) and h needs not to be known in the construction of q̂_n(y|x). Since the Hellinger distance is upper bounded by the square root of K-L divergence, we have sup_μ∈𝒰,σ∈Σ,h∈ℋ𝔼d_H^2(p_μ,σ,h,p̂_n)≤ 2ε_n^2. It follows that sup_μ∈𝒰,σ∈Σ,h∈ℋ𝔼d_H^2(p_μ,σ,h,p_μ̂,σ̂,h) ≤ 4sup_μ∈𝒰,σ∈Σ,h∈ℋ𝔼d_H^2(p_μ,σ,h,p̂_n)≤ 8ε_n^2. Now we show that the squared L_2 distance is upper bounded by a multiple of the Hellinger distance through Hellinger differentiability. By part 1 of <Ref> we have for some C>0, d_H^2(f_μ (x),σ(x),f_μ̂ (x),σ̂(x))≥C(μ(x)-μ̂_n(x))^2. Furthermore, we have d_H^2(p_μ,σ,h,p̂_n) ≥1/4d_H^2(p_μ,σ,h,p_μ̂,σ̂, h) =1/4∫ d_H^2(f_μ (x),σ(x),f_μ̂ (x),σ̂(x))h(x)dx ≥1/4C∫ (μ(x)-μ̂_n(x))^2h(x)dx =1/4C‖μ-μ̂‖_2,h^2. From all above, we conclude 𝔼‖μ-μ̂‖_2,h^2≼ε_n^2. Then, let's derive the lower bound. It suffices to assume σ is a fixed constant assumed to be in Σ and focus on h_0 only, because the lower bound already matches the upper bound in order. With more choices in σ and h, the lower bound cannot be smaller. Let N_ε_n be the ε_n-packing set of 𝒰 under the L_2(h_0) distance. By Fano's inequality (See Theorem 1 in <cit.>), we have μ̂inf μ∈𝒰sup𝔼[‖μ-μ̂‖_2,h_0^2] ≥1/4(1-I(N_ε_n,Y^n)+log 2/log| N_ε_n|)ε_n^2, where I(N_ε_n,Y^n) is the Shannon's mutual information between the uniformly distributed random parameter on N_ε_n and the random sample. We have I(N_ε_n,Y^n)=∑_μ∈ N_ε_nω(μ)∫ p(y^n|μ)logp(y^n|μ)/p_ω(y^n)dy^n, in which ω(μ) is the uniform distribution on μ∈ N_ε_n and p_ω(y^n)=∑_μ∈ N_ε_nω(μ)p(y^n|μ) is the marginal distribution of y. Since the Bayes mixture density p_ω(y^n) minimize I(N_ε_n,Y^n) over other candidates of marginal distribution q(y^n), we have I(N_ε_n,Y^n)≤∑_μ∈ N_ε_nω(μ)∫ p(y^n|μ)logp(y^n|μ)/q(y^n)dy^n ≤max_μ∈ N_ε_n D(p_μ^n||q^n), where we denote p_μ,σ,h_0 as p_μ, because σ and h_0 are fixed. Let ω̃ be the uniform distribution on G_ε_n and let q(y^n)=p_ω̃(y^n)=∑_μ∈ N_ε_nω̃(μ)p(y^n|μ), we have that for any μ∈𝒰, D(p_μ^n||q^n) = Elogp(y^n|μ)/1/| G_ε_n|∑_μ^'∈ G_ε_np(y^n|μ^') ≤log| G_ε_n| + Elogp(y^n|μ)/p(y^n|μ^') = log| G_ε_n| + D(p_μ^n||p_μ^'^n) ≤ V_K(ε_n)+nε_n^2. Hence, we have 1-I(N_ε_n,Y^n)+log 2/log| N_ε_n|≥1/2, which leads to μ̂inf μ∈𝒰sup𝔼[‖μ-μ̂‖_2,h_0^2] ≥1/8ε_n^2. It follows μ̂inf μ∈𝒰,σ∈Σ, h∈ℋsup𝔼[‖μ-μ̂‖_2,h^2] ≥1/8ε_n^2. 
Under Conditions 1, 2 and 4, we know V_K(ε_n) and M_h_0(ε_n; 𝒰) are actually of the same order and the resulting ε_n and ε_n converge at the same rate. This completes the proof of <Ref>. §.§ Proof of Corollary 4 We define the parametric class ℬ={β^T x | x∈ [0,1]^d, β∈ℝ^d, ‖β‖_2=1}. Let 𝒮 be an ε/2p-net for the Hölder class Λ^α,γ(L) under the L_∞ distance with respect to the Lebesgue measure. Also, let 𝒯 be an (ε/2pL_*)^1/γ-net for the parametric class ℬ under the L_2 distance with respect to the probability measure on x, in which L_* is a constant and will be given later. Consider the class Q={λ_1(β_(1)^T x)+λ_2(β_(2)^T x)+⋯ +λ_p(β_(p)^T x): λ_i∈Λ^α,γ(L), β_(i)∈ℬ, 1≤ i≤ p}, for the pairs (λ̃_i,β̃_(i)), in which λ̃_i∈𝒮 and β̃_(i)∈𝒯 with 1≤ i≤ p, they form an ε-net for the class, as shown below. For 1≤ i≤ p, for any λ_i∈Λ^α,γ(L), there exists λ̃_i∈𝒮 so that ∫_[0,1]^d|λ_i(β_(i)^T x)-λ̃_i(β_(i)^T x)|^2q_x( x)d x ≤ε^2/4p^2, where q_x( x) is the joint density of x. Since the s-th derivative of λ̃_i satisfy γ-Hölder condition, we can easily verify that |λ̃_i(β_(i)^T x)-λ̃_i(β̃_(i)^T x)|≤ L_*|β_(i)^T x-β̃_(i)^T x|^γ, where L_* is a constant depending on L, s and d. For 1≤ i≤ p, since β_(i)∈ℬ and β̃_(i)∈𝒯, we obtain ∫_[0,1]^d|λ̃_i(β_(i)^T x)-λ̃_i(β̃_(i)^T x)|^2q_x( x)d x ≤ L_*^2∫_[0,1]^d|β_(i)^T x-β̃_(i)^T x|^2γq_x( x)λ(d x) ≤ L_*^2(∫_[0,1]^d|β_(i)^T x-β̃_(i)^T x|^2q_x( x)λ(d x))^γ ≤ε^2/4p^2. Hence, for 1≤ i≤ p, we have ∫_[0,1]^d|λ_i(β_(i)^T x)-λ̃_i(β̃_(i)^T x)|^2q_x( x)d x ≤ 2∫_[0,1]^d|λ_i(β_(i)^T x)-λ̃_i(β_(i)^T x)|^2q_x( x)λ(d x) +2∫_[0,1]^d|λ̃_i(β_(i)^T x)-λ̃_i(β̃_(i)^T x)|^2q_x( x)d x ≤ε^2/p^2. Therefore, we get ∫_[0,1]^d|λ_1(β_(1)^T x)+λ_2(β_(2)^T x)+⋯+ λ_p(β_(p)^T x)-λ̃_1(β̃_(1)^T x)-λ̃_2(β̃_(2)^T x)-⋯ -λ̃_p(β̃_(p)^T x)|^2q_x( x)d x ≤ p∫_[0,1]^d|λ_1(β_(1)^T x)-λ̃_1(β̃_(1)^T x)|^2q_x( x)d x+p∫_[0,1]^d|λ_2(β_(2)^T x)-λ̃_2(β̃_(2)^T x)|^2q_x( x)d x     +⋯+p∫_[0,1]^d|λ_p(β_(p)^T x)-λ̃_p(β̃_(p)^T x)|^2q_x( x)d x ≤ε^2. Based on the known metric entropy order of the Hölder class, we know the size of 𝒮 is of order (ε/2p)^-1/α+γ≍ε^-1/α+γ and the size of 𝒯 is of order log (2pL_*/ε)^1/γ≍log1/ε. It follows that the product of 𝒮 and 𝒯 is of size of order ε^-1/α+γ. Therefore the metric entropy of Q is of order ε^-1/α+γ. By <Ref>, we have the minimax rate of convergence under the squared L_2 loss for estimating the regression function is n^-2(α+γ)/2(α+γ)+1 for all α≥ 0 and 0<γ≤ 1. §.§ A useful proposition on Hellinger differentiability and its extension The following result is helpful for showing Hellinger differentiability. (<cit.>) Let ℒ={l_β(x):|β|<J} be a subset of nonnegative functions that are integrable with respect to Lebesgue measure λ for some J>0. Suppose that (i) the map (x,β)↦ l_β(x) is product measurable; (ii) the function β↦ l_β(x) is absolutely continuous on [-J,J] for almost all x; (iii) the function β↦ l_β(x) has almost sure derivative l̇_β(x); (iv) for each β the function ξ̇_β(x):=1/2l̇_β(x)I(l_β(x)>0)/√(l_β(x)) is square integrable and 𝔼ξ̇_β^2 ↦𝔼ξ̇_0^2 as β↦ 0. Then ℒ has Hellinger derivative ξ̇(x) with respect to β at 0. We need the following lemma, which is an extension of <Ref>, to prove Corollaries 1 and 2. Let g(y) denote the function from the location-scale family (<ref>) with location parameter μ=0 and scale parameter σ=1. 
Suppose that (i) the function g(y) is absolutely continuous for almost all y; (ii) the function g(y) has almost sure derivative ġ(y); (iii) the function ξ̇_(μ,1)_μ(y):=-1/2ġ(y-μ)I(g(y-μ)>0)/√(g(y-μ)) is square integrable and 𝔼ξ̇_(μ,1)_μ^2(y)→𝔼ξ̇_(0,1)_μ^2(y) as μ→ 0; (iv) yξ̇_(0,1)_μ(y) is square integrable; (v) ‖ξ̇_(0,σ)_μ(y)-ξ̇_(0,1)_μ(y)‖=O(|σ-1|) for σ near 1. Then ℱ has Hellinger derivative (ξ̇_(0,1)_μ(y),ξ̇_(0,1)_σ(y))^T with respect to (μ,σ) at (0,1), where ξ̇_(0,1)_σ(y) is defined as -1/2√(g(y))-y/2ġ(y)I(g(y)>0)/√(g(y)). For the regular differentiability, we have ∂ f_μ,σ/∂μ =-1/σ^2ġ(y-μ/σ), ∂ f_μ,σ/∂σ =-1/σ^2g(y-μ/σ)-y-μ/σ^3ġ(y-μ/σ). If the regular differentiation formulas were valid, the Hellinger derivative of f_μ,σ(y) with respect to μ would be ξ̇_(μ,σ)_μ=-1/2σ^3/2ġ(y-μ/σ)/√(g(y-μ/σ))I(g(y-μ/σ)>0), and the Hellinger derivative of f_μ,σ(y) with respect to σ would be ξ̇_(μ,σ)_σ=-1/2σ^3/2√(g(y-μ/σ))-y-μ/2σ^5/2ġ(y-μ/σ)/√(g(y-μ/σ))I(g(y-μ/σ)>0). Hence, if f_μ,σ(y) is Hellinger differentiable with respect to (μ,σ) at (0,1), we have (recall ξ_μ,σ=1/√(σ)√(g(y-μ/σ))) ξ_μ,σ =ξ_0,1+(μ,σ-1)(ξ̇_(0,1)_μ,ξ̇_(0,1)_σ)^T+r_μ,σ. Next, we verify that the above equations indeed are the Hellinger derivatives. We first verify that the Hellinger derivatives are square integrable. Given (i)-(iii), by <Ref>, we have f_μ,σ(y) is Hellinger differentiable with respect to μ at 0, and we know that ∫ξ̇^2_(0,1)_μdy<∞. Also, we have ∫ξ̇^2_(0,1)_σdy =∫ (-1/2√(g(y))+yξ̇_(0,1)_μ)^2dy =∫ [1/4g(y)+y^2ξ̇^2_(0,1)_μ-y√(g(y))ξ̇_(0,1)_μ]dy. By (iv) and Cauchy-Schwarz inequality, the above integral is finite. Hence, we have obtained that ξ̇_(0,1)_μ and ξ̇_(0,1)_σ are square integrable. Next, we verify that the residual term satisfies r_μ,σ=o(√(μ^2+(σ-1)^2)) near (0,1). We have ‖ r_μ,σ(y)‖ =‖ξ_μ,σ(y)-ξ_0,1(y)-μξ̇_(0,1)_μ(y)-(σ-1)ξ̇_(0,1)_σ(y)‖ =‖ξ_μ,1-ξ_0,1-μξ̇_(0,1)_μ+ξ_0,σ-ξ_0,1-(σ-1)ξ̇_(0,1)_σ+ξ_μ,σ-ξ_0,σ-ξ_μ,1+ξ_0,1‖ ≤‖ξ_μ,1-ξ_0,1-μξ̇_(0,1)_μ‖+‖ξ_0,σ-ξ_0,1-(σ-1)ξ̇_(0,1)_σ‖+‖ξ_μ,σ-ξ_0,σ-ξ_μ,1+ξ_0,1‖, where the upper bound follows from Minkowski inequality. Since f_μ,σ(y) is Hellinger differentiable with respect to μ at 0, we have the first term in the bound to be o(|μ|). Given (i) and (ii), the conditions (ii) and (iii) in <Ref> are satisfied for f_μ,σ(y) with respect to σ. Given (iii) and (iv), by <ref>, we have ξ̇_(0,1)_σ is square integrable and 𝔼ξ̇_(0,σ)_σ^2(y)→𝔼ξ̇_(0,1)_σ^2(y) as σ→ 1, which means (iv) in <Ref> holds (condition (i) in <Ref> is trivial). Hence, we obtain that f_μ,σ(y) is Hellinger differentiable with respect to σ at 1. Then we have the second term in the bound to be o(|σ-1|). Also, through (v), we have ‖ξ_μ,σ-ξ_0,σ-ξ_μ,1+ξ_0,1‖ =‖μξ̇_(0,σ)_μ+r_0,σ-μξ̇_(0,1)_μ-r_0,1‖ ≤‖μξ̇_(0,σ)_μ-μξ̇_(0,1)_μ‖ + ‖ r_0,σ‖+ ‖ r_0,1‖ = ‖μξ̇_(0,σ)_μ-μξ̇_(0,1)_μ‖+o(|μ|) =O(|μ(σ-1)|)+o(|μ|) =o(√(μ^2+(σ-1)^2)). Therefore, we obtain ‖ r_μ,σ(y)‖=o(√(μ^2+(σ-1)^2)), which completes the proof. chicago § SUPPLEMENTARY MATERIALS – PROOFS OF COROLLARIES 1 AND 2 §.§ Proof of <Ref> Based on <Ref>, we only need to show that <Ref> and <Ref> are satisfied for the error distribution location-scale family. §.§.§ Proof of <Ref> with ALD error distribution If the error distribution is ALD with location parameter η and scale parameter σ, our target location family has the density f_η,σ(y)=τ(1-τ)/σexp[-ρ_τ(y-η/σ)]. We first show <Ref> holds by showing D(f_0,1||f_η,σ) is twice differentiable with respect to (η,σ) at (0,1). Given τ, we have D(f_0,1||f_η,σ)=1/σ𝔼(ρ_τ(Y-η)-ρ_τ(Y)), where Y∼ f_0,1. 
If η>0, by direct calculation, we have 𝔼[ρ_τ(Y-η)] =𝔼(τ(Y-η)I(Y>η)+(τ-1)(Y-η)I(Y≤η)) =τ(1-τ)/σ[σ^2/τ^2e^-τ/ση+σ/τη+σ^2/1-τ-(1-τ)σ^2/τ^2]. If η≤ 0, similarly we have 𝔼[ρ_τ(Y-η)] =𝔼(τ(Y-η)I(Y>η)+(τ-1)(Y-η)I(Y≤η)) =τ(1-τ)/σ[σ^2/(1-τ)^2e^1-τ/ση-σ/1-τη+σ^2/τ-τσ^2/(1-τ)^2]. In particular, 𝔼ρ_τ(Y) =τ(1-τ)/σ(σ^2/τ+σ^2/1-τ)=σ. It is seen that the K-L divergence D(f_0,1||f_η,σ) is twice differentiable with respect to (η,σ) at (0,1). Hence, we have that <Ref> is satisfied. Next, we verify <Ref>. We check the Hellinger differentiability of the error distribution family by applying <Ref>. First, g(y) is absolutely continuous, which means <Ref>(i) holds. Second, for y≠ 0, g(y) has almost sure derivatives ġ(y)=τ(1-τ){-τexp[-τ yI(y>0)]+(1-τ)exp[(1-τ)yI(y<η)]}, which means <Ref>(ii) is satisfied. Also, for y≠η, ξ̇_(η,1)_η^2(y) is ξ̇_(η,1)_η^2(y) =τ(1-τ)/4{τ^2exp[-τ (y-η)I(y>η)]+(τ-1)^2exp[(1-τ)(y-η)I(y<η)]}, and ∫ξ̇_(η,1)_η^2(y)dy=τ(1-τ)/4, we have the function ξ̇_(η,1)_η(y):=-1/2ġ(y-η)I(g(y-η)>0)/√(g(y-η)) is squared integrable and 𝔼ξ̇_(η,1)_η^2(y)→𝔼ξ̇_(0,1)_η^2(y) as η→ 0, which implies <Ref>(iii). Moreover, since ∫ y^2ξ̇^2_(0,1)_η(y)dy =τ(1-τ)/4∫{τ^2y^2exp[-τ yI(y>0)]+(τ-1)^2y^2exp[(1-τ)yI(y<0)]}dy<∞, we have <Ref>(iv) holds. Finally, for each y, by Taylor expansion of the function s_σ(y)=ξ̇_(0,σ)_η(y) with respect to σ at 1, we have s_σ(y) = s_σ=1(y)+(σ-1)ṡ_σ=1(y)+o(|σ-1|), where ṡ_σ=1(y) is square integrable, we have ‖ξ̇_(0,σ)_η(y)-ξ̇_(0,1)_η(y)‖_2=‖ (σ-1)ṡ_σ=1(y)+o(|σ-1|)‖_2=O(|σ-1|). Hence, <Ref>(v) is satisfied and we have that the location-scale family {1/σf(x-η/σ),η∈ℝ,σ∈(0,+∞)} is Hellinger differentiable with respect to (η,σ) at (0,1). Therefore, <Ref> is satisfied. §.§.§ Proof of <Ref> with ACDTG error distribution Although we have the shape parameter α≥ 0, the following proof is valid only for α>0. Fortunately, when α=0, the ACDTG is the same as ALD, which has already been handled. For the error distribution ACDTG with location parameter η and scale parameter σ, our target location-scale family then has the density f_η,σ(y)=τ(1-τ)/σΓ(α+1,α)[α+ρ_τ(y-η/σ)]^αexp{-[α+ρ_τ(y-η/σ)]}. For the K-L divergence, given τ, we have D(f_0,1||f_η,σ)=α𝔼[log(α+ρ_τ(Y))-log(α+ρ_τ(Y-η/σ))]+𝔼[ρ_τ(Y-η/σ)-ρ_τ(Y)]+logσ, where Y∼ f_0,1. To show <Ref> is satisfied, it suffice to show that D(f_0,1||f_η,σ) is finite and twice differentiable with respect to (η,σ) at (0,1). If η>0, we have 𝔼log(α+ρ_τ(Y-η/σ)) =τ(1-τ)/Γ(α+1,α)[∫_η^∞log(α+τ (y-η/σ))(α+τ y)^αe^-(α+τ y)dy                    +∫^η_0log(α+(τ-1)(y-η/σ))(α+τ y)^αe^-(α+τ y)dy                    +∫_-∞^0log(α+(τ-1)(y-η/σ))(α+(τ-1) y)^αe^-(α+(τ-1)y)dy], and 𝔼log(α+ρ_τ(Y)) =τ(1-τ)/Γ(α+1,α)[∫_0^∞log(α+τ y)(α+τ y)^αe^-(α+τ y)dy                    +∫_-∞^0log(α+(τ-1)y)(α+(τ-1) y)^αe^-(α+(τ-1)y)dy]. Also, we have 𝔼ρ_τ(Y-η/σ) =𝔼(τ(Y-η/σ)I(Y>η)+(τ-1)(Y-η/σ)I(Y≤η)) =τ(1-τ)/Γ(α+1,α)[τ∫_η^∞(y-η/σ)(α+τ y)^α e^-(α+τ y)dy-(1-τ)∫_0^η(y-η/σ)(α+τ y)^α e^-(α+τ y)dy                        -(1-τ)∫_-∞^0(y-η/σ)(α+(τ-1) y)^α e^-(α+(τ-1) y)dy], and 𝔼ρ_τ(Y) =𝔼(τ YI(Y>0)+(τ-1)YI(Y≤ 0)) =τ(1-τ)/Γ(α+1,α)[τ∫_0^∞ y(α+τ y)^α e^-(α+τ y)dy              -(1-τ)∫_-∞^0 y(α+(τ-1)y)^α e^-(α+(τ-1)y)dy]. 
Thus, the K-L divergence between f_0,1 and f_η,σ is α𝔼[log(α+ρ_τ(Y))-log(α+ρ_τ(Y-η/σ))]+𝔼[ρ_τ(Y-η/σ)-ρ_τ(Y)]+logσ =ατ(1-τ)/Γ(α+1,α)[(∫_0^∞log(α+τ y)(α+τ y)^αe^-(α+τ y)dy             +∫_-∞^0log(α+(τ-1)y)(α+(τ-1) y)^αe^-(α+(τ-1)y)dy             -∫_η^∞log(α+τ (y-η/σ))(α+τ y)^αe^-(α+τ y)dy             -∫^η_0log(α+(τ-1)(y-η/σ))(α+τ y)^αe^-(α+τ y)dy             -∫_-∞^0log(α+(τ-1)(y-η/σ))(α+(τ-1) y)^αe^-(α+(τ-1)y)dy             +τ/α∫_η^∞(y-η/σ)(α+τ y)^α e^-(α+τ y)dy-1-τ/α∫_0^η(y-η/σ)(α+τ y)^α e^-(α+τ y)dy             -1-τ/α∫_-∞^0(y-η/σ)(α+(τ-1) y)^α e^-(α+(τ-1) y)dy             -τ/α∫_0^∞ y(α+τ y)^α e^-(α+τ y)dy+1-τ/α∫_-∞^0 y(α+(τ-1)y)^α e^-(α+(τ-1)y)dy]+logσ. It is straightforward to verify that the above K-L divergence is finite for each (η,σ). By the Leibnitz integral rule, take the derivative of D(f_0,1||f_η,σ) with respect to η, we have dD(f_0,1||f_η,σ)/dη=ατ(1-τ)/Γ(α+1,α)[∫_η^∞τ/ασ+τ(y-η)(α+τ y)^αe^-(α+τ y)dy                                -∫_0^η1-τ/ασ+(τ-1)(y-η)(α+τ y)^αe^-(α+τ y)dy                                -∫_-∞^01-τ/ασ+(τ-1)(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                                -1/ασ∫_η^∞τ(α+τ y)^αe^-(α+τ y)dy+1/ασ∫_0^η(1-τ)(α+τ y)^αe^-(α+τ y)dy                                +1/ασ∫_-∞^0(1-τ)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy]. Take the derivative of dD(f_0,1||f_η,σ)/dη with respect to η, we have d^2D(f_0,1||f_η,σ)/dη^2=ατ(1-τ)/Γ(α+1,α)[∫_η^∞τ^2/(ασ+τ(y-η))^2(α+τ y)^αe^-(α+τ y)dy                                     +∫_0^η(1-τ)^2/(ασ+(τ-1)(y-η))^2(α+τ y)^αe^-(α+τ y)dy                                     +∫_-∞^0(1-τ)^2/(ασ+(τ-1)(y-η))^2(α+(τ-1)y)^αe^-(α+(τ-1)y)dy. Take the derivative of dD(f_0,1||f_η,σ)/dη with respect to σ, we have d^2D(f_0,1||f_η,σ)/dη dσ=ατ(1-τ)/Γ(α+1,α)[-α∫_η^∞τ/(ασ+τ(y-η))^2(α+τ y)^αe^-(α+τ y)dy                               +α∫_0^η1-τ/(ασ+(τ-1)(y-η))^2(α+τ y)^αe^-(α+τ y)dy                               +α∫_-∞^01-τ/(ασ+(τ-1)(y-η))^2(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                               +1/ασ^2∫_η^∞τ(α+τ y)^αe^-(α+τ y)dy-1/ασ^2∫_0^η(1-τ)(α+τ y)^αe^-(α+τ y)dy                               -1/ασ^2∫_-∞^0(1-τ)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy]. Take the derivative of D(f_0,1||f_η,σ) with respect to σ, we have dD(f_0,1||f_η,σ)/dσ=ατ(1-τ)/Γ(α+1,α)[y-η/σ∫_η^∞τ/ασ+τ(y-η)(α+τ y)^αe^-(α+τ y)dy                             -y-η/σ∫_0^η1-τ/ασ+(τ-1)(y-η)(α+τ y)^αe^-(α+τ y)dy                             -y-η/σ∫_-∞^01-τ/ασ+(τ-1)(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                             -1/ασ^2∫_η^∞τ(α+τ y)^αe^-(α+τ y)dy+1/ασ^2∫_0^η(1-τ)(α+τ y)^αe^-(α+τ y)dy                             +1/ασ^2∫_-∞^0(1-τ)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy]+1/σ. Take the derivative of dD(f_0,1||f_η,σ)/dσ with respect to σ, we have d^2D(f_0,1||f_η,σ)/dσ^2=ατ(1-τ)/Γ(α+1,α)[-y-η/σ^2∫_η^∞τ/ασ+τ(y-η)(α+τ y)^αe^-(α+τ y)dy                         -α(y-η)/σ∫_η^∞τ/(ασ+τ(y-η))^2(α+τ y)^αe^-(α+τ y)dy                         +y-η/σ^2∫_0^η1-τ/ασ+(τ-1)(y-η)(α+τ y)^αe^-(α+τ y)dy                         +α(y-η)/σ∫_0^η1-τ/(ασ+(τ-1)(y-η))^2(α+τ y)^αe^-(α+τ y)dy                         +y-η/σ^2∫_-∞^01-τ/ασ+(τ-1)(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                         +α(y-η)/σ∫_-∞^01-τ/(ασ+(τ-1)(y-η))^2(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                         +2/ασ^3∫_η^∞τ(α+τ y)^αe^-(α+τ y)dy-2/ασ^3∫_0^η(1-τ)(α+τ y)^αe^-(α+τ y)dy                         -2/ασ^3∫_-∞^0(1-τ)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy]-1/σ^2. 
If η< 0, we obtain 𝔼log(α+ρ_τ(Y-η/σ)) =τ(1-τ)/Γ(α+1,α)[∫_0^∞log(α+τ (y-η/σ))(α+τ y)^αe^-(α+τ y)dy                    +∫_η^0log(α+τ(y-η/σ))(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                    +∫_-∞^ηlog(α+(τ-1)(y-η/σ))(α+(τ-1) y)^αe^-(α+(τ-1)y)dy], and 𝔼log(α+ρ_τ(Y)) =τ(1-τ)/Γ(α+1,α)[∫_0^∞log(α+τ y)(α+τ y)^αe^-(α+τ y)dy                    +∫_-∞^0log(α+(τ-1)y)(α+(τ-1) y)^αe^-(α+(τ-1)y)dy]. Also, we have 𝔼ρ_τ(Y-η/σ) =𝔼(τ(Y-η/σ)I(Y>η)+(τ-1)(Y-η/σ)I(Y≤η)) =τ(1-τ)/Γ(α+1,α)[τ∫_0^∞(y-η/σ)(α+τ y)^α e^-(α+τ y)dy+τ∫^0_η(y-η/σ)(α+(τ-1)y)^α e^-(α+(τ-1)y)dy                        -(1-τ)∫_-∞^η(y-η/σ)(α+(τ-1) y)^α e^-(α+(τ-1) y)dy], and 𝔼ρ_τ(Y) =𝔼(τ YI(Y>0)+(τ-1)YI(Y≤ 0)) =τ(1-τ)/Γ(α+1,α)[τ∫_0^∞ y(α+τ y)^α e^-(α+τ y)dy                    -(1-τ)∫_-∞^0 y(α+(τ-1)y)^α e^-(α+(τ-1)y)dy]. Thus, the K-L divergence between f_0,1 and f_η,σ is α𝔼[log(α+ρ_τ(Y))-log(α+ρ_τ(Y-η/σ))]+𝔼[ρ_τ(Y-η/σ)-ρ_τ(Y)] =ατ(1-τ)/Γ(α+1,α)[(∫_0^∞log(α+τ y)(α+τ y)^αe^-(α+τ y)dy       +∫_-∞^0log(α+(τ-1)y)(α+(τ-1) y)^αe^-(α+(τ-1)y)dy       -∫_0^∞log(α+τ (y-η/σ))(α+τ y)^αe^-(α+τ y)dy       -∫_η^0log(α+τ(y-η/σ))(α+(τ-1)y)^αe^-(α+(τ-1)y)dy       -∫_-∞^ηlog(α+(τ-1)(y-η/σ))(α+(τ-1) y)^αe^-(α+(τ-1)y)dy       +τ/α∫_0^∞(y-η/σ)(α+τ y)^α e^-(α+τ y)dy+τ/α∫^0_η(y-η/σ)(α+(τ-1)y)^α e^-(α+(τ-1)y)dy       -1-τ/α∫_-∞^η(y-η/σ)(α+(τ-1) y)^α e^-(α+(τ-1) y)dy       -τ/α∫_0^∞ y(α+τ y)^α e^-(α+τ y)dy+1-τ/α∫_-∞^0 y(α+(τ-1)y)^α e^-(α+(τ-1)y)dy]+logσ. Similarly as before, take the derivative of D(f_0,1||f_η,σ) with respect to η, we have dD(f_0,1||f_η,σ)/dη=ατ(1-τ)/Γ(α+1,α)[∫_0^∞τ/ασ+τ(y-η)(α+τ y)^αe^-(α+τ y)dy                           +∫^0_ητ/ασ+τ(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                           -∫_-∞^η1-τ/ασ+(τ-1)(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                           -1/ασ∫_0^∞τ(α+τ y)^αe^-(α+τ y)dy-1/ασ∫^0_ητ(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                           +1/ασ∫_-∞^η(1-τ)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy]. Take the derivative of dD(f_0,1||f_η,σ)/dη with respect to η, we have d^2D(f_0||f_η)/dη^2=ατ(1-τ)/Γ(α+1,α)[∫_0^∞τ^2/(ασ+τ(y-η))^2(α+τ y)^αe^-(α+τ y)dy                                         +∫^0_ητ^2/(ασ+τ(y-η))^2(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                                         +∫_-∞^η(1-τ)^2/(ασ+(τ-1)(y-ζ))^2(α+(τ-1)y)^αe^-(α+(τ-1)y)dy. Take the derivative of dD(f_0,1||f_η,σ)/dη with respect to σ, we have d^2D(f_0,1||f_η,σ)/dη dσ=ατ(1-τ)/Γ(α+1,α)[-α∫_0^∞τ/ασ+τ(y-η)(α+τ y)^αe^-(α+τ y)dy                           -α∫^0_ητ/ασ+τ(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                           +α∫_-∞^η1-τ/ασ+(τ-1)(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                           +1/ασ^2∫_0^∞τ(α+τ y)^αe^-(α+τ y)dy+1/ασ^2∫^0_ητ(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                           -1/ασ^2∫_-∞^η(1-τ)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy]. Take the derivative of D(f_0,1||f_η,σ) with respect to σ, we have dD(f_0,1||f_η,σ)/dσ=ατ(1-τ)/Γ(α+1,α)[y-η/σ∫_0^∞τ/ασ+τ(y-η)(α+τ y)^αe^-(α+τ y)dy                          +y-η/σ∫^0_ητ/ασ+τ(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                          -y-η/σ∫_-∞^η1-τ/ασ+(τ-1)(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                          -1/ασ^2∫_0^∞τ(α+τ y)^αe^-(α+τ y)dy-1/ασ^2∫^0_ητ(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                          +1/ασ^2∫_-∞^η(1-τ)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy]-1/σ. 
Take the derivative of dD(f_0,1||f_η,σ)/dσ with respect to σ, we have d^2D(f_0,1||f_η,σ)/dσ^2=ατ(1-τ)/Γ(α+1,α)[-y-η/σ^2∫_0^∞τ/ασ+τ(y-η)(α+τ y)^αe^-(α+τ y)dy                         -α(y-η)/σ∫_0^∞τ/(ασ+τ(y-η))^2(α+τ y)^αe^-(α+τ y)dy                         -y-η/σ^2∫^0_ητ/ασ+τ(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                         -α(y-η)/σ∫^0_ητ/(ασ+τ(y-η))^2(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                         +y-η/σ^2∫_-∞^η1-τ/ασ+(τ-1)(y-η)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                         +α(y-η)/σ∫_-∞^η1-τ/(ασ+(τ-1)(y-η))^2(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                         +2/ασ^3∫_0^∞τ(α+τ y)^αe^-(α+τ y)dy+2/ασ^3∫^0_ητ(α+(τ-1)y)^αe^-(α+(τ-1)y)dy                         -2/ασ^3∫_-∞^η(1-τ)(α+(τ-1)y)^αe^-(α+(τ-1)y)dy]-1/σ^2. Since d^2D(f_0,1||f_η,σ)/dη^2, d^2D(f_0,1||f_η,σ)/dη dσ and d^2D(f_0,1||f_η,σ)/dσ^2 are well defined and continuous with respect to (η,σ) near (0,1), we have D(f_0,1||f_η,σ) is second differentiable with respect to (η,σ) at (0,1), which means <Ref> is satisfied. Next, we verify <Ref>. By <Ref>, we check the Hellinger differentiability of the error distribution family. First, g(y) is absolutely continuous, which means <Ref>(i) holds. Second, for y≠ 0, g(y) has almost sure derivatives ġ(y) =τ(1-τ)/Γ(α+1,α){ατ[α+τ y]^α-1exp[-(α+τ y)]I(y>0)                         -τ[α+τ y]^αexp[-(α+τ y)]I(y>0)                         +α(τ-1)[α+(τ-1)y]^α-1exp[-(α+(τ-1)y]I(y<0)                         -(τ-1)[α+(τ-1)y]^αexp[-(α+(τ-1)y)]I(y<0)}, which means <Ref>(ii) is satisfied. For y≠η, ξ̇_(η,1)_η^2(y) is as following ξ̇_(η,1)_η^2(y) =τ(1-τ)/Γ(α+1,α){α^2τ^2/4[α+τ (y-η)]^α-2exp[-(α+τ (y-η))]I(y>η)         +τ^2/4[α+τ (y-η)]^αexp[-(α+τ (y-η)]I(y>η)         -ατ^2/2[α+τ (y-η)]^α-1exp[-(α+τ (y-η))]I(y>η)         +α^2(τ-1)^2/4[α+(τ-1) (y-η)]^α-2exp[-(α+(τ-1)(y-η))]I(y< η)         +(τ-1)^2/4[α+(τ-1)(y-η)]^αexp[-(α+(τ-1)(y-η))]I(y< η)         -α(τ-1)^2/2[α+(τ-1)(y-η)]^α-1exp[-(α+(τ-1)(y-η))]I(y< η)}. Through a direct calculation, we can get the upper bound of integral of ξ̇_(η,1)_η^2(y) ∫ξ̇_(η,1)_η^2(y)dy ≤τ(1-τ)/Γ(α+1,α){max{α^2τ^2/4,α^2(τ-1)^2/4}∫_-∞^∞[α+ρ(y-η)]^α-2exp[-(α+ρ(y-η))]dy                        +max{τ^2/4,(τ-1)^2/4}∫_-∞^∞[α+ρ(y-η)]^αexp[-(α+ρ(y-η)]dy                        -min{ατ^2/2,α(τ-1)^2/2}∫_-∞^∞[α+ρ(y-η)]^α-1exp[-(α+ρ(y-η))]dy}. Also, we have ∫_-∞^∞[α+ρ(y-η)]^αexp[-(α+ρ(y-η))]dy=Γ(α+1,α)/τ(1-τ), ∫_-∞^∞[α+ρ(y-η)]^α-1exp[-(α+ρ(y-η))]dy≤Γ(α+1,α)/ατ(1-τ), ∫_-∞^∞[α+ρ(y-η)]^α-2exp[-(α+ρ(y-η))]dy≤Γ(α+1,α)/α^2τ(1-τ). Therefore, we obtain ∫ξ̇_(η,1)_η^2(y)dy≤max{τ^2/2,(τ-1)^2/2}, which is a constant for given α and τ. Hence, we know that the function ξ̇_(η,1)_η(y):=-1/2ġ(y-η)I(g(y-η)>0)/√(g(y-η)) is square integrable and 𝔼ξ̇_(η,1)_η^2(y)→𝔼ξ̇_(0,1)_η^2(y) as η→ 0, which implies <Ref>(iii). Moreover, since ∫ y^2ξ̇^2_(0,1)_η(y)dy =τ(1-τ)/Γ(α+1,α)∫ y^2{α^2τ^2/4(α+τ y)^α-2exp[-(α+τ y)]I(y>0)        +τ^2/4(α+τ y)^αexp[-(α+τ y)]I(y>0)        -ατ^2/2(α+τ y)^α-1exp[-(α+τ y)]I(y>0)        +α^2(τ-1)^2/4[α+(τ-1)y]^α-2exp[-(α+(τ-1)y)]I(y<0)        +(τ-1)^2/4[α+(τ-1)y]^αexp[-(α+(τ-1)y)]I(y<0)        -α(τ-1)^2/2[α+(τ-1)y]^α-1exp[-(α+(τ-1)y)]I(y<0)}dy<∞, we have that <Ref>(iv) holds. Finally, for each y, by Taylor expansion of the function s_σ(y)=ξ̇_(0,σ)_η(y) with respect to σ at 1, we have s_σ(y) = s_σ=1(y)+(σ-1)ṡ_σ=1(y)+o(|σ-1|), where ṡ_σ=1(y) is square integrable, we have ‖ξ̇_(0,σ)_η(y)-ξ̇_(0,1)_η(y)‖_2=‖ (σ-1)ṡ_σ=1(y)+o(|σ-1|)‖_2=O(|σ-1|). 
Hence, <Ref>(v) is satisfied and we obtain that the location-scale family {1/σf(x-η/σ),η∈ℝ,σ∈(0,+∞)} is Hellinger differentiable with respect to (η,σ) at (0,1). Therefore, <Ref> is satisfied. §.§.§ Proof of <Ref> with CNL error distribution Our location-scale family has density f_η,σ(y)=1-τ/βσexp[-1/βσ(y-η)]I(y> η)+2τ/√(2π)σexp[-(y-η)^2/2σ^2]I(y≤η),-∞<y<∞. We first show <Ref> holds. If η>0, we have D(f_0,1||f_η,σ) =𝔼logf_0,1/f_η,σ =𝔼log1-τ/βe^-1/βyI(y>0)+2τ/√(2π)e^-y^2/2I(y≤ 0)/1-τ/βσe^-1/βσ(y-η)I(y>η)+2τ/√(2π)σe^-(y-η)^2/2σ^2I(y≤η) =1-τ/β∫_η^∞ (1/βσ(y-η)-1/βy+logσ)e^-1/βydy     +1-τ/β∫_0^η[log((1-τ)√(2π)σ/2βτ)-1/βy+(y-η)^2/2σ^2]e^-1/βydy     +2τ/√(2π)∫_-∞^0(logσ-y^2/2+(y-η)^2/2σ^2)e^-y^2/2dy. By Leibniz rule, we have dD(f_0,1||f_η,σ)/dη =-1-τ/βσ e^-1/βη+1-τ/βlog((1-τ)√(2π)/2βτ)e^-1/βη      +1-τ/βσ^2∫_0^η(η-y)e^-1/βydy+2τ/√(2π)σ^2∫_-∞^0(η-y)e^-y^2/2dy, d^2D(f_0||f_η)/dη^2 =1-τ/β^2σe^-1/βη-1-τ/β^2log((1-τ)√(2π)/2βτ)e^-1/βη     +1-τ/βσ^2∫_0^η e^-1/βydy+2τ/√(2π)σ^2∫_-∞^0 e^-y^2/2dy, d^2D(f_0,1||f_η,σ)/dη dσ =1-τ/βσ^2 e^-1/βη-2(1-τ)/βσ^3∫_0^η(η-y)e^-1/βydy-4τ/√(2π)σ^3∫_-∞^0(η-y)e^-y^2/2dy, dD(f_0,1||f_η,σ)/dσ =1-τ/β∫_η^∞[1/σ-1/βσ^2(y-η)]e^-1/βydy +1-τ/β∫_0^η[1/σ-(y-η)^2/σ^3]e^-1/βydy +2τ/√(2π)∫_-∞^0[1/σ-(y-η)^2/σ^3]e^-y^2/2dy, and d^2D(f_0,1||f_η,σ)/dσ^2 =1-τ/β∫_η^∞[-1/σ^2+2/βσ^3(y-η)]e^-1/βydy +1-τ/β∫_0^η[-1/σ^2+3(y-η)^2/σ^4]e^-1/βydy +2τ/√(2π)∫_-∞^0[-1/σ^2+3(y-η)^2/σ^4]e^-y^2/2dy. Hence, if η>0, take the second derivative with respect to (η,σ) at (0,1) and the Hessian matrix at (η,σ)=(0,1) is ( [ 1-τ/β^2-1-τ/β^2log(1-τ)√(2π)/2βτ+τ 1-τ/β+4τ/√(2π); 1-τ/β+4τ/√(2π) 1+τ; ]) If η<0, we have D(f_0,1||f_η,σ) =𝔼logf_0,1/f_η,σ =𝔼log1-τ/βe^-1/βyI(y>0)+2τ/√(2π)e^-y^2/2I(y≤ 0)/1-τ/βσe^-1/βσ(y-η)I(y>η)+2τ/√(2π)σe^-(y-η)^2/2σ^2I(y≤η) =1-τ/β∫_0^∞ [logσ-1/βy+1/βσ(y-η)] e^-1/βydy     +2τ/√(2π)∫_η^0[log(2τβσ/(1-τ)√(2π))-y^2/2+1/βσ(y-η)]e^-y^2/2dy     +2τ/√(2π)∫_-∞^η[logσ-y^2/2+(y-η)^2/2σ^2]e^-y^2/2dy. By Leibniz rule, we have dD(f_0,1||f_η,σ)/dη =-1-τ/βσ-log(2τβ/(1-τ)√(2π))2τ/√(2π)e^-η^2/2-2τ/√(2π)βσ∫_η^0e^-y^2/2dy      +2τ/√(2π)σ^2∫_-∞^η(η-y)e^-y^2/2dy, d^2D(f_0,1||f_η,σ)/dη^2 =log(2τβ/(1-τ)√(2π))2τη/√(2π)e^-η^2/2+2τ/√(2π)βσe^-η^2/2      +2τ/√(2π)σ^2∫_-∞^η e^-y^2/2dy, d^2D(f_0,1||f_η,σ)/dη dσ =1-τ/βσ^2+2τ/√(2π)βσ^2∫_η^0e^-y^2/2dy      -4τ/√(2π)σ^3∫_-∞^η(η-y)e^-y^2/2dy, dD(f_0,1||f_η,σ)/dσ =1-τ/β∫_0^∞ [1/σ-1/βσ^2(y-η)] e^-1/βydy     +2τ/√(2π)∫_η^0[1/σ-1/βσ^2(y-η)]e^-y^2/2dy     +2τ/√(2π)∫_-∞^η[1/σ-(y-η)^2/σ^3]e^-y^2/2dy, and d^2D(f_0,1||f_η,σ)/dσ^2 =1-τ/β∫_0^∞ [-1/σ^2+2/βσ^3(y-η)] e^-1/βydy     +2τ/√(2π)∫_η^0[-1/σ^2+2/βσ^3(y-η)]e^-y^2/2dy     +2τ/√(2π)∫_-∞^η[-1/σ^2+3(y-η)^2/σ^4]e^-y^2/2dy. Hence, if η<0, take the second derivative with respect to (η,σ) at (0,1) and the Hessian matrix at (η,σ)=(0,1) is ( [ 2τ/√(2π)β+τ 1-τ/β+4τ/√(2π); 1-τ/β +4τ/√(2π) 1+τ; ]) If 2τ/√(2π)=1-τ/β, we have the Hessian matrix for η>0 and η<0 are equivalent. Also, since d^2D(f_0,1||f_η,σ)/dη^2, d^2D(f_0,1||f_η,σ)/dη dσ and d^2D(f_0,1||f_η,σ)/dσ^2 are well defined and continuous with respect to (η,σ) near (0,1), we have D(f_0,1||f_η,σ) is twice differentiable with respect to (η,σ) at (0,1). Hence, <Ref> is satisfied. Next, we verify <Ref>. By <Ref>, we check the Hellinger differentiability of the error distribution. First, g(y) is absolutely continuous, which implies <Ref>(i) Second, for y≠ 0, g(y) has almost sure derivatives ġ(y)=-1-τ/β^2exp[-1/βyI(y>0)]-2τ y/√(2π)exp[-y^2/2I(y<0)], which means <Ref>(ii) holds. 
Moreover, for y≠η, ξ̇_(η,1)_η^2(y) is ξ̇_(η,1)_η^2(y) =1-τ/4β^3exp[-1/β(y-η)I(y>η)]+τ(y-η)^2/2√(2π)exp[-(y-η)^2/2I(y<η)], and ∫ξ̇_(η,1)_η^2(y)dy=1-τ/4β^2+τ/4, we have that the function ξ̇_(η,1)_η(y):=-1/2ġ(y-η)I(g(y-η)>0)/√(g(y-η)) is squared integrable and 𝔼ξ̇_(η,1)_η^2(y)→𝔼ξ̇_(0,1)_η^2(y) as η→ 0, which means <Ref>(iii) is satisfied. Moreover, since ∫ y^2ξ̇_(0,1)_η^2(y)dy=1-τ/4β^3∫ y^2exp[-1/βyI(y>0)]dy+τ/2√(2π)∫ y^4exp[-y^2/2I(y<η)]dy<∞, we have <Ref>(iv) holds. Finally, for each y, by Taylor expansion of the function s_σ(y)=ξ̇_(0,σ)_η(y) with respect to σ at 1, we have s_σ(y) = s_σ=1(y)+(σ-1)ṡ_σ=1(y)+o(|σ-1|), where ṡ_σ=1(y) is square integrable, we obtain ‖ξ̇_(0,σ)_η(y)-ξ̇_(0,1)_η(y)‖_2=‖ (σ-1)ṡ_σ=1(y)+o(|σ-1|)‖_2=O(|σ-1|). Hence, <Ref>(v) is satisfied and the location-scale family {1/σf(x-η/σ),η∈ℝ,σ∈ (0,+∞)} is Hellinger differentiable with respect to (η,σ) at (0,1). Therefore, <Ref> is satisfied. §.§.§ Proof of <Ref> with Cauchy error distribution We first show <Ref> holds by showing D(f_0,1||f_η,σ) is twice differentiable with respect to (η,σ) at (0,1). By <cit.>, we have D(f_0,1||f_η,σ)=log[(1+σ)^2+η^2/4σ]. It is seen that the K-L divergence D(f_0,1||f_η,σ) is twice differentiable with respect to (η,σ) at (0,1). Hence, we have that <Ref> is satisfied. Next, by <Ref>, we verify <Ref>. It is obvious that Cauchy density is absolutely continuous and differentiable, hence <Ref>(i) and <Ref>(ii) hold. Also, since ∫ξ̇_(η,1)_η^2(y)dy=1/π∫y^2/(1+y^2)^3dy=1/π[(1/8arctan y +y^3-y/8(y^2+1)^2)|^∞_-∞]=1/8, and ∫ y^2ξ̇_(η,1)_η^2(y)dy=1/π^2∫y^4/(1+y^2)^3dy=1/π^2[(3/8arctan y -5y^3+3y/8(y^2+1)^2)|^∞_-∞]=3/8π, we have that <Ref>(iii) and <Ref>(iv) are satisfied. Finally, for each y, by Taylor expansion of the function s_σ(y)=ξ̇_(0,σ)_η(y) at σ=1, we have s_σ(y)=s_σ=1(y)+(σ-1)ṡ_σ=1(y)+o(|σ-1|), where ṡ_σ=1(y) is square integrable, we have ‖ξ̇_(0,σ)_η(y)- ξ̇_(0,1)_η(y)‖_2=‖ (σ-1)ṡ_σ=1(y)+o(|σ-1|)‖=O(|σ-1|), which means <Ref>(v) holds. Therefore, we have verified <Ref>. §.§.§ Proof of <Ref> Our location-scale family has density f_μ,σ(y)=2/√(π)σ√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ)exp[-ϕ/σ^2(y-η)^2I(y≥η)-1-ϕ/σ^2(y-η)^2I(y<η)], -∞<y<∞, We first show <Ref> holds. If η > 0, we have D(f_0,1||f_μ,σ) =𝔼logf_0,1/f_μ,σ =𝔼logσexp [-ϕ y^2I(y≥ 0)-(1-ϕ)y^2I(y<0)]/exp [-ϕ/σ^2(y-η)^2I(y≥η)-1-ϕ/σ^2(y-η)^2I(y< η)] =logσ +2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_η^∞ [ϕ/σ^2(y-η)^2-ϕ y^2]e^-ϕ y^2dy      +∫_0^η [1-ϕ/σ^2(y-η)^2-ϕ y^2]e^-ϕ y^2dy      +∫_-∞^0 [1-ϕ/σ^2(y-η)^2-(1-ϕ) y^2]e^-(1-ϕ) y^2dy}. By Leibniz rule, we have dD(f_0,1||f_μ,σ)/dη = 2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_η^∞ [-2ϕ/σ^2(y-η)]e^-ϕ y^2dy      +∫^η_0 [-2(1-ϕ)/σ^2(y-η)]e^-ϕ y^2dy+∫_-∞^0 [-2(1-ϕ)/σ^2(y-η)]e^-(1-ϕ) y^2dy}, d^2D(f_0,1||f_μ,σ)/dη^2 = 2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_η^∞2ϕ/σ^2e^-ϕ y^2dy      +∫^η_0 2(1-ϕ)/σ^2e^-ϕ y^2dy+∫_-∞^0 2(1-ϕ)/σ^2e^-(1-ϕ) y^2dy}, d^2D(f_0,1||f_μ,σ)/dη dσ = 2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_η^∞ [4ϕ/σ^3(y-η)]e^-ϕ y^2dy      +∫^η_0 [4(1-ϕ)/σ^3(y-η)]e^-ϕ y^2dy+∫_-∞^0 [4(1-ϕ)/σ^3(y-η)]e^-(1-ϕ) y^2dy}, dD(f_0,1||f_μ,σ)/dσ =1/σ+2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_η^∞ [-2ϕ/σ^3(y-η)^2]e^-ϕ y^2dy      +∫^η_0 [-2(1-ϕ)/σ^3(y-η)^2]e^-ϕ y^2dy+∫_-∞^0 [-2(1-ϕ)/σ^3(y-η)^2]e^-(1-ϕ) y^2dy}, and d^2D(f_0,1||f_μ,σ)/dσ^2 =-1/σ^2+2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_η^∞ [6ϕ/σ^4(y-η)^2]e^-ϕ y^2dy      +∫_0^η [6(1-ϕ)/σ^4(y-η)^2]e^-ϕ y^2dy+∫_-∞^0 [6(1-ϕ)/σ^4(y-η)^2]e^-(1-ϕ) y^2dy}. 
Hence, if η>0, take the second derivative with respect to (η,σ) at (0,1) and we get the Hessian matrix at (η,σ)=(0,1) is ( [ 2√(ϕ(1-ϕ)) 4/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ); 4/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ) 2; ]) If η<0, we have D(f_0,1||f_μ,σ) =𝔼logf_0,1/f_μ,σ =𝔼logσexp [-ϕ y^2I(y≥ 0)-(1-ϕ)y^2I(y<0)]/exp [-ϕ/σ^2(y-η)^2I(y≥η)-1-ϕ/σ^2(y-η)^2I(y< η)] =logσ +2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_0^∞ [ϕ/σ^2(y-η)^2-ϕ y^2]e^-ϕ y^2dy      +∫^0_η [ϕ/σ^2(y-η)^2-(1-ϕ) y^2]e^-(1-ϕ) y^2dy      +∫_-∞^η [1-ϕ/σ^2(y-η)^2-(1-ϕ) y^2]e^-(1-ϕ) y^2dy}. By Leibniz rule, we have dD(f_0,1||f_μ,σ)/dη = 2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_0^∞ [-2ϕ/σ^2(y-η)]e^-ϕ y^2dy      +∫_η^0 [-2ϕ/σ^2(y-η)]e^-(1-ϕ) y^2dy+∫_-∞^η [-2(1-ϕ)/σ^2(y-η)]e^-(1-ϕ) y^2dy}, d^2D(f_0,1||f_μ,σ)/dη^2 = 2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_0^∞2ϕ/σ^2e^-ϕ y^2dy      +∫_η^0 2ϕ/σ^2e^-(1-ϕ) y^2dy+∫_-∞^η2(1-ϕ)/σ^2e^-(1-ϕ) y^2dy}, d^2D(f_0,1||f_μ,σ)/dη dσ = 2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_0^∞ [4ϕ/σ^3(y-η)]e^-ϕ y^2dy      +∫_η^0 [4ϕ/σ^3(y-η)]e^-(1-ϕ) y^2dy+∫_-∞^η [4(1-ϕ)/σ^3(y-η)]e^-(1-ϕ) y^2dy}, dD(f_0,1||f_μ,σ)/dσ =1/σ+2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_0^∞ [-2ϕ/σ^3(y-η)^2]e^-ϕ y^2dy      +∫_η^0 [-2ϕ/σ^3(y-η)^2]e^-(1-ϕ) y^2dy+∫_-∞^η [-2(1-ϕ)/σ^3(y-η)^2]e^-(1-ϕ) y^2dy}, and d^2D(f_0,1||f_μ,σ)/dσ^2 =-1/σ^2+2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ){∫_0^∞ [6ϕ/σ^4(y-η)^2]e^-ϕ y^2dy      +∫_η^0 [6ϕ/σ^4(y-η)^2]e^-(1-ϕ) y^2dy+∫_-∞^η [6(1-ϕ)/σ^4(y-η)^2]e^-(1-ϕ) y^2dy}. Hence, if η>0, take the second derivative with respect to (η,σ) at (0,1) and we get the Hessian matrix at (η,σ)=(0,1) is ( [ 2√(ϕ(1-ϕ)) 4/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ); 4/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ) 2; ]) We have the Hessian matrix for η>0 and η<0 are equivalent. Also, since d^2D(f_0,1||f_η,σ)/dη^2, d^2D(f_0,1||f_η,σ)/dη dσ and d^2D(f_0,1||f_η,σ)/dσ^2 are well defined and continuous with respect to (η,σ) near (0,1), we have D(f_0,1||f_η,σ) is twice differentiable with respect to (η,σ) at (0,1). Hence, <Ref> is satisfied. Next, we verify <Ref>. By <Ref>, we check the Hellinger differentiability of the error distribution. First, g(y) is absolutely continuous, which means <Ref>(i) is satisfied. Second, g(y) has almost sure derivatives ġ(y)=-4/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ)[ϕ ye^-ϕ y^2I(y≥ 0)+(1-ϕ)ye^-(1-ϕ)y^2I(y<0)], which means <Ref>(ii) holds. Moreover, for y≠η, ξ̇_(η,1)_η^2(y) is ξ̇_(η,1)_η^2(y) =2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ)[ϕ (y-η)e^-ϕ (y-η)^2I(y≥η)+(1-ϕ)(y-η)e^-(1-ϕ)(y-η)^2I(y<η)], and ∫ξ̇_(η,1)_η^2(y)dy=2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ), we have that the function ξ̇_(η,1)_η(y):=-1/2ġ(y-η)I(g(y-η)>0)/√(g(y-η)) is squared integrable and 𝔼ξ̇_(η,1)_η^2(y)→𝔼ξ̇_(0,1)_η^2(y) as η→ 0, which implies <Ref>(iii). Moreover, since ∫ y^2ξ̇_(0,1)_η^2(y)dy=2/√(π)√(ϕ(1-ϕ))/√(ϕ)+√(1-ϕ)∫[ϕ y^3e^-ϕ y^2I(y≥ 0)+(1-ϕ)y^3e^-(1-ϕ)y^2I(y<0)]dy<∞, we have <Ref>(iv) holds. Finally, for each y, by Taylor expansion of the function s_σ(y)=ξ̇_(0,σ)_η(y) with respect to σ at 1, we have s_σ(y) = s_σ=1(y)+(σ-1)ṡ_σ=1(y)+o(|σ-1|), where ṡ_σ=1(y) is square integrable, we have ‖ξ̇_(0,σ)_η(y)-ξ̇_(0,1)_η(y)‖_2=‖ (σ-1)ṡ_σ=1(y)+o(|σ-1|)‖_2=O(|σ-1|). Hence, <Ref>(v) holds and the location-scale family {1/σf(x-η/σ),η∈ℝ,σ∈ (0,+∞)} is Hellinger differentiable with respect to (η,σ) at (0,1). Therefore, <Ref> is satisfied.
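The Cauchy and asymmetric-normal cases above admit equally simple numerical sanity checks. The sketch below is illustrative only (the values of η, σ and ϕ are assumptions): it verifies the closed-form Cauchy divergence log[((1+σ)^2+η^2)/(4σ)] by quadrature and confirms that the asymmetric-normal density integrates to one.

```python
# Sketch: numerical checks for the Cauchy and asymmetric-normal cases above.
# The values of eta, sig and phi are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

def cauchy(y, eta=0.0, sig=1.0):
    return sig / (np.pi * (sig**2 + (y - eta)**2))

def kl_cauchy(eta, sig):       # D(f_{0,1} || f_{eta,sigma}) by quadrature
    return quad(lambda y: cauchy(y) * np.log(cauchy(y) / cauchy(y, eta, sig)),
                -np.inf, np.inf, limit=500)[0]

eta, sig = 0.7, 1.4
print(kl_cauchy(eta, sig))                           # quadrature value
print(np.log(((1 + sig)**2 + eta**2) / (4 * sig)))   # closed form: should agree

phi = 0.3
c = 2 / np.sqrt(np.pi) * np.sqrt(phi * (1 - phi)) / (np.sqrt(phi) + np.sqrt(1 - phi))
dens = lambda y: c * np.exp(-(phi if y >= 0 else 1 - phi) * y**2)
print(quad(dens, -np.inf, np.inf)[0])                # ~ 1.0  (normalization)
```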
http://arxiv.org/abs/2307.01253v1
20230703180001
Geometric Stiffness in Interlayer Exciton Condensates
[ "Nishchhal Verma", "Daniele Guerci", "Raquel Queiroz" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
These authors contributed equally Department of Physics, Columbia University, New York, NY 10027, USA These authors contributed equally Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA [email protected] Department of Physics, Columbia University, New York, NY 10027, USA Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA Recent experiments have confirmed the presence of interlayer excitons in the ground state of transition metal dichalcogenide (TMD) bilayers. The interlayer excitons are expected to show remarkable transport properties when they undergo Bose condensation. In this work, we demonstrate that quantum geometry of Bloch wavefunctions plays an important role in the phase stiffness of the Interlayer Exciton Condensate (IEC). Notably, we identify a geometric contribution that amplifies the stiffness, leading to the formation of a robust condensate with an increased BKT temperature. Our results have direct implications for the ongoing experimental efforts on interlayer excitons in materials that have non-trivial geometry. We provide quantitative estimates for the geometric contribution in TMD bilayers through a realistic continuum model with gated Coulomb interaction, and find that the substantially increased stiffness allows for an IEC to be realized at amenable experimental conditions. Geometric Stiffness in Interlayer Exciton Condensates Raquel Queiroz August 1, 2023 ===================================================== *Introduction.— Advancements in topological quantum matter have drawn attention to the crucial role of Bloch wavefunctions in diverse condensed matter systems. While the influence of Berry curvature on non-interacting electrons is well-understood as an anomalous velocity <cit.>, a closely related quantity known as the quantum metric has recently gained significant attention, particularly in the context of flat-band superconductivity and related experiments in moiré heterostructures <cit.>. Derived from the geometric properties of Bloch wavefunctions, the quantum metric has profound effects on various facets of superconductivity. Notably, it modifies the mass of Cooper pairs <cit.>, phase stiffness <cit.>, spectral weight <cit.>, and potentially the critical temperature <cit.>. Other than superconductivity, quantum geometry is known to appear in current noise spectrum <cit.>, dielectric response <cit.>, electron-phonon coupling <cit.>, plasmons <cit.> and nonlinear response <cit.>. In this work, we target Interlayer Exciton Condensates (IECs), and reveal a significant geometric contribution to the phase stiffness that results in a more robust condensate characterized by a higher Berezinskii-Kosterlitz-Thouless (BKT) transition temperature. Excitons, bound states of electrons and holes in semiconductors, are bosons that have long been proposed to form a Bose-Einstein condensate (BEC) at low temperatures <cit.>. Unlike conventional BECs, exciton condensates conserve total particle number and instead break a U(1)_e × U(1)_h symmetry that corresponds to separate conservation of electrons and holes <cit.>. This symmetry is experimentally realizable in a bilayer system with a spacer that suppresses single-particle tunneling between the layers. If the electrons reside on the top (t) and holes in bottom (b) layers, IEC is formed by the spontaneous breaking of U(1)_t × U(1)_b symmetry <cit.>. Its superfluid properties have been observed in quantum Hall systems <cit.>. 
Although an exciton condensate arising intrinsically in a real material has been a challenge, there has been quite significant progress in three-dimensional semimetal 1T-TiSe_2 <cit.> and TMD bilayer WSe_2/ MoSe_2 <cit.>. In particular, a recent experiment <cit.> has established the existence of interlayer excitons in the ground state by capacitance measurements and characterized the exciton Mott transition <cit.> as a function of the density of electron-hole pairs. These interlayer excitons have finite dipole moment and interact via dipole-dipole interaction. Thus, as a virtue of interacting bosons in 2D, it is possible that there is condensation at low enough temperatures <cit.>. The question of whether the condensate exists in the intrinsic ground state of the system requires transport experiments to be settled <cit.>. IECs display fascinating transport properties, most notably dissipationless counterflow transport. When equal and opposite fields are applied to the two layers, excitons flow without resistance in the condensate <cit.>. The longitudinal counterflow conductivity, σ_ CF(ω), diverges in the dc limit with σ_ CF(ω) = D_s δ(ω) + ⋯, where the weight of the delta function, D_s, represents the phase stiffness that governs the free energy cost of phase fluctuations in the condensate <cit.>. In this study, we demonstrate that D_s, in addition to a conventional contribution from band dispersion, has a significant geometric contribution that arises from the wavefunctions of the non-interacting electron and hole bands. The critical role of wavefunctions in exciton condensation is evident from quantum Hall bilayers. There, excitons exhibit macroscopic coherence and condense <cit.> despite the non-interacting bands having flat dispersion and infinite mass. In the absence of any other scale in the problem, the coherence of the condensate necessarily arises from the non-trivial wavefunctions of the Landau level that endow mobility to excitons even when the constituent electrons and holes are immobile. While the mapping from Landau levels to Bloch bands can be complex <cit.>, it is anticipated that similar phenomena can be observed in materials without a magnetic field, such as WSe_2/ MoSe_2, that exhibit non-trivial wavefunctions <cit.>. However, there are key distinctions due to finite dispersion and asymmetric electron-hole bands, and it remains to be seen if such effects preserve or diminish the geometric contribution. We address these fundamental questions in this paper. We begin by deriving the phase stiffness of a general exciton Hamiltonian, separating it into conventional and geometric components. The latter arises primarily from the non-interacting electron/hole wavefunctions. To demonstrate this effect, we utilize a tight-binding model with tunable quantum geometry, introduced in Ref. <cit.>, and minimal on-site interactions. We explore conditions under which the geometric contribution simplifies to the quantum metric, establishing its connection to flat-band superconductivity. Transitioning to a realistic continuum model with screened Coulomb interaction applicable to bilayer TMD devices, we provide estimates for the geometric contribution to stiffness in MoTe_2 homobilayers <cit.> as well as WSe_2/ MoSe_2 heterobilayers <cit.>. Finally, we propose experimental setups that can serve as validation platforms for our theory. *Phase Stiffness.— A condensate is characterized by a macroscopic many-body wavefunction with a uniform phase, θ, that spans the entire system. 
The stability of the condensate depends on the free energy associated with spatial fluctuations in the phase, θ→θ( r), which is quantified as ℱ = (1/2) D_s ∫ d r|∇θ( r)|^2, where D_s represents the phase stiffness. For IECs, the phase stiffness is calculated as a linear response coefficient when equal and opposite vector potentials are applied to the two layers <cit.>. The antisymmetric vector potential couples symmetrically to the exciton, since electron and holes have opposite charges, inducing infinitesimal phase fluctuations in the condensate. We now work towards a concrete formulation of phase stiffness. A general model Hamiltonian for the IEC is given by ℋ_ ex = ℋ_0 + ℋ_ int, where ℋ_0 = ℋ_t + ℋ_b describes the non-interacting properties with Bloch Hamiltonians {ℋ_ν} for the two layers ν = {t,b}, and ℋ_ int is a density-density interaction that gives rise to excitons. The Bloch Hamiltonian can be represented as ℋ_0 = ∑_ kΨ^†_ k H_0( k) Ψ^†_ k where H_0( k) = [H_t( k) ⊕ H_b( k)] + V_b τ_z. Here Ψ^†_ k is a spinor that has internal labels such as orbitals and spin (omitted for brevity) along with layer label ν: Ψ^†_ k = (c^_ k, α, t , ⋯, c^_ k, β, b , ⋯ )^T, and k is crystal momenta that runs over the Brillouin zone. The last term describes a bias voltage V_b that tunes the gap between the conduction and valence bands (see Fig. <ref>). With vanishing single-particle tunneling between the layers, the non-interacting model has a U(1)_t × U(1)_b symmetry that is spontaneously broken by the IEC. We now move to a Kubo formula for phase stiffness by applying an external vector potential. Density-density interactions remain unaffected by the external vector potential. Consequently, when equal and opposite A = Ax̂ are applied to the two layers, the current operators depend solely on the Bloch Hamiltonian as H_0( k, A) = H_t( k-eA/ħ) ⊕ H_b( k+eA/ħ). We assume spatially isotropic systems to suppress the tensor nature of currents and stiffness and leave the extension to anisotropic systems to App. <ref>. We find that the current operator can be expressed as j = -δℋ_0/δ A = j_P + A j_D, where the paramagnetic and diamagnetic currents are j_P = (e/ħ) ∑_ kΨ^†_ k[ τ^z ∂_kℋ_0( k) ] Ψ^_ k and j_D = -(e/ħ)^2 ∑_ kΨ^†_ k∂_k^2 ℋ_0( k) Ψ^_ k. The stiffness is determined by the Kubo formula, given by D_s = -[ ⟨ j_D ⟩ - χ_ j_P j_P( q_⊥→ 0, ω=0)]/4, where χ_ j_P j_P is the longitudinal current-current correlator <cit.>. Calculating the stiffness requires a complete enumeration of the eigenstates of the full interacting Hamiltonian ℋ_ ex. To make progress beyond the formal definition, we confine ourselves to mean-field theories where the interaction term breaks down into fermionic bilinears. We can then utilize the eigenspectrum E_m, k and eigenstates |u_m, k⟩ of the mean-field Hamiltonian to express the stiffness as: D_s = ħ^24e^2 A[ ∑_ k, m f_m, k⟨ u_m, k| ∂^2_k H_0( k) | u_m, k⟩ - ∑_ k, m, nf_m, k-f_n, kE_n, k-E_m, k|⟨ u_m, k| τ^z ∂_k H_0( k)| u_m, k⟩|^2] where f_m, k≡ f[E_m, k] is the Fermi occupation factor and A is the volume normalization factor which is the area of the sample in 2D. The stiffness probes interlayer coherence and is finite only when the interaction admits an off-diagonal exciton term Ψ^†_ k [τ^i ϕ̂( k)] Ψ^_ k with i= x, y. The matrix function ϕ̂( k) requires a self-consistent solution. Although the dependence of the phase stiffness on wavefunctions is evident in Eq. 
(<ref>) from the matrix structure of the current operators, isolating the geometric contribution requires careful consideration. The geometric and energetic terms are intertwined in Eq. (<ref>) with no clear route to separation. To tackle this challenge, we introduce a projected low-energy model, which is applicable when excitons predominantly arise from the lowest lying hole band (h) and the highest electron band (e) <cit.>. The proposed low-energy Hamiltonian is H_ ex = ∑_ kψ^†_ k[ ϵ_e( k) - Σ_e( k) φ( k); φ( k)^* ϵ_h( k) + Σ_h( k) ]ψ^†_ k where ϵ_e/h are the bare electron/hole dispersions, ψ^_ k = ( c^_ k, e, c^_ k, h )^T is the low-energy basis state, Σ_e/h( k) are the self-energies (including V_b) and φ( k) is a shorthand for φ( k) = ∑_α∈ t, β∈ b [U_t( k)]^*_e, αϕ_αβ( k) [U_b( k)]_β, h . A crucial aspect to highlight is that U_t( k) and U_b( k) are independent unitary matrices that diagonalize H_t( k) and H_b( k) respectively, which are connected through the exciton pairing function. The subscripts e and h indicate the electron and hole bands involved in the exciton pairing. It is important to note that, unlike in BCS theory where the Nambu basis introduces a particle-hole symmetry in the BdG Hamiltonian, U_t( k) and U_b( k) do not have such constraints. Hence our framework is a generalization of phase stiffness, specifically relevant to WSe_2/ MoSe_2 bilayers where the electron and hole bands originate from distinct materials. We further assume ϵ_e( k) = -ϵ_h( k) = ϵ( k), while allowing U_t( k) to differ from U_b( k). This assumption is justified as the effect of asymmetric dispersion on exciton is a well understood textbook problem <cit.> that only complicates our analysis without offering additional insight (see also Appendix <ref>). On the other hand, the presence of U_t( k) and U_b( k) has a non-trivial consequence for φ( k) in the presence of vector potential: φ( k, A) ≈φ( k) - eAħ𝒫( k) - e^2 A^22ħ^2𝒟( k), where 𝒫( k) and 𝒟( k) are the paramagnetic and diamagnetic terms that involve derivatives of U_ν( k) (see App. <ref> for the full expression). These terms are in addition to the energetic terms arising from ∂_kϵ( k) and ∂_k^2ϵ( k) in the current operator and are key to our analysis. After performing a lengthy but straightforward calculation that is detailed in Appendix <ref>, we find two main contributions to the stiffness D_s = D_s^c + D_s^g with a clear separation: D_s = 12A∑_ k∂_k_i^2 ϵ( k) v_ k^2 + 1 4A∑_ k1E( k)𝒢( k) where v_ k^2 = (1- ξ( k)/E( k))/2 is the coherence factor with ξ( k) = ϵ( k) - Σ( k), and E( k) = √(ξ( k)^2 + |φ( k)|^2) is the mean-field quasiparticle dispersion. The second part of the equation includes the geometric quantity 𝒢( k) = Re[𝒟̃( k)] - | ξ( k) E( k) Re[𝒫̃( k)] - i Im[𝒫̃( k)] |^2 where D̃( k) = 𝒟( k) φ( k)^* and 𝒫̃( k) = 𝒫( k) φ( k)/|φ( k)|. Eq. (<ref>) is one of the main results of our paper. A few comments are in order. The geometric contribution in Eq. (<ref>) arises explicitly from the Taylor expansion of the off-diagonal term φ( k, A). It is important to note that if the wavefunctions were trivial, say independent of k, this contribution would vanish since 𝒫( k) = 𝒟( k) = 0. To further gain some understanding of its origin, we can consider the limit of identical particle-hole bands. 
While this assumption may not be applicable to most real materials, if the electron and hole wavefunctions were indeed identical, the term 𝒢( k) reduces precisely to the quantum metric, g( k), which has been discussed in the context of flat-band superconductivity. There, it has been established that D_s in attractive Hubbard models with flat band is proportional to the magnitude of the attractive interaction |V| and the trace of the quantum metric g( k), D_s ∝ |V| ∑_ k g( k)/A <cit.>. In addition, our result establishes a generalization of excitonic phase stiffness discussed in the context of quantum Hall bilayers <cit.>. Having established the geometric enhancement of stiffness, we now illustrate this effect using two models. *Hubbard Model with tunable quantum metric.— We consider a tunable quantum metric model that has two orbitals on a square lattice with the Bloch Hamiltonian H(ζ, k) = 2t(2-p_ k) + t_F cos(ζ p_ k) σ^x + t_F sin(ζ p_ k) σ^y where the Pauli matrices σ^i act in orbital space, p_ k is a periodic function given by p_ k = cos k_x + cos k_y and the parameter ζ controls the quantum geometry by introducing long range hoppings <cit.>. More precisely, ζ is an overall scaling factor for the quantum geometric tensor 𝒬_ζ( k ) = ζ^24[ sin^2 k_x sin k_x sin k_y; sin k_x sin k_y sin^2 k_y ]. Note that this tensor is a real tensor because both inversion and time-reversal symmetries are preserved. The Berry curvature F( k) = - Im[𝒬( k)]_xy/2 vanishes identically while the quantum metric g_μν( k) = Re[𝒬_ k]_μν is finite. Another convenient aspect of this model is that the band dispersions ϵ_±, k = 2t(2-p_ k) ± t_F are independent of ζ. This permits the use of ζ to tune quantum geometry without affecting the band dispersion, capturing the discussion around Eq. (<ref>). We consider an interlayer on-site orbital-diagonal Hubbard interaction ℋ_ int = V ∑_ i,αn̂_ i, α, tn̂_ i, α, b. The labels i, α pertaining to unit cell index and orbitals, are not crucial for our discussion as long as the interaction is interlayer in character. Next, we decouple the four-fermion interaction into uniform Hartree and Fock (exciton) channels and arrive at the mean-field Hamiltonian ℋ_ MF = ∑_ kΨ^†_ k H_ MF( k) Ψ^†_ k where H_ MF( k) = [H(ζ_t, k) ⊕ (-H(ζ_b, k))] + τ^z V_b + τ^x ϕ . Here the tilde H̃ denotes the modified Bloch Hamiltonian that includes Hartree shifts coming from V c^†_t c^_t c^†_b c^_b ≈ V ⟨ c^†_t c^_t ⟩ c^†_b c^_b + V c^†_t c^_t ⟨ c^†_b c^_b ⟩ with appropiate internal labels (see App. <ref>). The last term breaks the U(1)_t × U(1)_b symmetry as discussed before. The exciton field ϕ̂ is defined by the gap equation ϕ = - (V/2N) ∑_ k, α⟨ c^†_ k, α, t c^†_ k, α, b⟩ where N is the number of unit cells (details in App. <ref>). The relatively simple form of the exciton field ϕ_αβ( k) = ϕδ_αβ in the toy model can be attributed to three key factors. The on-site nature makes momentum independent ϕ̂ a natural ansatz. The orbital-diagonal aspect n̂_α,tn̂_α,b removes the possibility of off-diagonal elements in ϕ̂. Lastly, the two orbitals in the toy model are related by an in-plane inversion which ultimately reduces the exciton field to be proportional to identity with a single scalar ϕ̂ = ϕ1. With this significant reduction in complexity, we arrive at φ( k) = ϕθ( k) in the projected model in Eq. (<ref>) where θ( k) is the contraction of electron and hole wavefunctions, defined as θ( k) = ∑_α [U_t( k)]^*_e, α [U_b( k)]_α,h. 
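The closed forms above are straightforward to verify numerically. The following sketch is our own illustration (t, t_F, ζ and the k-point are assumed values; band ordering and gauge follow the numerical eigensolver): it extracts the diagonal quantum metric from wavefunction overlaps and compares the like-band interlayer overlap at ζ_t=1, ζ_b=ζ with |cos[(ζ-1)p_k/2]|, which agrees in magnitude with θ( k) up to band-labelling conventions.

```python
# Sketch (not from the paper): check the closed-form quantum geometric tensor
# and the interlayer overlap of the tunable-metric model.  t, t_F, zeta and the
# k-point are assumed values; band ordering and gauge follow numpy's eigh.
import numpy as np

t, tF = 1.0, 0.5
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def lower_band(kx, ky, zeta):
    p = np.cos(kx) + np.cos(ky)
    H = 2 * t * (2 - p) * np.eye(2) + tF * (np.cos(zeta * p) * sx + np.sin(zeta * p) * sy)
    return np.linalg.eigh(H)[1][:, 0]        # lowest-band eigenvector

def g_xx(kx, ky, zeta, d=1e-3):
    # gauge-invariant fidelity estimate: |<u(k)|u(k + d e_x)>|^2 ~ 1 - g_xx d^2
    ov = np.vdot(lower_band(kx, ky, zeta), lower_band(kx + d, ky, zeta))
    return (1.0 - abs(ov)**2) / d**2

kx, ky, zeta = 0.9, 0.4, 2.0
p = np.cos(kx) + np.cos(ky)

print(g_xx(kx, ky, zeta), zeta**2 * np.sin(kx)**2 / 4)              # should agree
ov = abs(np.vdot(lower_band(kx, ky, 1.0), lower_band(kx, ky, zeta)))
print(ov, abs(np.cos((zeta - 1.0) * p / 2)))                        # should agree
```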
This overlap function influences both, the order parameter ϕ and the phase stiffness D_s. We start by considering symmetric bilayers, where both layers are described by the same parameter ζ. Upon tuning the bias voltage V_b at a fixed interaction strength (chosen to be U/t=6 for illustration), excitons arise spontaneously characterized by a finite value of ϕ (see Fig.<ref>a). The resulting stiffness D_s has a clear correlation with exciton formation, but what stands out is the significant enhancement contributed by the geometric term D_s^g. The extra contribution is indeed proportional to the trace of the quantum metric in this particular case, as Eq.(<ref>) simplifies to D_s^g = (ϕ^2/4A) ∑_ k g( k)/E( k) <cit.>. While the geometric contribution dominates at higher bias voltage |V_b|, it is important to note that a high |V_b| results in higher densities of electron-hole pairs and may ultimately drive the system to the exciton Mott transition <cit.>. We now consider asymmetric wavefunctions, with ζ_t = 1 for the top layer and ζ_b = ζ in the bottom layer. This configuration results in a particle-hole symmetric spectrum but with distinct electron and hole wavefunctions. The overlap θ( k) = cos[(ζ-1) p_ k/2] exhibits significant variation across the Brillouin zone (see Fig. <ref>c). This variation plays a crucial role in the quasiparticle energy E( k) = √(ξ( k)^2 + ϕ^2 |θ( k)|^2), which, in turn, affects the gap equation 1/V = (1/N)∑_ k1/[E( k)]. The reduction in |θ( k)| from unity leads to a decrease in ϕ compared to the particle-hole symmetric case where θ( k)=1 (see Fig. <ref>d and Fig. <ref>a). The stiffness is also affected, as shown in Fig. <ref>e. Two factors contribute to this alteration. First, there is the pre-factor ϕ, and secondly, there is 𝒢( k), which explicitly depends on ζ through θ( k), 𝒫( k), and 𝒟( k). Although it may be model specific, we find that the first contribution is symmetric around ζ=1 while the geometric contibution is skewed towards higher ζ (see Fig. <ref>e). Additionally, it is important to highlight that the geometric contribution is not necessarily positive definite, see Fig. <ref>b. This stands in contrast to the geometric contributions reported in superfluid stiffness <cit.> and electron-phonon coupling <cit.>. Motivated by the valuable insights from the minimal model, particularly in isolating the wavefunction dependence, we now turn our attention to TMD bilayers. There are certain limitations of the model that hinder its direct applicability. Specifically, we need to include a gated Coulomb interaction with momentum dependence, V_ q, which will make the exciton field momentum dependent. Additionally, we need to account for inter-layer polarization along with a realistic orbital structure in the interaction. *Continuum Model for TMDs.— To quantify the role of quantum geometric effects in TMD exciton bilayer <cit.> we consider the continuum Hamiltonian following Ref. <cit.>. For the sake of simplicity we assume that spin and valley are locked, which is justified for the valence band where the Ising spin-orbit coupling (SOC) gap is of the order of 180meV in MoX_2 and 440meV in WX_2 <cit.>. We note that it does not apply to the conduction band where the Ising SOC is only ≈ 20meV <cit.>. 
Within these assumptions, we focus on the continuum k.p Hamiltonian for a given spin and valley which to first order in k reads H_ν() = Δ_νσ_z + v_ν𝐤·σ, with σ^i Pauli matrices in the internal conduction and valence band degrees of freedom and dispersion ϵ_ν±=±√(Δ^2_ν+(v_ν k)^2) with k=| k|. The model consists of a gapped Dirac cone which describes the valley optical selection rules <cit.>. We neglect the quadratic trigonal warping term and the quadratic particle-hole mass imbalance since it does not play an important role in our description. The geometric properties of the model are encoded in the form factors F^ν_,=U^†_ν()U_ν(+) where U_ν() is the unitary transformation that diagonalizes the single particle Hamiltonian of layer ν=t,b. In the following we compare the quadratic band calculation obtained by approximating ϵ_ν±≈±Δ_ν± k^2/2m_ν with m_ν=v^2_ν/m_ν and the gapped Dirac cone in Eq. (<ref>). We overlay this model with a gated Coulomb interaction ℋ_ int = 12A∑^Λ_∑_ν,ν^'=t,bV^νν^'_n̂_,νn̂_-,ν^' where n̂_,ν is the layer resolved density operator n̂_,ν=∑^Λ_∑_a=K,K^'ψ^†_,aνψ_+,aν and V^tt_=V^bb_= e^2/ϵϵ_0tanhqξ/2/2q, V^tb_=V^bt_=e^-dqV^tt_. Here d is the inter-layer distance, Λ is the UV cutoff, and ξ is the screening length of the bilayer which is defined as the distance between the bilayer and the metallic gates. As representative values <cit.> we set ξ=12nm, and the relative dielectric constant of the environment ϵ≈6 and interlayer distance d≈1nm. We focused on the regime of low-density of electron-hole pairs n_ e-h=ϕ_z/2≤ 0.075 to reduce screening effects leading to the exciton Mott transition <cit.>. We now proceed with a Hartree-Fock calculation. We find that while the Hartree term vanishes as a result of the charge neutrality condition, the Fock term gives rise to the self energy correction to the single-particle Hamiltonian. The self-consistency equations for the Fock self-energy are solved employing an iterative scheme, for more details on the Hartree-Fock calculation we refer to App. <ref>. The Coulomb interaction induces the formation of intravalley excitons as well as charge transfer between the two layers. These two were described in the previous section as ϕ and ϕ_z respectively. The observable ϕ_z=⟨τ^zσ^0⟩ is the charge unbalance quantifying the number of electron-hole pairs n_ e-h and the operators τ^±σ^a probe spontaneous interlayer coherence where τ^± = (τ^x± iτ^y)/2 are raising/lowering operators in the layer degree of freedom and σ^a are Pauli matrices in the band space. A finite expectation value ⟨τ^±σ^a⟩≠ 0 corresponds to an interlayer intravalley exciton condensate which breaks the U(1)_e/h symmetry. The first and second panels of Fig.<ref> show the evolution of the quantities ϕ which choosing the gauge where the interlayer symmetry is broken along τ^x is defined as ϕ=√(∑_a=x,y,z⟨τ^xσ^a⟩^2) and ϕ_z as a function of V_b for the homobilayer (Fig.<ref>a) and heterobilayer (Fig.<ref>b) case. We notice that for the continuum theory the energy V_b is given by V_b=(Δ_t+Δ_b-2E_z)/2 with E_z electric displacement field applied to the bilayer. Above a critical value of the electrostatic potential V_b^*≈ 70/75meV the system turns into a semiconductor where the energy gap E_gap grows linearly with the applied bias. The transition from the excitonic condensate to the semiconductor takes place without gap closure, see the middle panel of Fig.<ref>a-b, and is characterized by a first order jump of the exciton order parameter. Finally, the phase stiffness is calculated using Eq. 
(<ref>) where the expectation value is taken with the Hartree-Fock wavefunctions. The results are shown in the right panel of Fig. <ref>a-b blue data for parabolic bands and green ones for a gapped Dirac cone. We emphasize that the non-trivial structure of the wavefunctions appears in the geometric contribution of the superfluid stiffness, while it does not change significantly the exciton binding energy (energy gap) and the size of the order parameter. Remarkably, the geometric component increases D_s and, correspondingly, the BKT transition temperature (T_ BKT) related to the superfluid stiffness by the Nelson-Kosterlitz relation k_BT_ BKT=π D_s(T_ BKT^-)/2 <cit.>. Approximating D_s(T_ BKT^-) with its T=0 value (from last panel in Fig. <ref>b), we infer that geometric contribution increases T_ BKT roughly from 12 to 36 Kelvin at V_b ≈ 40meV. This threefold increase should be observable in experiments. *Conclusion.— Phase stiffness in IECs is influenced by the underlying electron and hole wavefunctions. In the first section with the minimal model, we demonstrated the significant impact of wavefunctions on both the exciton order parameter, ϕ, and the phase stiffness, D_s. This effect is particularly pronounced when considering asymmetric bands. To delve further, we investigated two limiting scenarios: one involving local Hubbard interaction and the other with gated Coulomb interaction projected onto the low-energy bands. Remarkably, in both cases, we find geometric contributions that play a vital role. Our findings complement existing research on the effects of quantum geometry in exciton spectrum <cit.>, exciton wavefunction <cit.> and its possible realizations in moire materials <cit.>. Importantly, we highlight a previously overlooked geometric component that differentiates semi-conductors from parabolic bands. Bilayer TMD devices offer an ideal platform to validate our predictions. Although exciton phase stiffness has been measured using tunneling experiments in quantum Hall bilayers, it remains to be explored whether this setup can be extended to TMD bilayers. Critical counterflow current from non-linear transport may also provide clues for the said geometric contribution. *Acknowledgment.— Research on geometric properties of exciton condensates is supported as part of Programmable Quantum Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under award DE-SC0019443. We benefited from discussions held at the 2023 Quantum Geometry Working Group meeting that took place at the Flatiron institute. We acknowledge discussions with Kin Fai Mak, Pavel Volkov and Yongxin Zeng. D.G. and R.Q. acknowledge support from the Flatiron Institute, a division of the Simons Foundation. Supplementary material for “title ” Nishchhal Verma,^1,* Daniele Guerci,^2,* and Raquel Queiroz^1,2 ^1Department of Physics, Columbia University, New York, NY 10027, USA ^2Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA § MEAN-FIELD PHASE STIFFNESS §.§ Kubo formula Exciton condensates exhibit the remarkable phenomenon of dissipationless counterflow transport when equal and opposite fields are applied to the two layers. The asymmetric field drives excitons in a coherent fashion. 
The response kernel to layer anti-symmetric vector potential is defined as <cit.> j_ CF = -4e^2ħ^2 D_s A where D_s is the phase stiffness and A=A x̂ is the external vector potential that is applied along +x̂ in top layer and -x̂ in bottom layer. As a matter of fact, this can be an in-plane magnetic field as A=zAx̂ gives B_y = ∂_z A_x where z is the direction normal to the interface <cit.>. The stiffness is the weight of the delta function in counterflow conductivity Re[σ_ CF(ω)] ∼ D_s δ(ω). By utilizing the Kramers-Kronig relation, D_s can be extracted from the 1/ω tail of Im[σ_CF(ω)], which follows Im[σ_CF(ω)] ∼ D_s/ω. Stiffness controls the cost of phase fluctuations of the condensate and is formally defined by the Kubo formula <cit.> D_s = -14( ⟨ j_D ⟩ - χ_ j_P j_P( q_⊥ = 0, ω=0) ) . The diamagnetic (j_D) and paramagnetic (j_P) currents are given by a Taylor expansion of the Hamiltonian ℋ_ ex(A) = ∑_ kΨ^†_ k[ [ H_t( k - eA/ħ) 0; 0 H_b( k + eA/ħ) ] + V_b τ^z ] Ψ^†_ k + ℋ_ int = ℋ_ ex(0) - eAħ∑_ kΨ^†_ k[ ∂_k H_t( k) 0; 0 -∂_k H_b( k) ]Ψ^_ k + e^2A^22ħ^2∑_ kΨ^†_ k[ ∂_k^2 H_t( k) 0; 0 ∂_k^2 H_b( k) ]Ψ^_ k which gives j_P = eħ∑_ kΨ^†_ k[ ∂_k H_t( k) 0; 0 -∂_k H_b( k) ]Ψ^_ k , j_D = -e^2ħ^2∑_ kΨ^†_ k[ ∂_k^2 H_t( k) 0; 0 ∂_k^2 H_b( k) ]ψ^_ k. Next, in order to calculate the expectation values in Eq. (<ref>), we assume a mean-field Hamiltonian with eigenspectrum E_m, k and eigenstates | u_m, k⟩} to find ⟨ j_D ⟩ = 1A∑_ k, m f[E_m, k] ⟨ u_m, k| ∂^2_k H_0( k) | u_m, k⟩, χ_ j_P j_P = 1A∑_ k, m≠ nf[E_m, k]-f[E_n, k]E_n, k-E_m, k|⟨ u_m, k| τ^z ∂_k H_0( k) | u_n, k⟩|^2 which gives Eq. (<ref>) of the main text. The expression not only captures the conventional contribution coming from energies but also includes all geometric contributions that are hidden inside the matrix elements. These geometric factors arise from the underlying electron and hole wavefunctions that are not directly visible in this equation. We will now describe a projected mean-field model which systematically uncovers these geometric contributions. §.§ Projected Phase Stiffness If the electron and hole bands are sufficiently isolated from other bands, we can focus on the low-energy Hamiltonian and couple to the external vector potential ℋ_ ex(A) = ∑_ kψ^†_ k[ ϵ( k-eA/ħ) + V_b - Σ( k) φ( k,A); ϕ φ( k,A)^* V_b - ϵ( k+eA/ħ) + Σ( k) ]ψ^_ k . The vector potential dependence on the off-diagonal component is key to the geometric contribution. The factor φ( k,A) gives rise to additional terms in the current operator coming from the expansion around A=0: φ( k, A) = ∑_αβ [U_t( k-eA/ħ)]^*_e, αϕ_αβ( k) [U_b( k+eA/ħ)]_β,h = φ( k) - e Aħ𝒫( k) - e^2 A^22ħ^2𝒟( k) where 𝒫( k) and 𝒟( k) are the paramagnetic and diamagnetic parts of the current 𝒫( k) = ∑_αβ∂_k [U_t( k)]^*_e, αϕ_αβ( k) [U_b( k)]_β, h - [U_t( k)]^*_e, αϕ_αβ( k) ∂_k [U_b( k)]_β, h 𝒟( k) = ∑_αβ 2 ∂_k [U_t( k)]^*_e, αϕ_αβ( k) ∂_k [U_b( k)]_β, h - ∂_k^2[U_t( k)]^*_e, αϕ_αβ( k) [U_b( k)]_β, h - [U_t( k)]^*_e, αϕ_αβ( k) ∂_k^2 [U_b( k)]_β, h . The full current operator is obtained by the expansion (with ξ( k) = ϵ( k) +V_b - Σ( k)) ℋ_ k(A) ≈[ ξ( k) φ( k); φ( k)^* - ξ( k) ] - e Aħ[ ∂_kϵ( k) 𝒫( k); 𝒫( k)^* ∂_kϵ( k) ] - e^2A^22ħ^2[ -∂_k^2 ϵ( k) 𝒟( k); 𝒟( k)^* ∂_k^2ϵ( k) ] together with the identity j = -δℋ/δ A = j_P + A j_D that gives j_P = eħ∑_ kψ^†_ k[ ∂_kϵ( k) 1 + Re[𝒫( k)] σ_x - ϕ Im[𝒫( k)] σ_y ] ψ^_ k≡∑_ kψ^†_ kĵ_P( k) ψ^_ k j_D = e^2ħ^2∑_ kψ^†_ k[ Re[𝒟( k)] σ_x - Im[𝒟( k)] σ_y - ∂_k^2ϵ( k) σ_z ] ψ^_ k≡∑_ kψ^†_ kĵ_D( k) ψ^_ k . 
To calculate the stiffness, we have to insert the current operator in Eq. (<ref>) into the Kubo formula in Eq. (<ref>). However, before doing that, let us establish some useful relations. The low-energy Hamiltonian consists of two eigenstates |u_±, k⟩ with corresponding energies ± E( k) where E( k) = √(ξ( k)^2 + |φ( k)|^2 ). The Kubo formula then simplifies to D_s = -14A∑_ k( ⟨ u_-, k | ĵ_D( k) | u_-, k⟩ - |⟨ u_-, k | ĵ_P( k) | u_+, k⟩|^2 E( k)) The first term in stiffness involves the ground state expectation value of the diamagnetic current at zero temperature (T=0). Since the diamagnetic current operator in Eq. (<ref>) has all three Pauli matrices, we first determine the matrix elements useful for this calculation ⟨ u_-, k | σ_x | u_-, k⟩ = - Re[φ( k)] E( k) , ⟨ u_-, k | σ_y | u_-, k⟩ = Im[φ( k)] E( k) , ⟨ u_-, k | σ_z | u_-, k⟩ = - ξ( k) E( k) . and then use these matrix elements to get ⟨ u_-, k | ĵ_D | u_-, k⟩ = e^2ħ^2[ Re[𝒟( k)] ⟨ u_-, k | σ_x | u_-, k⟩ - Im[𝒟( k)] ⟨ u_-, k | σ_y | u_-, k⟩ - ∂_k^2ϵ( k) ⟨ u_-, k | σ_z | u_-, k⟩] = -e^2ħ^2( 1E( k) Re[𝒟( k)φ( k)^*] - ∂_k^2ϵ( k) ξ( k) E( k) ). Next, we move to the current-current correlator. As is evident from Kubo formula in Eq. (<ref>) and current operator in Eq. (<ref>), we require the following matrix elements: ⟨ u_-, k | σ_x | u_+, k⟩ = Re[φ( k)] |φ( k)| ξ( k) E( k) - i Im[φ( k)] |φ( k)| , ⟨ u_-, k | σ_y | u_+, k⟩ = Im[φ( k)] |φ( k)| ξ( k) E( k) + i Re[φ( k)] |φ( k)| which yield the expression ⟨ u_-, k | ĵ_P | u_+, k⟩ = eħ( Re[𝒫( k)] ⟨ u_-, k | σ_x | u_+, k⟩ - Im[𝒫( k)] ⟨ u_-, k | σ_y | u_+, k⟩) = eħ[ ξ( k) E( k) Re[φ( k) 𝒫( k)]|φ( k)| - i Im[ φ( k) 𝒫( k) ]|φ( k)|] Substituting this into the current-current correlator, we obtain χ_jj = ∑_ k |⟨ u_-, k | ĵ_P | u_+, k⟩|^2 E( k) = e^2ħ^2∑_ k1E( k)| ξ( k) E( k) Re[φ( k) 𝒫( k)]|φ( k)| - i Im[ φ( k) 𝒫( k) ] φ( k)|^2 . We emphasize that this piece survives only because of geometric terms. Had it been a conventional exciton condensate with no quantum geometry in electron or hole wavefunction, this term would identically be zero. With all the pieces in place, we insert eqns.(<ref>) and (<ref>) into Eq. (<ref>) to find D_s = 14A∑_ k( Re[𝒟( k)φ( k)^*] E( k) - ∂_k^2ϵ( k) ξ( k) E( k) ) - 14A∑_ k1 E( k)| ξ( k) E( k) Re[φ( k) 𝒫( k)]|φ( k)| - i Im[ φ( k) 𝒫( k) ] |φ( k)| |^2 = 14A∑_ k∂_k^2ϵ( k) ( 1- ξ( k) E( k) ) + 1 E( k)( Re[𝒟( k)φ( k)^*] - | ξ( k) E( k) Re[φ( k) 𝒫( k)]|φ( k)| - i Im[ φ( k) 𝒫( k) ] |φ( k)| |^2 ) The first term depends only on the energies, while the second term involves a complex geometric quantity that is different from a metric. §.§ Reduction to Quantum Metric We can validate our calculations by making two assumptions: first, ϕ̂( k) = ϕ1 and second, the two layers are identical, giving [U_t( k)]_e,α = [U_b( k)]_h,α ∀ α, k. We can express the matrix elements in a more concise form (dropping e, h labels): θ( k) = 1, 𝒫( k) = 2ϕ∑_α∂_k U^*_α( k) U_α( k) ≡ 2ϕ⟨∂_k u_ k | u_ k⟩, 𝒟( k) = 4 ϕ∑_α∂_k U^*_α( k) ∂_k U_α( k) ≡ 4ϕ⟨∂_k u_ k | ∂_k u_ k⟩ and the resulting geometric contribution to stiffness greatly simplifies D_s^g = 14A∑_ k1 E( k)( Re[𝒟( k)] - | Im[ 𝒫( k) ] |^2 ) = 1A∑_ kϕ^2 E( k)( Re[⟨∂_k u_ k | ∂_k u_ k⟩] - | ⟨∂_k u_ k | u_ k⟩|^2 ) = 1A∑_ kϕ^2 E( k) Re[𝒬( k)]. Here Re[𝒬( k)] = g( k) is the quantum metric of the electron (or hole) band. 
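For identical layers the reduced expression can be evaluated directly on a momentum grid. The sketch below is illustrative only: ϕ and ξ( k) are put in by hand rather than solved self-consistently, the self-energy is neglected, the unit-cell area is set to one (A=N), and only the x-direction is kept, so the numbers are not meant to reproduce the figures.

```python
# Sketch: assemble D_s = D_s^c + D_s^g on a momentum grid for identical layers,
# using the reduction D_s^g = (1/A) sum_k phi^2 g(k)/E(k) derived above.
# phi, V_b, t, t_F, zeta and the grid are assumed values; the self-energy is
# neglected, hbar = e = 1 and the unit-cell area is set to one (A = N).
import numpy as np

t, tF, zeta = 1.0, 0.5, 2.0
phi, Vb = 0.4, -4.0                      # exciton amplitude and bias, put in by hand
N1d = 200
ks = 2 * np.pi * np.arange(N1d) / N1d
KX, KY = np.meshgrid(ks, ks, indexing="ij")

p = np.cos(KX) + np.cos(KY)
eps = 2 * t * (2 - p) + tF               # projected electron band (an assumption)
xi = eps + Vb                            # xi(k) with the self-energy dropped
E = np.sqrt(xi**2 + phi**2)
v2 = 0.5 * (1 - xi / E)                  # coherence factor v_k^2

d2eps = 2 * t * np.cos(KX)               # d^2 eps / d k_x^2
g_xx = zeta**2 * np.sin(KX)**2 / 4       # closed-form quantum metric

N = KX.size
Ds_conv = 0.5 * np.sum(d2eps * v2) / N   # conventional piece
Ds_geom = np.sum(phi**2 * g_xx / E) / N  # geometric piece
print(Ds_conv, Ds_geom, Ds_conv + Ds_geom)
```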
§.§ Spatially Anisotropic Systems Underneath all our discussion so far, we have assumed isotropic systems where the vector potential is applied along x̂ with A = A x̂ and the longitudinal response is calculated along x̂ while suppressing all spatial indices. In this section, we will outline the extension to spatially anisotropic systems. Introducing indices i,j = { x, y}, we find that the phase stiffness in Eq. (<ref>) becomes a tensor [D_s]_ij = -14A( ⟨ [j_D]_ij⟩ - χ_ [j_P]_i [j_P]_j( q_⊥=0, ω=0) ) = ħ^24e^2A[ ∑_ k, m f[E_m, k] ⟨ u_m, k| [ĵ_D( k)]_ij| u_m, k⟩ . - ∑_ k, m≠ nf[E_m, k]-f[E_n, k]E_n, k-E_m, k⟨ u_m, k| [ĵ_P( k)]_i | u_n, k⟩⟨ u_n, k| [ĵ_P( k)]_j | u_m, k⟩] where the current operators are [ĵ_P( k)]_i = eħ[ ∂_k_iH_t( k) 0; 0 -∂_k_iH_b( k) ], [ĵ_D( k)]_ij = -e^2ħ^2[ ∂_k_i∂_k_jH_t( k) 0; 0 ∂_k_i∂_k_jH_b( k) ]. The projected stiffness in Eq. (<ref>) follows along similar lines [D_s]_ij = 14A∑_ k[ ∂_k_i∂_k_jϵ( k) ( 1- ξ( k) E( k) ) + ∑_ k1 E( k)( Re[[𝒟( k)]_ijφ( k)^*] - [ℙ( k)]_i [ℙ( k)]_j^* ) ] where [ℙ( k)]_i is [ℙ( k)]_i = ξ( k) E( k) Re[φ( k) [𝒫( k)]_i]|φ( k)| - i Im[ φ( k) [𝒫( k)]_i ] |φ( k)| and the functions [𝒫( k)]_i and [𝒟( k)]_ij are [𝒫( k)]_i = ∑_αβ[ ∂_k_i [U_t( k)]^*_e, αϕ_αβ( k) [U_b( k)]_β, h - [U_t( k)]^*_e, αϕ_αβ( k) ∂_k_i [U_b( k)]_β, h] [𝒟( k)]_ij = ∑_αβ[ ∂_k_i [U_t( k)]^*_e, αϕ_αβ( k) ∂_k_j [U_b( k)]_β, h + ∂_k_j [U_t( k)]^*_e, αϕ_αβ( k) ∂_k_i [U_b( k)]_β, h - ∂_k_i∂_k_j [U_t( k)]^*_e, αϕ_αβ( k) [U_b( k)]_β, h - [U_t( k)]^*_e, αϕ_αβ( k) ∂_k_i∂_k_j [U_b( k)]_β, h]. Lastly, the BKT temperature is determined by the determinant of the stiffness tensor k_B T_c = (π/2) √( det [D_s(T_c^-)]_ij). § MEAN-FIELD THEORY FOR THE HUBBARD INTERACTION We decouple the four-fermion interaction with a spatially uniform ansatz V_αβ c^†_ i, α, t c^_ i, α, t c^†_ i, β, b c^_ i, β, b ≈ V_αβ n_α, t c^†_ i, β, b c^_ i, β, b + V_αβ c^†_ i, α, t c^_ i, α, t n_β, b + ϕ_αβ c^†_ i, β, b c^_ i, α, t + ϕ_αβ^† c^†_ i, α, t c^_ i, β, b where { n_α, ν} and {ϕ_αβ} are given by n_α, ν = 1A∑_ i⟨ c^†_ i, α, ν c^_ i, α, ν⟩ = 1N∑_ k⟨ c^†_ k, α, ν c^_ k, α, ν⟩, ϕ_αβ = - V_αβA∑_ i⟨ c^†_ i, α, t c^†_ i, β, b⟩ = - V_αβN∑_ k⟨ c^†_ k, α, t c^†_ k, β, b⟩. Here A = N Ω is the area of the system, N is the number of unit cells and Ω is the area of unit cell. This pre-factor serves as a normalization factor. When we substitute this result back into the full Hamiltonian, we obtain the Hartree shifts [H_t( k)]_αα^' = [H_t( k)]_αα^' + δ_α,α^' V_αβ n_β, b, [H_b( k)]_ββ^' = [H_b( k)]_ββ^' + δ_β,β^' V_αβ n_α, t in addition to the interlayer exciton term ∝ϕ_αβ c^†_ i, β, b c^_ i, α, t. Combining the two, we get Eq. (<ref>) of the main text ℋ_ MF = ∑_ k , ν = {t,b}Ψ^†_ k,ν[ H_ν( k) δ_ν,ν^' + V_b τ^z_ν,ν^' + ϕ̂(1-δ_ν,ν^') ] Ψ^_ k,ν^'. To establish the notation, let us review the simplest model consisting of two identical bands with mass m: one particle-like and one hole-like, that belong to two different layers. We consider the system at charge neutrality, which means that the sum of the particle density n_t and the hole density n_b equals 1, and introduce an inter-layer bias V_b. The interaction term is U n̂_t (n̂_b-1), specifically chosen to vanish when the lower band is fully occupied (n_b=1). We then decouple the interaction into two components: a real exciton field denoted by ϕ, which can be expressed as ϕ = -U (⟨ c^_b c^_t⟩ + ⟨ c^†_t c^†_b⟩)/2, and an inter-layer polarization field denoted by ϕ_z which is ϕ_z = U ⟨ c^†_t c^_t⟩. 
Combining these, we obtain the mean-field Hamiltonian ℋ_MF = ∑_ kψ^†_ k[ ( k^22m + V_b - ϕ_z ) τ_z + ϕτ_x ] ψ^_ k. where ψ^_ k = ( c^_e, k, c^_h, k )^T (since electron/hole labels are identical to the layer labels). Next, we define ξ_ k = k^2/2m + h - ϕ_z and E_ k = √(ξ_ k^2 + ϕ^2) to write the self-consistent equations (at T=0) for ϕ and ϕ_z 1V = 1N∑_ k12E_ k, ϕ_z = V2N∑_ k(1 - ξ_ kE_ k). The solutions can be obtained numerically with a lattice regularization k^2/2m →ϵ_ k = 2t(2-cos k_x-cos k_y) where t can be thought of as a nearest-neighbor hopping amplitude on a square lattice. The multi-band generalization follows along similar lines. First we compute the eigenvalues and eigenvectors { E_m( k), | u_m, k⟩} of the mean-field Hamiltonian. The mean-field states follow the thermal occupation ⟨γ^†_ k, m γ^_ k, m^'⟩= f[E_m( k)]δ_m,m^' where f is the Fermi function and the operators {γ_ k} are related to {c_ k} by a unitary transformation. We use the new operators {γ_ k} to rewrite Eq. (<ref>) which turns out to have a concise matrix structure ϕ_αβ = - V_αβN∑_ k f[E_m( k)] ⟨ u_m, k| ∂ℋ_MF∂ϕ_αβ^*| u_m, k⟩, n_α,t = V_αβN∑_ k f[E_m( k)] ⟨ u_m, k| ∂ℋ_MF∂ n_β,b| u_m, k⟩. The equation is dependent on {n_α,t, ϕ_αβ} on both sides, thus requiring an iterative approach. We start with a guess and use function in Mathematica to find the solution. The solutions are shown in Fig. <ref>. § EXCITONS WITH ASYMMETRIC BANDS Ignoring wavefunctions, we examine the case of asymmetric mass which is a common scenario in most semiconductors where the conduction and valence bands exhibit different dispersions, characterized by m_e ≠ m_h. It leads to a slightly generalized form of Eq. (<ref>) with the mean-field Hamiltonian: ℋ_ MF = ∑_ kψ^†_ k[ k^22M1 + ( k^22m + V_b - ϕ_z ) τ_z + ϕτ_x ] ψ^_ k. Here, M^-1= (m_e-m_h)/2m_em_h and m^-1 = (m_e+m_h)/2m_em_h are effective masses. The inclusion of the identity term modifies the mean-field dispersion relation E_±( k) = ξ_M( k) ± E( k), where E( k) = √(ξ( k)^2 + ϕ^2) represents the previous quasiparticle dispersion and ξ_M( k) = k^2/2M. The self-consistent order parameter equation changes as well, mainly via the thermal factors (which were suppressed in Eq. (<ref>)) 1V = 1N∑_ k f[E_-( k)]- f[E_+( k)] 2 E( k) . It is easy to check that the equation reduces to Eq. (<ref>) if E_+( k) = - E_-( k)=E( k). The effects stemming from the asymmetry can be visualized as a momentum-dependent Zeeman term (in analogy to the one encountered in BCS theory). It follows the Chandrasekhar-Clogston route to destroy the condensate if the asymmetry is significant <cit.>. Phase stiffness is modified as well. Visualizing it as convolution of ϕ with the counterflow current operator ĵ_ CF, it is expected that D_s is enhanced if one of the bands is lighter. § CONTINUUM MODEL FOR EXCITON CONDENSATION IN AA-STACKED TMDS To start with we consider spinless electrons described by the effective Hamiltonian: H=∑^Λ_ ,aψ^†_, a[ H_t,a()-μ+E_z 0; 0 H_b,a()-μ-E_z ]ψ_, a +1/2A∑^Λ_∑_ν,ν^'=t,bV^νν^'_n_,νn_-,ν^', where ψ_k,a is a 4-dim spinor in the sublattice and layer indeces, ψ_k,a=[ψ_k,aAt,ψ_k,aBt,ψ_k,aAb,ψ_k,aBb], K and K' refers to the two different valleys, Λ is the UV cutoff and E_z the electric displacement field controlling the energy offset between the two layers. The interaction is obtained by taking the Fourier transform of V^tt_r=V^bb_r= e^2/(4πϵ_0ϵ r) and V^tb_r=V^bt_r= e^2/(4πϵ_0ϵ√(r^2+d^2)) with d interlayer distance. 
Knowing that ∫d^2 q/2πe^-dq+i q· r/q =∮_|z|=12dz/2π r1/z^2+1-2dz/(ir)=1/√(r^2+d^2), we readily find: V^tt_=V^bb_=V_= e^2/2ϵ_0ϵ q, V^tb_=e^-qdV_. Accounting for the presence of the metallic gates at distance ξ from the sample we replace 1/q→tanh(qξ/2)/q, which regularizes the singularity at q=0. We use σ for sublattice while τ for layer. The chemical potential μ is fixed by the charge neutrality condition: 1/N∑^Λ_⟨ψ^†_,aψ_,a⟩ - n_ CNP= 0, which corresponds of having fully filled valence bands in both layers. §.§.§ Model for TMDs We assume that the conduction (CB) and valence band (VB) of each layer around K and K' points are spin polarized. This assumption is realized under the assumption of large Ising spin-orbit coupling τΔ_vbs^z and τΔ_cbs^z with τ=± for the two valleys. For MX_2 the SOC gap is 2Δ_vb=148,186,219meV with X={S_2,Se_2,Te_2}. For WX_2 the gap is larger due to the larger SOC 2Δ_vb=429,466,484meV for VB and 2Δ_cb=-32,-37,-53meV for CB with X={S_2,Se_2,Te_2}. In the conduction band the SOC is small and corrections from the nearby conduction band might play an important role. Keeping terms up to the second order in momentum the k.p Hamiltonian reads: H_ν,K()=[ Δ_ν+α_ν k^2 v_ν k_-; v_ν k_+ -m_ν-γ_ν k^2 ], H_t/b,K'()= H^*_t/b,K(-), where the latter relation enforces the time reversal symmetry. The typical values of the parameters of the model are given in Table <ref> giving rise to a typical value of the ratio v/m≈ 2Å. In the main text we compare results obtained with gapped Dirac cones and parabolic bands replacing with mass ±√(Δ^2+v^2 k^2)≃Δ + k^2/2 m with m = Δ /v^2 and the same gap 2Δ. The quadratic band approximation neglects quantum geometric contribution to the superfluid stiffness which results in a reduced value of the BKT temperature of the exciton condensate. §.§.§ Symmetries The model is invariant under the transformation: ψ_,at→ e^iϕ_tψ_,at, ψ_,at→ e^iϕ_bψ_,at. Notice that ϕ_b+ϕ_t is the total phase associate to U(1)_ charge while the relative phase ϕ_t-ϕ_b is related to the U(1)_ layer symmetry. The latter is spontaneously broken in the excitonic state which is characterized by interlayer coherence. In addition, we observe that the model in Eq. <ref> breaks the particle-hole symmetry for α≠γ. The low-energy model is also invariant under the continuous space rotational rotational symmetry → R_θ and U_θ=diag(e^-iθ/2,e^iθ/2) which acts on the sublattice degree of freedom: U^†_θ H_ν,K/K'(R_θ ) U_θ= H_ν,K/K'( ). Including higher order terms in the k.p expansion lower the aforementioned symmetry to C_3z. We finally conclude observing that an additional crystalline symmetry is a spin-space mirror ℳ transformation which acts as (x,y)→(-x,y) and ↑↔↓. §.§.§ Diagonalizing the single-particle Hamiltonian The eigenvalues of the non-interacting Hamiltonian are given by E_±,t/b(k)=±ϵ_t/b(k), ϵ_t/b(k)=√(v^2_t/bk^2+Δ_t/b^2) while the matrix of eigenstates is: U()=1/√(2)[ 1 1; e^iϕ_ -e^iϕ_ ][ cosφ_k/2 -sinφ_k/2; sinφ_k/2 cosφ_k/2 ] , with tanφ_k=Δ/(vk). We now apply the unitary transformation: U() =[ U_t() 0; 0 U_b() ]. In this basis the kinetic part becomes diagonal and reads: H_ kin=∑_a=K,K'∑^Λ_Ψ^†_, a[ ε_t(k)σ^z+(V_b-μ)σ^0 0; 0 ε_b(k)σ^z+(-V_b-μ)σ^0 ]Ψ_, a, where Ψ_,a ν=U^†_a ν(k)ψ_,a ν with a=K/K' and ν=t,b. We now look at the interaction projected in the basis of the eigenstates of the kinetic term. 
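As a brief numerical aside before writing out the projected interaction, the transformation above can be checked directly. The sketch below uses assumed values of Δ and v (not the values quoted in Table <ref>): it confirms that U^†( k)H_K( k)U( k) is diagonal with eigenvalues ±ε(k) and evaluates a form factor F_ k, q=U^†( k)U( k+ q), which reduces to the identity at q=0.

```python
# Sketch: numerical check of the diagonalization and form factors above.
# Delta, v and the chosen momenta are assumed values, not those of the table
# above; only the K-valley Hamiltonian is considered.
import numpy as np

Delta, v = 1.0, 2.0

def H(k):
    kx, ky = k
    return np.array([[Delta, v * (kx - 1j * ky)],
                     [v * (kx + 1j * ky), -Delta]])

def U(k):
    kx, ky = k
    kk = np.hypot(kx, ky)
    phi = np.arctan2(ky, kx)                       # azimuthal angle of k
    varphi = np.arctan2(Delta, v * kk)             # tan(varphi) = Delta / (v k)
    A = np.array([[1, 1], [np.exp(1j * phi), -np.exp(1j * phi)]]) / np.sqrt(2)
    R = np.array([[np.cos(varphi / 2), -np.sin(varphi / 2)],
                  [np.sin(varphi / 2),  np.cos(varphi / 2)]])
    return A @ R

k, q = (0.6, -0.3), (0.1, 0.2)
eps = np.sqrt(Delta**2 + v**2 * (k[0]**2 + k[1]**2))

print(np.round(U(k).conj().T @ H(k) @ U(k), 10))   # ~ diag(+eps, -eps)
print(eps)
print(np.round(U(k).conj().T @ U(k), 10))          # F_{k,0} = identity
print(np.round(U(k).conj().T @ U((k[0] + q[0], k[1] + q[1])), 4))  # generic F_{k,q}
```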
The density operator for the layer ν=t/b reads: ρ^ν_=∑_ψ^†_νψ_+ν=∑_∑_αα'[F^ν_,]_αα'Ψ^†_ναΨ_+να', where we drop for the moment the valley index, and the form factor is obtained projecting the density operator in the Bloch basis F^ν_, = U^†_ν() U_ν(+), [F^ν_,]_αα' =⟨u^ν_,α|u^ν_+,α'⟩. The orthogonality condition implies that [F^ν_,0]_αα'=δ_αα'. As a result the interaction takes the form: H_ int = 1/2A∑^Λ_'∑_νν^'∑_aa'[F^ν,a_,]_αα'V^νν^'_[F^ν^',a'†_',]_γγ'Ψ^†_,aναΨ_+,aνα'Ψ^†_'+,a'ν^'γΨ_',a'ν^'γ', At mean-field level we decouple the interaction in the Hartree and Fock (exchange) terms that are derived in the following sections. For the sake of simplicity from now on we will denote k the two-dimensional momentum. §.§ Hartree term The Hartree term corresponds to: H_H=∑^Λ_k∑_a=K,K'∑_ν=t,bδμ_ν Ψ^†_k,aνΨ_k,aν-1/2A∑_νν^'V^νν^'_0∑^Λ_kk'∑_aa'=K,K'δρ^aν(k) δρ^a'ν^'(k'), where we have introduced the charge density measured with respect to charge neutrality: δρ^aν(k)=∑_α=±⟨Ψ^†_k,aναΨ_k,aνα⟩-Nδ_k,0. the contribution δμ_ν is given by: δμ_ν = 1/N∑^Λ_k∑_a=K,K'∑_α=±∑_ν^'δρ^aν^'_αα(k)V^ν^'ν_0. Before moving on we observe that 1/N∑^Λ_k∑_a=K,K'∑_α=±δρ^a_αα(k)=n_ν-1=sign(ν) ϕ_z, where ϕ_z measure the charge density transferred between the layers ϕ_z=(n_t-n_b)/2=n_t-1=1-n_b since at charge neutrality n_t+n_b=2 with this definition sign(t)=1 and sign(b)=-1. Thus, we have: δμ_t=ϕ_z(V^tt_0-V^bt_0), δμ_b=ϕ_z(V^tb_0-V^bb_0). Knowing that V^tt_q=V^bb_q and V^bt_q=V^tb_q we find: δμ_t=δμ , δμ_b=-δμ, δμ = ϕ_z(V^tt_0-V^bt_0), ϕ_z=(n_t-n_b)/2 . Physically this term corresponds to electrostatic repulsion. We notice that V^tt_0=V^tb_0 and δμ=0. This result holds for semiconductor at charge neutrality. §.§ Fock term The exchange term is given by: H_X=-∑^Λ_k∑_a=K,K'∑_νν^'Ψ^†_k,aναΣ^νν^',a_F,αβ(k)Ψ_k,aν^'β, where we have introduced the quantity: Σ^νν^',a_F,αβ(k)=1/A∑_k' V^νν^'_k'-k∑_α'β'[F^ν^',a†_k,k'-k]_β'β[F^ν,a_k,k'-k]_αα'δρ^ν^'ν,a_β'α'(k'), where we have introduced the quantity: δρ^ν^'ν,a_β'α'(k)=⟨Ψ^†_k,aν^'β'Ψ_k,aνα'⟩-δ_ν,ν^'δ_α'β'δ_α'-. The Fock self-energy is generically non-separable and the self-energy is k dependent. The Σ_F(k) is a 4-dim matrix in the layer and band basis. It includes an intralayer terms which renormalize the electronic band structure and an interlayer term giving rise to interlayer phase coherence and exciton condensation. The Hartree-Fock Hamiltonian takes the form: H_ HF=∑^Λ_kΨ^†_ k[H_ kin(k)-Σ_F(k)]Ψ_k. The self-consistency equation <ref> for the Fock self-energy is solved at every k-point in momentum space employing an iterative approach. Fig. <ref> shows the typical band structure and the interlayer gap function, defined as √(Δ^†(k)Δ(k)) with Δ( k) interlayer component of the Fock self-energy, obtained at the Hartree-Fock level for homobilayer TMDs.
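The iterative scheme described above can be illustrated in a stripped-down setting: the two-identical-band gap equations quoted earlier in this appendix, with the lattice regularization ε_k=2t(2-cos k_x-cos k_y), solved by damped fixed-point iteration on a momentum grid. The sketch below is our own illustration; t, V, V_b, the grid size and the mixing are assumed values and are not meant to reproduce the figures.

```python
# Sketch: a stripped-down version of the iterative self-consistency above --
# the two-identical-band gap equations with the lattice regularization
# eps_k = 2t(2 - cos kx - cos ky).  t, V, V_b, the grid size and the mixing
# are assumed values and are not tied to the figures.
import numpy as np

t, V, Vb = 1.0, 2.0, -4.0
N1d = 100
ks = 2 * np.pi * np.arange(N1d) / N1d
KX, KY = np.meshgrid(ks, ks, indexing="ij")
eps = 2 * t * (2 - np.cos(KX) - np.cos(KY))

phi, phi_z = 0.5, 0.5                        # initial guess
for _ in range(400):                         # damped fixed-point iteration
    xi = eps + Vb - phi_z
    E = np.sqrt(xi**2 + phi**2)
    phi_new = V * np.mean(phi / (2 * E))     # from  1/V = (1/N) sum_k 1/(2 E_k)
    phi_z_new = 0.5 * V * np.mean(1 - xi / E)
    phi = 0.5 * phi + 0.5 * phi_new
    phi_z = 0.5 * phi_z + 0.5 * phi_z_new

xi = eps + Vb - phi_z
E = np.sqrt(xi**2 + phi**2)
print(phi, phi_z)                            # converged order parameter and charge transfer
print(1.0 / V - np.mean(1.0 / (2 * E)))      # gap-equation residual, ~ 0
```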
http://arxiv.org/abs/2307.02459v1
20230705173251
Gaussian Database Alignment and Gaussian Planted Matching
[ "Osman Emre Dai", "Daniel Cullina", "Negar Kiyavash" ]
cs.IT
[ "cs.IT", "cs.DB", "cs.DS", "cs.LG", "math.IT", "stat.ML" ]
Landscape approximation of low energy solutions to binary optimization problems Dimitris G. Angelakis August 1, 2023 =============================================================================== Database alignment is a variant of the graph alignment problem: Given a pair of anonymized databases containing separate yet correlated features for a set of users, the problem is to identify the correspondence between the features and align the anonymized user sets based on correlation alone. This closely relates to planted matching, where given a bigraph with random weights, the goal is to identify the underlying matching that generated the given weights. We study an instance of the database alignment problem with multivariate Gaussian features and derive results that apply both for database alignment and for planted matching, demonstrating the connection between them. The performance thresholds for database alignment converge to that for planted matching when the dimensionality of the database features is ω(log n), where n is the size of the alignment, and no individual feature is too strong. The maximum likelihood algorithms for both planted matching and database alignment take the form of a linear program and we study relaxations to better understand the significance of various constraints under various conditions and present achievability and converse bounds. Our results show that the almost-exact alignment threshold for the relaxed algorithms coincide with that of maximum likelihood, while there is a gap between the exact alignment thresholds. Our analysis and results extend to the unbalanced case where one user set is not fully covered by the alignment. § INTRODUCTION The modern ubiquity of data collection has drawn interest to the problem of data alignment, which is described as follows: We have two data sets containing information regarding various anonymized users. Both sets might contain data associated with a particular user, in which case we observe correlation between data of given user. Alignment is the problem of identifying such correlated pairs. This enables data merging (e.g. in the field of computational biology <cit.> or computer vision <cit.>) or de-anonymization (with several high-profile instances, such as the 2006 Netflix Prize incident <cit.> or 2016 release of the MBS/PBS healthcare data <cit.>). Database alignment and graph alignment are two well studied versions of the alignment problem. In the former setting, the data consists of multi-dimensional features, each associated with an individual user. These features are correlated across the two databases only if there are associated with the same user. In the graph setting, features are associated with pairs of users, and they are correlated across the two graphs only if the pairs match. A line of work has studied alignment of correlated Erdős-Rényi graphs, identifying information theoretic bounds for exact alignment <cit.>, for partial alignment <cit.>, analyzing various efficient and nearly-efficient algorithms <cit.>. There are other results on variants of this problem, considering alignment with side information <cit.> or alignment of graphs with Gaussian edge weights <cit.>. For database alignment, the earliest result identified the sharp information theoretic condition for the exact alignment of databases with finite-alphabet features <cit.>. A later study identified tight bounds for almost-exact alignment when features are high-dimensional and each dimension is i.i.d with an arbitrary distribution <cit.>. 
Several works focused on Gaussian databases, presenting the sharp conditions of exact and almost-exact alignment of databases with Gaussian features <cit.> and identifying the elliptic boundary that show the order of magnitude of errors that is achievable within the almost-exact alignment region <cit.>. Another work studied the related problem of testing whether Gaussian databases are correlated <cit.>. Planted matching is a closely related problem. In this setting, we consider a bipartite graph, with random edge weights, over a pair of user sets with an underlying true matching. The weights of edges across true user pairs are different from those across users that are not matched under the true matching. In the original formulation <cit.>, it is motivated by the problem of tracking moving particles between two images. With particles in the two images forming the vertex set of the bipartite graph, each edge weight is some calculated measure of likelihood of two particles from different images correspond to each other. This can be considered as variant of the database alignment formulation where the features in each database is the information about the particle in each image. One significant difference between the two formulations is that in planted matching, edge weights are typically considered to be independent, while for database alignment, the actual likelihood measures between pairs are not truly independent. This follows from the observation that, in database alignment, the likelihood of the matched pair (u,v) and that of (u,v') both depend on the value of the feature associated with u. Earliest work studied planted matching in the case where the non-matched edges have uniform distribution while the distribution matched edges is folded Gaussian <cit.>. Later work studied the case where all edges are exponentially distributed, the matched ones having finite mean while the non-matched edges have mean on the order of the number of vertices in the bipartite graph <cit.>, which was then extended to consider a variant of the problem with side information, where only a subset of the vertex pairs are eligible to be included in the matching, i.e. the matching is planted in a non-complete bipartite graph <cit.>. This last study established the sharp information theoretic conditions for almost-exact alignment in the case where the matched edge has a distribution with fixed density (i.e. not dependent on the size of the vertex set) while that of the non-matched edges is scaled (i.e. stretched) by the average degree in the bipartite graph. The Gaussian case we study is distinct from the planted matching cases described above. We consider the case where all edges are Gaussian with unit variance and some distance between the means. In the regime of interest, the difference in means scales with the square root of log of the number of vertices. We derive sharp thresholds that dictate the order of magnitude of errors at any signal level. Furthermore, we extend our analysis to the unbalanced case where the two vertex sets differ in size and the true matching is injective but not surjective. We also consider the performance of various relaxations of the maximum likelihood estimator to understand the dynamics of this problem and the significance of various constraints at different signal levels. § MODEL §.§ Notation Random variables are denoted by upper-case Latin letters while their instances are denoted with the corresponding lowercase letter. 
Vectors and vector-valued functions are expressed by arrows (e.g. ) while bold font is used for matrices (e.g. ). The natural numbers and real numbers are denoted and respectively while calligraphic notation is used for others sets (e.g. ). §.§ Correlated Gaussian Databases Let and denote sets of users. Let M denote an underlying partial mapping between and : a bijection between some subset of and . We write uMv if u∈ and v∈ are mapped to each other by M. Any user from one database is mapped to at most one user from the other database. We use |M| to denote the number of pairs mapped by M. Let denote the matrix encoding of the mapping in {0,1}^× such that M_u,v=1 if u and v are mapped. Databases are represented by functions that return feature vectors for each user in the relevant user set. :→^_a and :→^_b are the databases associated with the two sets of users. The features in each database are indexed by elements in the sets _a and _b respectively. We say and are a pair of correlated Gaussian databases with covariance _a_ab_ab^⊤_b if * All entries in and are, together, jointly Gaussian. * (u) is independent and identically distributed with variance _a for every u∈. Similarly, (v) is independent and identically distributed with variance _b for every v∈. * Cov(u), (v) = _ab if uMv and Cov(u), (v) = 0 if uMv. Under this model, features in and may have arbitrary dimension |_a| and |_b| respectively. However, as shown in [sec:canonical]Appendix <ref>sec:canonical, knowledge of the statistic can be used to perform linear transformations on features from each database and eliminate degrees of freedom of the features that are not correlated with the other database. Problem setting: We consider the scenario where we observe a pair of correlated Gaussian databases and with an unknown partial mapping M between the sets of users and . The statistics and of the i.i.d. distribution of correlated feature pairs are known. We have no prior knowledge of the mapping M beyond its size. We say the problem is unbalanced if ||≠ ||. §.§ Planted Matching on Gaussian Bigraph , and M are defined as in [subsec:modelDatabase]Subsection <ref>subsec:modelDatabase. Given parameter μ∈, let , taking values in ^×, denote the weight matrix of bipartite graph over and such that, given M, has independent Gaussian entries with unit variance and mean [W_u,v]=μ if uMv and [W_u,v]=0 otherwise. Without loss of generality, we assume μ>0. Problem setting: We observe and want to identify the proper matching M. The parameter μ is known. We have no prior knowledge of M beyond its size. §.§ Algorithms For database alignment, let be the matrix such that G_u,v is the log-likelihood ratio of hypotheses uMv vs. uMv for any (u,v)∈×. ( can be calculated in polynomial time.) For planted matching, let denote the random weight matrix. * Maximum likelihood estimation: For both problems, the maximum likelihood estimator can be expressed as a linear program. Given ||=|M|, for arbitrary τ∈, find the maximizer ∈^× that maximizes -τ, or μ-τ, under the constraints: (a) ∑_u∈ m_u,v≤ 1, ∀ v∈ (b) ∑_v∈ m_u,v = 1, ∀ u∈ (c) m_u,v∈ [0,1], ∀ (u,v)∈× This is equivalent to the linear assignment problem and can be solved in polynomial time. * Maximum row estimation: Removing constraint (a) gives us an algorithm that individually considers each user in , blind to all other users in the set. This relaxation is relevant when the mapping of a small subset of users is of interest, and not that of the entire set. 
* Threshold testing: Removing (a) and (b) gives an algorithm that performs a likelihood ratio tests for each pair (u,v)∈× to decide whether the given pair is part of the true mapping. For each case, the constraint matrix is totally unimodular and therefore has an integer valued solution ∈{0,1}^×. So the solutions always give us a mapping → although not necessarily injective for maximum row estimation and not necessarily injective nor single-valued for threshold testing. A detailed description of algorithms as well as their computational complexity is given in Appendix A. § RESULTS In this section, we summarize our main results. For database alignment, let I_XY denote the total mutual information between a pair of correlated features from the two databases. For planted matching, let μ denote the difference of means. All the results are written in terms of `signal strength' ζ which refers to I_XY for database alignment, and to μ^2/2 for planted matching. We analyse the performance of the maximum likelihood estimator for M, which can be expressed as a linear program: For database alignment, let I_XY denote the total mutual information between a pair of correlated features from the two databases. For planted matching, let μ denote the difference of means. We consider the following three algorithms' performance in this work: * Maximum likelihood estimation: Finds the complete mapping that maximizes the likelihood function. * Maximum row estimation: For each user u∈, finds the user in v∈ that maximizes the likelihood of pair (u,v). It may return a `mapping' where users in are mapped to multiple users. * Threshold testing: Performs a likelihood ratio test on every pair (u,v)∈× to determine whether they exceed a threshold in which case they are matched. It may return a `mapping' where users in both and are mapped to multiple users. As we show in ADD REF [sec:algo]Section <ref>sec:algo, maximum likelihood estimation can be expressed as a linear program, and the other two algorithms as relaxations of this linear program. For database alignment, when per-feature correlation is low, measures of correlation relevant for our analysis can always be expressed in terms of mutual information I_XY. Therefore the statements of some of our results are only accurate in this setting. We formally define this regime, where a large number of dimensions each carry infinitessimally small information: [Low per-feature correlation in database alignment] The covariance matrix = _a_ab_ab^⊤_b is said to satisfy the low per-feature correlation condition if ||_a^-1/2_ab_b^-1/2||_2 ≤ o(1), where ||·||_2 denotes the ℓ_2 operator norm, i.e. largest singular value. Under this condition, mutual information is I_XY=1/2||_a^-1/2_ab_b^-1/2||_F^2(1+o(1)). The squared Frobenius norm is the sum of the singular values squared, so the condition implies I_XY = o(d) where d is the number of dimensions of features. Since I_XY = Ω(log n) in the regime where alignment is feasible, [cond:highDimensional]Condition <ref>cond:highDimensional implies dimensionality d≥ω(log n). The connection between this condition and the low per-feature correlation setting is shown in Appendix B. §.§ Achievability We say an algorithm achieves exact alignment if the expected number of misaligned users is o(1), and almost-exact alignment if the expected number of misaligned users is o(n) where n is the number of users covered by the true alignment. Let n = |M| = ||. Define α = log||-n/log n if ||>n. 
ζ≥ clog n + ω(1) is a sufficient condition for exact alignment, where the value of c is given below for the different algorithms and different sizes of ||: Size of ||=n n<|| ≤ n + o(n) ||≥ n+Ω(n) Threshold. 1+√(2)^2 1+√(2)^2 1+√(1+α)^2 Max row 4 4 1+√(α)^2 Max likelihood 2 2(α+1) 1+√(α)^2 For almost-exact alignment, ζ≥ (max{α,1})log n + ω√(log n) is a sufficient condition for all three algorithms. Then, as n→∞, the following are sufficient conditions for exact alignment: * Threshold testing: [8cm]ζ≥1+√(1+max{α,1})^2log n + ω(1) * Maximum row estimation: [8cm]ζ≥1+√(max{α,1})^2log n + ω(1) * Maximum likelihood estimation: [6cm]ζ≥ 2(α + 1)log n + ω(1) if 0≤α≤ 1 [6cm]ζ≥1+√(α)^2log n + ω(1) if 1≤α For almost-exact alignment, the condition below holds for all three aforementioned algorithms. * [8cm]I_XY≥ (max{α,1})log n + ω√(log n) Note that for threshold testing and maximum row estimation, the exact-alignment thresholds does not change with the size of || as long as || is on the order of n. For maximum likelihood estimation, the exact-alignment threshold increases linearly with α in this regime. For ||≥Ω(n), the exact-alignment thresholds for all three algorithms increase quadratically with √(α). Furthermore, the thresholds for maximum row estimation and maximum likelihood estimation coincide. The boundaries for exact and almost-exact alignment for the algorithms are illustrated in [fig:threshold]Fig. <ref>fig:threshold. Boundaries for maximum likelihood and maximum row algorithms completely overlap for I_XY/log n ≥ 4. Let n = |M| = || = ||. The following sufficient conditions guarantee no more than n^1-β+o(1)[Sufficient conditions of linear and vertical form for maximum likelihood estimation both achieve an error bound of n^1-β. For planted matching, the parabolic and elliptic ones, as well as the linear one for maximum row estimation achieve 2n^1-β. For database alignment, the parabolic and elliptic ones achieve 2n^1-β+o(1) asymptotically as n→∞, while the linear one for maximum row estimation achieves 2n^1-β.] errors in expectation. (The bounds that require [cond:highDimensional]Condition <ref>cond:highDimensional for database alignment are specified.) Sufficient cond. Range of β Form of boundary Req. [cond:highDimensional]Cond. <ref>cond:highDimensional 4|l|Threshold testing ζ≥√(β)+√(1+β)^2log n 0<β≤1/2. parabolic no 4|l|Maximum row estimation ζ≥1+√(β)^2log n 0<β≤ 1/2 parabolic yes ζ≥ 2(1+β)log n 1<β≤1/2. linear no 4|l|Maximum likelihood estimation ζ≥1+2√(β(1-β))log n 0<β≤ 1/2 elliptic yes ζ≥ 2log n + 2log√(5)-1/2 1/2<β≤ 1/2 vertical no ζ≥ (1+β)log n 1<β≤1/2. linear no * Threshold testing: [6cm]I_XY≥√(β)+√(1+β)^2log n * Maximum row estimation: [6cm]I_XY≥1+√(β)^2log n if 0<β≤1 [6cm]I_XY≥ 2(1+β)log n if 1≤β * Maximum likelihood estimation: [6cm]I_XY≥1+2√(β(1-β))log n if 0<β≤1/2 [6cm]I_XY≥ 2log n + ω(1) if 1/2≤β≤1 [6cm]I_XY≥ (1+β)log n if 1≤β The boundaries for the achievability regions of the algorithms are illustrated in [fig:expectation0]Fig. <ref>fig:expectation0. Let n = |M| = || and ||>n. Define α = log||-n/log n. The following sufficient conditions guarantee no more than n^1-β+o(1)[The constants are the same as those given in footnote for [thm:beta]Theorem <ref>thm:beta.] errors in expectation. (The bounds that require [cond:highDimensional]Cond. <ref>cond:highDimensional for database alignment are specified.) Sufficient cond. Range of β Boundary Req. [cond:highDimensional]Cond. 
<ref>cond:highDimensional 4|l|Threshold testing ζ≥√(max{α,1}+β)+√(β)^2log n 0<β parabolic no 4|l|Maximum row estimation ζ≥√(max{α,1})+√(β)^2log n 0<β≤max{α,1} parabolic yes ζ≥ 2max{α,1}+βlog n max{α,1}<β linear no 4|l|Maximum likelihood estimation ζ≥1+2√(β(1-β))log n 0<β≤min{1-α,1/2} elliptic yes ζ≥√(α)+√(β)^2log n 1-α<β≤α parabolic yes ζ≥ 2log n + 2log3+√(5)/2 1/2<β≤ 1-α vertical no ζ≥ 2(α+β)log n max{α,1-α}<β linear no The following are sufficient conditions to have expected number of errors bounded by 2 n^1-β: * Threshold testing: [6cm]I_XY≥√(max{α,1}+β)+√(β)^2log n * Maximum row estimation: [6cm]I_XY≥√(max{α,1})+√(β)^2log n if 0<β≤max{α,1} [6cm]I_XY≥ 2max{α,1}+βlog n if max{α,1}≤β * Maximum likelihood estimation: [6cm]I_XY≥1+2√(β(1-β))log n if 0<β≤min{1-α,1/2} [6cm]I_XY≥ 2log n + ω(1) if 1/2≤β≤α [6cm]I_XY≥√(α)+√(β)^2log n if 1-α≤β≤α [6cm]I_XY≥ 2(α+β)log n if α≤β The boundaries for the achievability region of the maximum likelihood estimator for various values of α are illustrated in [fig:expectationMLE]Fig. <ref>fig:expectationMLE. Boundaries for α=1.5 and α=1.25 are tangent to y=1 at x=1.5 and x=1.25 respectively, while all other boundaries are tangent to that line at x=1. These match the almost-exact alignment threshold. The boundaries for the achievability region of maximum likelihood/maximum row estimation, which coincide, and for thresholding at α = 1.5 are illustrated in [fig:expectation1]Fig. <ref>fig:expectation1. Both boundaries are tangent to y=1 at x=1.5. This matches the almost-exact alignment threshold. The proofs for Theorems <ref>, <ref>, <ref> are given in Appendix C. §.§ Converse results For the planted matching model, we have matching converse results in multiple regimes. Let n = |M| = || and α = log||-n/log n if ||>n. Each of the following conditions guarantee that any estimator makes at least Ω(n^1-β) errors with probability 1-o(1): Necessary cond. Range of β Boundary ζ≤1+2√(β(1-β))log n - ω(loglog n) 0<β≤min{1-α,1/2} elliptic ζ≤√(α)+√(β)^2 log n - ω(loglog n) 1-α<β parabolic ζ≤ 2 log n - ω(loglog n) 1/2<β vertical ζ≤1+2√(β(1-β))log n - ω(loglog n) 0 < β≤1/2 ζ≤√(α)+√(β)^2 log n - ω(loglog n) 0 < β≤ 1 ζ≤ 2 log n - ω(loglog n) 1/2≤β≤ 1. The proof is given in Appendix D. The maximum likelihood estimator is also the maximum a posteriori estimator and thus is the optimal estimator for exact recovery. It is not necessarily optimal for partial recover: it does not necessarily maximize the probability that it makes at most n^1-β errors for β < 1. However, this converse shows that the maximum likelihood estimator is asymptotically optimal for partial recovery: the conditions needed to ensure partial correctness match the converse conditions in the leading term. §.§ Interpretation and intuition behind results We present the intuition behind the various phase transitions of the achievability boundaries. §.§.§ Merging of boundary for maximum likelihood estimation and maximum row estimation As shown by [thm:alphabeta]Theorem <ref>thm:alphabeta and [fig:expectationMLE]Fig. <ref>fig:expectationMLE, the boundaries for the achievability regions of maximum likelihood estimation and maximum row estimation fully coincide if the number of unmatched users in is on the order of n or greater. As shown by [thm:alpha]Theorem <ref>thm:alpha and [fig:threshold]Fig. <ref>fig:threshold, this is also true for the almost-exact alignment threshold. 
Maximum row estimation corresponds to a relaxation of the maximum likelihood estimation by removing the constraint that every vertex in can have at most one match in . That is, when looking for the true mapping matrix ∈{0,1}^×, maximum likelihood estimation still only accepts a single non-zero entry on each row, but ignores the number of entries on each column. For α<1 bounded away from 1, the average non-zero entries in each column is n/n+n^α≥ 1-o(1). So the column constraint of maximum likelihood estimation is tight for almost every column. Therefore, for such α, the column constraint is relevant and its removal results in the introduction of a significant increase in the expected number of errors. This creates a gap between the boundaries of maximum likelihood estimation and maximum row estimation. On the other hand, for α > 1 bounded away from 1, the average non-zero entries in each column is n/n+n^α≤ o(1). So the column constraint is loose for almost every column. Then, for such α, relaxing the column constraint results in no significant loss in performance and the boundaries for the two algorithms coincide. §.§.§ Transition from the elliptic to the quadratic boundary for maximum likelihood estimation By [thm:alphabeta]Theorem <ref>thm:alphabeta, given α>1/2, there is a phase transition in the achievability boundary for maximum likelihood estimation as the bound on the expected number of errors goes beyond n^α. Recall that n=|M|=||, and n^α = ||-n. Consider a simplified model to generate an estimated mapping m: In step 1, make an independent decision for every true pair whether to include it in m or not. Each pair is failed to be included with probability n^1-β/n. In step 2, randomly assign each of the users in that haven't been mapped in step 1, to a random user in that also hasn't been mapped in step 1. In expectation, there are n^1-β and n^1-β+n^α users that haven't been mapped in step 1 in and respectively. Based on this process, every falsely-paired user in will be mapped to about n^1-β/n^1-β+n^α = 1+n^α+β-1^-1 users in expectation. If α+β < 1 bounded away from 1, then this value is 1-o(1). So the restriction on having each user from mapped to at most 1 user is tight for almost all users in . This implies that, at that point, this constraint is still able to contribute to the elimination of misalignments: An increase in signal strength leads to users previously not paired in step 1 to be correctly paired, which in turn would trigger the mentioned constraint, leading to chain reactions that might fix even more misaligned pairs. For α+β >1 bounded away from 1, the average number of users mapped to the falsely-paired users in is o(1). In this regime, the constraint is loose for almost every falsely-paired user in , and the restriction does not help in eliminating misalignments. The fact that this restriction stops being relevant around the point β = 1-α demonstrates itself on the boundary as an immediate decrease in the absolute value of the slope. At this point, increasing mutual information does not decrease the error exponent as quickly, as the algorithm can no longer leverage the constraint on the number of pairs of users from . Note that, for α <1 bounded away from 1, the gap between maximum likelihood estimation and maximum row estimation still persists whether α+β < 1 or α+β >1. This is due to the fact the correct pairing of the n-n^1-β users in step 1 does rely on the column constraint. 
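The counting argument above is straightforward to check numerically. The short sketch below draws one realization of the two-step process for illustrative values of n, α and β (placeholders, not values tied to any theorem), compares the average number of users assigned to a leftover user of 𝒱 with the prediction (1+n^{α+β-1})^{-1}, and reports how often the at-most-one constraint would actually be violated.

import numpy as np

# Toy Monte-Carlo rendering of the two-step model above; n, alpha, beta are placeholders.
rng = np.random.default_rng(0)
n, alpha, beta = 2000, 0.6, 0.2
extra = int(round(n ** alpha))                 # number of unmatched users in V

# Step 1: each true pair is kept independently, failing with probability n^{-beta}.
kept = rng.random(n) < 1.0 - n ** (-beta)
left_U = np.flatnonzero(~kept)                               # leftover users of U
left_V = np.concatenate([np.flatnonzero(~kept),              # their true partners ...
                         n + np.arange(extra)])              # ... plus the extra users of V

# Step 2: each leftover user of U picks a uniformly random leftover user of V.
pick = rng.choice(left_V, size=left_U.size, replace=True)
counts = np.bincount(pick, minlength=n + extra)[left_V]

predicted = 1.0 / (1.0 + n ** (alpha + beta - 1.0))
print("mean users mapped to a leftover user of V:", counts.mean(), "predicted:", predicted)
print("fraction of leftover users of V hit more than once:", np.mean(counts > 1))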
§.§.§ Transition between quadratic boundary to linear boundary of maximum likelihood estimation and maximum row estimation When a true pair has a sufficiently low score in or , the expected number of errors involving that pair is larger than 1. Call this a "bad true pair." This phase transition between the parabolic boundary and the linear one corresponds to a change in the importance of the bad true pairs. In the parabolic region, most errors involve a bad true pair. In the linear region, most errors involve true pairs that are not bad. See the discussion in Appendix E of the cycle-path decomposition of a pair of matchings for the precise interpretation of "most errors" in these statements. §.§.§ Halving of slope of linear boundary of maximum likelihood estimation As shown by [thm:alphabeta]Theorem <ref>thm:alphabeta and [thm:alphabeta]Theorem <ref>thm:alphabeta, the slope of the linear boundary for maximum likelihood estimation is halved when going from ||=n for ||>n. This is illustrated by the last two curves in [fig:expectationMLE]Fig. <ref>fig:expectationMLE. For both cases, the linear region boundary is only relevant for β large, i.e. when the error exponent is very small. In this regime, the misalignment of any pair is very rare. For ||=n, a misalignment requires at least two pairs of users. Specifically, we need some pairs (u_0,v_0) and (u_1,v_1) such that, (u_0,v_1) and (u_1,v_0) jointly make a better pairing. This requires the occurrence of two relatively low score pairs as well as the corresponding misalignment to have two relatively high score false pairs. As such, this error event requires many unlikely things to coincide, making its likelihood very small as we increase signal strength beyond the point ζ=2log n. On the other hand, when ||≥ n+1, there is at least one user, say v'∈, that already has no pair. Therefore a misalignment may consist of a single misaligned pair, e.g. (u_0,v') instead of (u_0,v_0). This requires the occurrence of a single relatively low score pair as well as a relatively high score misalignment of the first pair with an with unpaired user, of which there are ||-n. The inclusion of even a single extra therefore makes the unlikely misalignment event somewhat less exceptional. §.§.§ Vertical segment of boundary of maximum likelihood estimation This phase transition involves a shift in the structure of the typical error. As explained in [subsec:halving]Subsection <ref>subsec:halving, in the linear region, the dominant type of error is either a 2-cycle error (if |𝒱| = n) or a (1,1)-path error (|𝒱| > n). In the elliptic boundary region, much longer cycles and paths are dominant. In the balanced case all errors come from cycles. When ζ = 2 log n, the expected contributions of each cycle length to the number of errors are equal. Each of these contributions has a linear dependence on ζ with a slope proportional to the cycle length. Thus the contribution of cycles of length ω(1) produces the vertical boundary. The top of the vertical boundary occurs due to an effect similar to that described in [subsec:bad-pairs]Subsection <ref>subsec:bad-pairs. In the elliptical boundary region, most long-cycle errors involve a bad true pair, so the number of bad true pairs controls the overall number of errors. §.§.§ Gap between maximum row estimation and threshold testing The difference between the maximum row estimators and the threshold testing estimator is constraint (b) in the linear program, which ensures that the estimated matrix has exactly one 1 in each row. 
This constraint can be included because of our assumption that |M| = |𝒰|, i.e. that every user is the first database has a match in the second database. The gap between the performance of the maximum row and threshold testing estimators means that this constraint corrects most errors in the threshold testing estimator. If we has a sufficiently large gap between |M| and |𝒰, we would see the performance of these two estimators converge, just and the ML and maximum row estimators converge in performance when |𝒱| is sufficiently larger than |𝒰|. § OPEN QUESTIONS AND FUTURE WORK low dimensional features: we think low dim case is strictly easier. This is suggested by our results that don't require the high dim assumption. Detailed structure of the zero-order phase transition More complete understanding of conditions for zero order transition We believe that Gaussian database alignment becomes information-theoretically easier as the feature dimensionality decreases. In other words, as the same amount of mutual information is concentrated in a smaller number of features, it is more valuable for alignment. Several of our achievability results do not require Condition 1 and thus provide evidence for this. Our new converse works only for the planted matching problem, i.e. in the infinite dimension limit of the database alignment problem. Showing this monotonic dependence on feature dimensionality is an open problem. Other interesting questions involve the finer structure of the phase transition at the exact recovery threshold. If ζ = 2 log n + c loglog n, how many errors does the ML estimator make? Finally, what are the precise conditions that cause a planted matching or database alignment problem to have a discontinuity in the number of errors at this threshold? abbrv 10 barak2019nearly B. Barak, C.-N. Chou, Z. Lei, T. Schramm, and Y. Sheng. (nearly) efficient algorithms for the graph matching problem on correlated random graphs. Advances in Neural Information Processing Systems, 32, 2019. chertkov2010inference M. Chertkov, L. Kroc, F. Krzakala, M. Vergassola, and L. Zdeborová. Inference in particle tracking experiments by passing messages between images. Proceedings of the National Academy of Sciences, 107(17):7663–7668, 2010. conte2004thirty D. Conte, P. Foggia, C. Sansone, and M. Vento. Thirty years of graph matching in pattern recognition. International journal of pattern recognition and artificial intelligence, 18(03):265–298, 2004. cullina2017exact D. Cullina and N. Kiyavash. Exact alignment recovery for correlated erdős rényi graphs. arXiv preprint arXiv:1711.06783, 2017. cullina2018fundamental D. Cullina, P. Mittal, and N. Kiyavash. Fundamental limits of database alignment. In 2018 IEEE International Symposium on Information Theory (ISIT), pages 651–655. IEEE, 2018. culnane2017health C. Culnane, B. I. Rubinstein, and V. Teague. Health data in an open world. arXiv preprint arXiv:1712.05627, 2017. dai2019database O. E. Dai, D. Cullina, and N. Kiyavash. Database alignment with gaussian features. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 3225–3233. PMLR, 2019. dai2020achievability O. E. Dai, D. Cullina, and N. Kiyavash. Achievability of nearly-exact alignment for correlated gaussian databases. In 2020 IEEE International Symposium on Information Theory (ISIT), pages 1230–1235. IEEE, 2020. dai2019analysis O. E. Dai, D. Cullina, N. Kiyavash, and M. Grossglauser. Analysis of a canonical labeling algorithm for the alignment of correlated erdős-rényi graphs. 
Proceedings of the ACM on Measurement and Analysis of Computing Systems, 3(2):1–25, 2019. ding2021efficient J. Ding, Z. Ma, Y. Wu, and J. Xu. Efficient random graph matching via degree profiles. Probability Theory and Related Fields, 179:29–115, 2021. ding2021planted J. Ding, Y. Wu, J. Xu, and D. Yang. The planted matching problem: Sharp threshold and infinite-order phase transition. arXiv preprint arXiv:2103.09383, 2021. fan2022spectralA Z. Fan, C. Mao, Y. Wu, and J. Xu. Spectral graph matching and regularized quadratic relaxations i algorithm and gaussian analysis. Foundations of Computational Mathematics, pages 1–55, 2022. fan2022spectralB Z. Fan, C. Mao, Y. Wu, and J. Xu. Spectral graph matching and regularized quadratic relaxations ii: Erdős-rényi graphs and universality. Foundations of Computational Mathematics, pages 1–51, 2022. ganassali2022sharp L. Ganassali. Sharp threshold for alignment of graph databases with gaussian weights. In Mathematical and Scientific Machine Learning, pages 314–335. PMLR, 2022. ganassali2021impossibility L. Ganassali, L. Massoulié, and M. Lelarge. Impossibility of partial recovery in the graph alignment problem. In Conference on Learning Theory, pages 2080–2102. PMLR, 2021. hall2022partial G. Hall and L. Massoulié. Partial recovery in the graph alignment problem. Operations Research, 2022. lyzinski2014seeded V. Lyzinski, D. E. Fishkind, and C. E. Priebe. Seeded graph matching for correlated erdős-rényi graphs. J. Mach. Learn. Res., 15(1):3513–3540, 2014. mao2022random C. Mao, Y. Wu, J. Xu, and S. H. Yu. Random graph matching at otter's threshold via counting chandeliers. arXiv preprint arXiv:2209.12313, 2022. moharrami2021planted M. Moharrami, C. Moore, and J. Xu. The planted matching problem: Phase transitions and exact results. The Annals of Applied Probability, 31(6):2663–2720, 2021. mossel2020seeded E. Mossel and J. Xu. Seeded graph matching via large neighborhood statistics. Random Structures & Algorithms, 57(3):570–611, 2020. narayanan2008robust A. Narayanan and V. Shmatikov. Robust de-anonymization of large sparse datasets. In 2008 IEEE Symposium on Security and Privacy (sp 2008), pages 111–125. IEEE, 2008. pedarsani2011privacy P. Pedarsani and M. Grossglauser. On the privacy of anonymized networks. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1235–1243, 2011. ramshaw2012minimum L. Ramshaw and R. E. Tarjan. On minimum-cost assignments in unbalanced bipartite graphs. HP Labs, Palo Alto, CA, USA, Tech. Rep. HPL-2012-40R1, 2012. semerjian2020recovery G. Semerjian, G. Sicuro, and L. Zdeborová. Recovery thresholds in the sparse planted matching problem. Physical Review E, 102(2):022304, 2020. shirani2017seeded F. Shirani, S. Garg, and E. Erkip. Seeded graph matching: Efficient algorithms and theoretical guarantees. In 2017 51st Asilomar Conference on Signals, Systems, and Computers, pages 253–257. IEEE, 2017. shirani2019concentration F. Shirani, S. Garg, and E. Erkip. A concentration of measure approach to database de-anonymization. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 2748–2752. IEEE, 2019. singh2008global R. Singh, J. Xu, and B. Berger. Global alignment of multiple protein interaction networks with application to functional orthology detection. Proceedings of the National Academy of Sciences, 105(35):12763–12768, 2008. wu2022settling Y. Wu, J. Xu, and H. Y. Sophie. Settling the sharp reconstruction thresholds of random graph matching. 
IEEE Transactions on Information Theory, 68(8):5391–5417, 2022. zeynep2022detecting K. Zeynep and B. Nazer. Detecting correlated gaussian databases. In 2022 IEEE International Symposium on Information Theory (ISIT), pages 2064–2069. IEEE, 2022. § ALGORITHMS First we introduce the information density matrix in [subsec:algo1]Subsection <ref>subsec:algo1, which justifies the implementations of the algorithms that we present. Then in [subsec:algo2]Subsection <ref>subsec:algo2 we formulate these algorithms as linear programs with a clear hierarchy in their constraints. Finally [subsec:algo3]Subsection <ref>subsec:algo3 presents an analysis of the computational complexities of each algorithm. §.§ Information density matrix for database alignment Let , denote correlated Gaussian databases as described in [subsec:modelDatabase]Subsection <ref>subsec:modelDatabase. Let f_XY, f_X and f_Y denote the joint and marginal distributions for correlated features in and . Given any partial mapping m, let _m⊆ and _m⊆ denote the sets of users that have a mapping under m and _m⊆_m×_m denote the set of pairs mapped by m. Then the log-likelihood of observing and under the assumption that M=m is given by ∑_(u,v)∈_mlog f_XY((u),(v)) + ∑_u∈∖_mlog f_X((u)) + ∑_v∈∖_mlog f_Y((v)). Let ∈^× denote the information density matrix such that G_u,v = logf_XY((u),(v))/f_X((u))f_Y((v)). In other words, G_u,v is the log-likelihood ratio between hypotheses uMv vs. uMv. Let ∈{0,1}^× denote a matrix encoding of the mapping m such that m_u,v = 1 umv. Then, the inner product , equals ∑_(u,v)∈_mlog f_XY((u),(v)) - ∑_u∈_mlog f_X((u)) - ∑_v∈_mlog f_Y((v)). It then follows that , + ∑_u∈log f_X((u)) + ∑_v∈log f_Y((v)) exactly equals the expression given in (<ref>). The terms following , in (<ref>) do not depend on m. So, the choice of m that maximizes , is the same as the maximizer for log-likelihood, as given in (<ref>). Then, Then contains all information relevant to identifying the underlying mapping M. §.§ Log-likelihood for planted matching The log-likelihood of observation under the planted matching modeled described in [subsec:modelPlanted]Subsection <ref>subsec:modelPlanted. The pdf of given M is given by the expression 1/(2π)^|×|/2exp-1/2 - μ,-μ. Then, the log-likelihood is a constant factor away from -1/2 - μ,-μ = μ,-1/2||||_F^2 -μ^2/2||M||_F^2 where ||.||_F denotes the Frobenius norm. ||||_F^2 does not depend on and ||M||_F^2 is equal to the size of M. Then, maximizing over mappings with fixed size, that maximizes μ, also maximizes the likelihood of given M=m. Alternatively, we can optimize over _G, where _G = μ -μ^2/2 since _G, is a constant factor of |×|μ^2/2 away from μ,. §.§ Algorithms Maximum likelihood estimation For database alignment, , is a constant factor away from the log-likelihood of mapping m, as shown in [subsec:algo1]Subsection <ref>subsec:algo1. For planted mtaching, given _G μ -μ^2/2, _G, is a constant factor away from the log-likelihood of mapping m, as shown in [subsec:algo11]Subsection <ref>subsec:algo11. So, optimizing , or _G, over mapping matrices gives us the maximum likelihood estimate for the two problems. This is an instance of the linear assignment problem, and therefore can be solved by the Hungarian algorithm in polynomial time (<cit.>). 
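For concreteness, a minimal sketch of this step for the planted-matching version of the problem is given below; it uses SciPy's rectangular linear-assignment solver in place of an explicit Hungarian implementation, and the set sizes and the mean shift μ are illustrative placeholders rather than values from the text.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Minimal sketch for planted matching: G_W = mu*W - mu^2/2 and the ML matching is the
# maximum-weight assignment.  Sizes and mu are placeholders (assumptions).
rng = np.random.default_rng(1)
n, extra, mu = 200, 50, 5.0                     # |U| = n, |V| = n + extra
true_match = rng.permutation(n + extra)[:n]     # hidden injective map U -> V

W = rng.standard_normal((n, n + extra))
W[np.arange(n), true_match] += mu               # matched edges have mean mu

G = mu * W - 0.5 * mu**2                        # pairwise log-likelihood-ratio scores
rows, cols = linear_sum_assignment(G, maximize=True)   # Hungarian-type solver
print("ML assignment errors:", np.count_nonzero(cols != true_match))

# The relaxations discussed below act on the same score matrix:
row_estimate = np.argmax(G, axis=1)                     # maximum row estimation
tau = np.log(n * (n + extra) / n)                       # threshold log(|U||V|/|M|)
pairs_accepted = np.argwhere(G >= tau)                  # threshold testing

The same solver applies verbatim to database alignment once the information density matrix G has been computed from the features.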
Alternatively, this can be expressed as a linear problem as given in [subsec:model-algo]Subsection <ref>subsec:model-algo: maximize-τ, (a) ∑_u∈ m_u,v≤ 1, ∀ v∈ (b) ∑_v∈ m_u,v = 1, ∀ u∈ (c) m_u,v∈ [0,1], ∀ (u,v)∈× The value of τ∈ is irrelevant under constraint (b) in finding the maximizer : Given constraint (b), has fixed sum of entries, and therefore -τ, = ,-τ∑ m_u,v = , - τ||, so the objective function is shifted by a constant term that depends on τ but not on . Let ∈^(×) denote the vector representation of the matrix ∈^×. Then, the constraint matrix of the linear program above can be written as _ ≤ _ = ≤ ≥ where _∈{0,1}^×(×) and _∈{0,1}^×(×) are matrices with exactly one non-zero entry in each row. Then __ is the incidence matrix of a bipartite graph with and the two bipartite sets and (∪) the edge set. (Specifically, vertex w is incident to edge (u,v) if and only if w=u or w=v.) It is well known that the incidence matrix of a bipartite graph is totally unimodular. Furthermore, is totally unimodular if is totally unimodular. It then follows that the constraint matrix for this linear program is totally unimodular, and therefore it is guaranteed to have an integral solution. (The existence of an integral solution is trivial since the linear program is equivalent to the linear assignment problem.) Maximum row estimation The objective function -τ, can be broken down into its row-wise sums ∑_u (-τ)_u,*^⊤_u,* where (.)_u,* denotes the row of the matrix corresponding to user u∈. Then, removing (a), which is the only constraint that takes into account multiple rows at once, breaks down the optimization problem into the sum of row-wise optimization problems, where each row of can be optimized independently. That is, alignment is performed independently over each row. This gives us maximum row estimation. Given any u, the algorithm looks at the log likelihood scores of mappings (u,v) for each v∈ and picks v that has the highest likelihood. Users in may be mapped to multiple users if they happen to be the best match for multiple users in . The mapping of each user u∈ under this relaxation would be the maximum likelihood estimate for u if we were blind to the existence of other users in . Threshold testing The objective function -τ, can be broken down into entry-wise sums ∑_u,v (G_u,v-τ)m_u,v. Removing conditions (a) and (b) breaks down any dependence between entries of allows us to optimize all entries in the matrix independently from each other. Then we are left with an algorithm that independently considers each pair of users and makes a decision on whether they are true pairs or not. Specifically, (u,v) is estimated to be a true match if and only if G_u,v-τ is positive. Since G_u,v is defined to be the log-likelihood ratio between hypotheses uMv vs. uMv, the decision rule G_u,v-τ≥ 0 is equivalent to the likelihood-ratio test at some significance level determined by the log threshold τ. We refer to this relaxation as threshold testing. §.§ Computational complexity Let d denote the maximum of the number of features per user in the two database, n_M = || the size of the true mapping, and n_,n_ denote the sizes of the two user sets ,. Summary Let d ≤(n_M). The computational complexity of the algorithms are given as follows: * Maximum likelihood estimation: n_M· n_· n_ for entire set of users × * Maximum row estimation: d· n_· n_ for entire set of users ×, Maximum row estimation: d^2· n_ for a given user u∈ against entire set . 
* Threshold testing: d· n_· n_ for entire set of users ×, Threshold testing: d^2 for a given pair of users (u,v)∈×. Computing through the canonical form Identifying the affine transformations described in [sec:canonical]Appendix <ref>sec:canonical to transform features into canonical form takes a sequence of two Cholesky decompositions (_a = _a^⊤_a, _b=_b^⊤_b), two matrix multiplications with inverted triangular matrices (_a^-1_ab(_b^⊤)^-1), one singular value decomposition (_a^-1_ab(_b^⊤)^-1=^⊤) and two more matrix multiplications with inverted triangular matrices (^⊤_a^-1 and ^⊤_b^-1). This can be done in (d^3)-time. Performing the affine transformation to transform features into canonical form as described in consists of one vector addition and one matrix-vector multiplication ((^⊤_a^-1)(-_a) or (^⊤_b^-1)(-_b)). Then, transforming a single feature vector takes (d^2)-time and transforming the entire database takes (d^2· n_) and (d^2· n_) for and respectively. Given databases in canonical form, a single entry in can be computed in (d). This follows from the fact that, in canonical form, there is a one-to-one correspondence between feature entries from the two databases and therefore logf_XY((u),(v))/f_X((u))f_Y((v)) = ∑_i logf_X_iY_i(A_i(u),B_i(v))/f_X_i(A_i(u))f_Y_i(B_i(v)), where the summation is over indices i∈. Then it takes (d· n_· n_) to compute the entire matrix based on features in canonical form. Therefore, computing values of from the databases has complexity * d^3+d^2·(n_+n_)+d· n_· n_ for the entire matrix , * d^3+d^2· n_ for a row of and * d^3 for a single entry of . Computing without going through the canonical form Given in raw form (i.e. not necessarily canonical form), finding the likelihood requires calculating (), (_a) and (_b), which takes (d^3)-time, as well as [^⊤,^⊤] ^-1[^⊤,^⊤]^⊤, ^⊤_a^-1 and ^⊤_b^-1 for each feature pair, which takes (d^2) time per feature pair. Then it takes (d^3+d^2· n_· n_) to compute the entire matrix based on features in raw form. This is less efficient than doing the calculation through the canonical form which takes d^3+d^2·(n_+n_) to obtain features in canonical form and (d· n_· n_) to get based on features in canonical form. Maximum likelihood estimation The (unbalanced) linear assignment problem which can be solved by the Hungarian algorithm in (n_M· n_· n_) (<cit.>). Then, the total complexity of maximum likelihood estimation for database alignment, including the computation of , is d^3+d^2·(n_+n_)+d· n_· n_ + n_M· n_· n_. The complexity for planted matching is (n_M· n_· n_). Maximum row estimation For database alignment, given the corresponding row of , identifying the match of a user in takes (n_)-time. Then the total complexity to align a single user u∈, including the complexity of calculating row ()_u,*, is (d^3+d^2 n_). Consequently, aligning the entire set takes d^3+d^2·(n_+n_)+d· n_· n_-time. For planted matching, identifying the match of a user in takes (n_)-time, while aligning the entire set takes (n_· n_)-time. Threshold testing algorithm For database alignment, given the corresponding entry in , performing threshold testing over a pair of users takes (1)-time. Then the total complexity to test a single user pair (u,v), including the complexity of calculating G_u,v, is (d^3). Then performing the test over all pairs would take d^3+d^2·(n_+n_)+d· n_· n_-time. For planted matching, performing threshold testing over a pair of users takes (1)-time, while performing the test over all pairs would take (n_· n_)-time. 
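The pipeline just described can be summarized in a few lines of linear algebra. The sketch below assumes the statistics Σ_a, Σ_ab, Σ_b and the means are known; the statistics and the two databases are random placeholders of dimension d, so the sketch only illustrates the sequence of operations (two Cholesky factorizations, one SVD of the whitened cross-covariance, the affine maps into canonical coordinates, and an O(d) per-entry evaluation of the information density).

import numpy as np

# Sketch of the canonical-form pipeline; statistics and databases are random placeholders.
rng = np.random.default_rng(2)
d, n = 4, 100
Ra = rng.standard_normal((d, d)); Sigma_a = Ra @ Ra.T + d * np.eye(d)
Rb = rng.standard_normal((d, d)); Sigma_b = Rb @ Rb.T + d * np.eye(d)
Sigma_ab = 0.3 * np.eye(d)                       # placeholder cross-covariance
mu_a = np.zeros(d); mu_b = np.zeros(d)
A = rng.standard_normal((n, d)); B = rng.standard_normal((n, d))   # stand-in databases

La = np.linalg.cholesky(Sigma_a)                 # Sigma_a = La La^T
Lb = np.linalg.cholesky(Sigma_b)                 # Sigma_b = Lb Lb^T
U, rho, Vt = np.linalg.svd(np.linalg.solve(La, Sigma_ab) @ np.linalg.inv(Lb).T)
X = (A - mu_a) @ np.linalg.inv(La).T @ U         # first database in canonical coordinates
Y = (B - mu_b) @ np.linalg.inv(Lb).T @ Vt.T      # second database in canonical coordinates

def info_density(x, y, rho):
    # coordinate-wise bivariate-normal log-likelihood ratio with correlations rho_i
    return np.sum(-0.5 * np.log1p(-rho**2)
                  - (rho**2 * x**2 + rho**2 * y**2 - 2.0 * rho * x * y)
                  / (2.0 * (1.0 - rho**2)))

G = np.array([[info_density(X[u], Y[v], rho) for v in range(n)] for u in range(n)])
print("information density matrix:", G.shape)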
§ CANONICAL FORM OF THE CORRELATION STATISTICS For simplicity of computation as well as analysis, we would like the correlated feature indices in (u)∈^_a and (v)∈^_b to have a one-to-one correspondance. Specifically, we would like the index sets _a and _b to be identical, and for features across databases to be correlated only if they have the same index. So, given true pair uMv, the features A_i(u) and B_j(v) are correlated if and only if i=j. Given the correlation statistics and , it is possible to perform affine transformations on features to guarantee this type of correspondance between correlated feature vectors. We say a pair of databases with statistics of this desired form is in canonical form. §.§ Existence and construction of canonical transformation The generality of the canonical form is stated in the following lemma while the construction of the transformation that gives features in canonical form is described in the proof of the lemma. Let taking values in ^_a and taking values in ^_b be a pair of correlated Gaussian vectors. If the mean and joint variance is known, one can define a pair of affine transformations _a:^_a→^ and _b:^_b→^ for some set such that the mutual information between _a() and _b() equals the mutual information between and and _a()∈^ and _b()∈^ are a pair of correlated databases with mean and joint variance ()() for some ∈(-1,1)^. Let = _a_b and = _a_ab_ab^⊤_b denote the mean and variance of . If _a is not full rank, then there is some subset of _a that can be discarded without loss of information. That is, we can throw away some indices of to get a shorter vector which allows us to reconstruct the original vector . This follows from the fact that a multivariate Gaussian vector with covariance can be written as a linear combination of rank() i.i.d. Gaussian normal random variables. Then, without loss of generality, assume _a and _b are full rank. _a and _b are covariance matrices, therefore they are positive semi-definite. It then follows that these matrices have Cholesky decompositions: _a = _a_a^⊤ and _b = _b_b^⊤ where _a and _b are lower triangular matrices with non-negative diagonal entries. By the assumption that _a and _b are full rank, it follows that the Cholesky decomposition gives triangular matrix with strictly positive entries. Then _a and _b are invertible. Let d_a |_a| and d_b |_b|. Consider the singular value decomposition of _a^-1_ab_b^⊤^-1: ∈^_a×{1,2,⋯,d_a} and ∈^_b×{1,2,⋯,d_b} orthonormal matrices and ∈^d_a× d_b a diagonal matrix such that ^⊤ = _a^-1_ab_b^⊤^-1. Let _a:^_a→^d_a and _b:^_b→^d_b such that _a() = ^⊤_a^-1 - _a _b() = ^⊤_b^-1 - _b. Note that both these transformations are invertible. We can verify that ^⊤_a^-1^⊤_b^-1_a_ab_ab^⊤_b^⊤_a^-1^⊤_b^-1^⊤ = . Then (,)∼(,) _a(),_b()∼,^⊤. By the invertibility of the transformations, there has been no loss of mutual information. If has no empty rows or columns, then d_a=d_b must hold and we are done. If the i-th row of is all-zero, then the i-th entry of _a() is completely independent from or _b(). It then follows that we can drop this entry without any loss of mutual information. The same argument applies for columns of in relation to entries of _b(). Let d denote the number of non-zero entries in the diagonal matrix , be some arbitrary set of size d. Let _a∈{0,1}^×{1,2,⋯,d_a} such that rows of _a are the d standard basis vectors corresponding to the d non-empty rows of . 
Left multiplying a vector by _a gives us a shorter vector by `throwing away' all entries corresponding to empty rows of . Let _b∈{0,1}^×{1,2,⋯,d_b} be a matrix of the same kind that `throws away' entries corresponding to empty columns of . Then _a _b^⊤ is a diagonal matrix with no zeros on the diagonal. We use ∈^ to denote the vector formed by the diagonal entries of _a _b^⊤ (i.e. the non-zero diagonal entries of .) Let _a:^_a→^ and _b:^_b→^ such that _a'() = _a^⊤_a^-1 - _a _b'() = _b^⊤_b^-1 - _b. It can be verified that (,)∼(,) _a(),_b()∼,()(). §.§ Low per-feature correlation [lemma:svd2rho]Lemma <ref>lemma:svd2rho shows the significance of ||_a^-1/2_ab_b^-1/2||_2, which is used to characterize [cond:highDimensional]Condition <ref>cond:highDimensional. Let = _a_ab_ab^⊤_b be the covariance matrix between pairs of correlated feature vectors and let and the ∈(-1,1)^ correlation vector that characterizes the correlation in canonical form. Then ||_a^-1/2_ab_b^-1/2||_2 = max_i∈ |ρ_i|, where ||·||_2 denotes the ℓ_2 operator norm, i.e. largest singular value. Let ,_a,_b,, be as defined in the proof of [lemma:canonicalVariance]Lemma <ref>lemma:canonicalVariance. Specifically, let _a and _b be triangular matrices such that _a_a^⊤=_a and _b_b^⊤=_b, and diagonal and , orthonormal matrices such that ^⊤ = _a^-1_ab(_b^⊤)^-1. Define _0 _a^-1/2_ab_b^-1/2, _1 _a^-1/2_ab(_b^⊤)^-1 and _2 ^⊤_a^-1_ab(_b^⊤)^-1. First we show that _2 has the same singular values as _0: The singular values of some matrix can be found by finding the eigenvalues of ^⊤, or those of ^⊤. * Since _b_b^⊤ = _b and ^⊤ =, it follows that _0_0^⊤ = _1_1^⊤. Then, _0 and _1 must have the same singular values. * Since _a_a^⊤ = _a and ^⊤ =, it follows that _1^⊤_1 = _2^⊤_2. Then, _1 and _2 must have the same singular values. Then _0 and _2 have the same singular values. In the proof of [lemma:canonicalVariance]Lemma <ref>lemma:canonicalVariance, we are given that ^⊤ = _a^-1_ab(_b^⊤)^-1. Then, by the orthonormality of and , we have = ^⊤_a^-1_ab(_b^⊤)^-1 = _2. Since = is a diagonal matrix, its singular values are simply its diagonal entries in absolute value. Then the largest singular value is max |ρ_i| § ACHIEVABILITY PROOFS The proofs for threshold testing and maximum row estimation state inequalities using the variable νlog||/log n. These inequalities directly translate to the statements in the main results using the variable α by the fact that ν=1 if 1 if ||=n and ν=max1, log||-n/log n if ||>n. §.§ Threshold testing Quadratic boundary Let τ the threshold such that |τ|≤ζ. By [lemma:typicality]Lemma <ref>lemma:typicality ([lemma:typicalityPlanted]Lemma <ref>lemma:typicalityPlanted), the probability of a true pair failing the test is at most exp-(ζ -τ)^2/4ζ and by [lemma:FP]Lemma <ref>lemma:FP ([lemma:FPPlanted]Lemma <ref>lemma:FPPlanted) the probability of a false pair passing the test is at most exp-(ζ +τ)^2/4ζ. The number of true pairs and false pairs are |M| and ||·||-|M| respectively. We bound the latter by ||·||. So the expected number of false negatives is bounded by |M|exp-(ζ -τ)^2/4ζ and expected number of false positives is bounded by ||·||exp-(ζ +τ)^2/4ζ. The log of the ratio of these two bounds is log|M|exp-(ζ -τ)^2/4ζ/||·||exp-(ζ +τ)^2/4ζ = τ - log||·||/|M|. The choice of τ = log||·||/|M| makes the log of the ratio zero. Hence the bounds for the expected numbers of false positives and negatives are equal. Then the bound on the number of errors is twice the bound on the number of false negatives. 
Let n |M|, ||=n and ||=n^ν for some ν≥ 1. Then, τ = log||·||/|M| = νlog n. Let xζ/log n. Then the number of false negatives (which is half the total error bound) is given by |M|exp-(ζ -τ)^2/4ζ = n^1-(x-ν)^2/4x. This expression is bounded by n^1-β if x ≥√(ν+β)+√(β)^2. This gives us the following inequalities that form part of the main results: * [thm:alpha]Theorem <ref>thm:alpha Almost-exact alignment is achieved if (<ref>) is satisfied for some β such that n^-β≤ o(1), which is equivalent to β≥ω(1/log n). Such β exists if ζ≥νlog n + ω√(log n). Exact alignment is achieved if (<ref>) is satisfied for some β such that n^1-β≤ o(1), which is equivalent to β-1 ≥ω(1/log n). Such β exists if ζ≥1+√(1+ν)^2log n + ω1. * [thm:alphabeta]Theorem <ref>thm:alphabeta The number of errors is bounded by 2n^1-β if ζ≥√(ν+β)+√(β)^2log n. * [thm:beta]Theorem <ref>thm:beta This is a special case of [thm:alphabeta]Theorem <ref>thm:alphabeta with ν=1. §.§ Maximum row estimation Consider users u∈ and v,v'∈ such that uMv. We want to bound the probability of the error event where u is falsely mapped to v'. Under maximum row estimation, this corresponds to the event G_u,v≤ G_u,v'. Without loss of generality, assume the set consists of the single user u. Linear boundary - relevant for large β and small α By [lemma:misalignment]Lemma <ref>lemma:misalignment ([lemma:misalignmentPlanted]Lemma <ref>lemma:misalignmentPlanted), the probability of {G_u,v≤ G_u,v'} is upper bounded by exp-ζ/2. There are no more than|| vertices v'∈∖{v} that to which u can be falsely mapped. By the union bound, the probability that any of these events happens is upper bounded by ||exp-ζ/2. Then, the expected number of misalignments over all || of rows is upper bounded by ||·||exp-ζ/2. Let n |M|, ||=n, ||=n^ν for some ν≥ 1, and xζ/log n. Then, the bound on the expected number of misalignments is given by n^1+ν-x/2. This expression is bounded by n^1-β if x ≥ 2(ν+β). This gives us the following inequalities that form part of the main results: * [thm:alpha]Theorem <ref>thm:alpha Exact alignment is achieved if (<ref>) is satisfied for some β such that n^1-β≤ o(1), which is equivalent to β-1 ≥ω(1/log n). Such β exists if ζ≥ 2(1+ν)log n + ω(1). * [thm:alphabeta]Theorem <ref>thm:alphabeta The number of errors is bounded by 2n^1-β if ζ≥ 2(ν+β)log n. * [thm:beta]Theorem <ref>thm:beta This is a special case of [thm:alphabeta]Theorem <ref>thm:alphabeta with ν=1. Quadratic boundary - relevant for small β and large α Let τ a threshold such that 0≤τ≤ζ. For the purpose of analysis, let us consider the alignment of the row corresponding to u a failure if either {G_u,v < τ} or {G_u,v≥τ and G_u,v≤ G_u,v'} for some v'∈∖{v}. In other words, we say the the algorithm has failed on the given row if either the true pair has score atypically low, in which case many false pairs will beat the true pair, or if a false pair beats the true pair despite the true pair having sufficiently high score. These two events fully cover the actual error event {G_u,v≤ G_u,v'}. By [lemma:typicality]Lemma <ref>lemma:typicality ([lemma:typicalityPlanted]Lemma <ref>lemma:typicalityPlanted), the probability of {G_u,v≤τ} is bounded by exp-(ζ -τ)^2/4ζ. For database alignment, by [lemma:condMisalignment]Lemma <ref>lemma:condMisalignment, the probability of {G_u,v≥τ and G_u,v≤ G_u,v'} is bounded by exp-ζ ^2+τ^2/2ζ+6ρ_max^2τ. (For planted matching, by [lemma:condMisalignmentPlanted]Lemma <ref>lemma:condMisalignmentPlanted, our bound is exp-ζ ^2+τ^2/2ζ and there is additional term.) 
There are no more than || vertices v'∈∖{v} that to which u can be falsely mapped. Then, by the union bound, the probability of {G_u,v≥τ and G_u,v≤ G_u,v'} for some v' is bounded by ||exp-ζ ^2+τ^2/2ζ+6ρ_max^2τ. The log of the ratio of these two bounds is logexp-(ζ -τ)^2/4ζ/||exp-ζ ^2+τ^2/2ζ+6ρ_max^2τ = (ζ +τ)^2/4ζ - log|| - 6ρ_max^2τ for database alignment. (For planted matching, we drop the -6ρ_max^2τ term.) The choice of τ^* = 2√(ζlog||)-ζ makes the log of the ratio -6ρ_max^2τ^* for database alignment. Then the bound on the failure probability is no more than 1+exp6ρ_max^2τ^* times the atypicality bound. (For planted matching, the log of the ratio is zero, so the bounds on the two types of error are equal. Therefore, the bound on the total error probability is twice that of the atypicality bound.) Let n |M|, ||=n, ||=n^ν for some ν≥ 1, and xζ/log n. Then τ^* = 2√(ν· x)-xlog n = ν -(√(x)-√(ν))^2log n. There are n rows. Then, the bound on the expected number of atypicality errors is given by nexp-(ζ -τ)^2/4ζ = n^1-√(ν)-√(x)^2. This expression is bounded by n^1-β if x ≥√(ν)+√(β)^2. For such x, we get τ^*/log n = 2√(ν x)-x ≤ν-β. So the total number of errors is n^1-β·1+n^6ρ_max^2(ν-β). By [lemma:svd2rho]Lemma <ref>lemma:svd2rho, under [cond:highDimensional]Condition <ref>cond:highDimensional, ρ_max^2 ≤ o(1) so the error bound becomes n^1-β1+n^o(1) = n^1-β+o(1) for any finite value of β and ν = log ||/log n. This gives us the following inequalities that form part of the main results: * [thm:alpha]Theorem <ref>thm:alpha Almost-exact alignment is achieved if (<ref>) is satisfied for some β such that n^-β≤ o(1), which is equivalent to β≥ω(1/log n). Such β exists if ζ≥νlog n + ω√(log n). Exact alignment is achieved if (<ref>) is satisfied for some β such that n^1-β≤ o(1), which is equivalent to β-1 ≥ω(1/log n). Such β exists if ζ≥1+√(ν)^2log n + ω1. * [thm:alphabeta]Theorem <ref>thm:alphabeta The number of errors is bounded by 2n^1-β if β≤ν and ζ≥√(ν)+√(β)^2log n. * [thm:beta]Theorem <ref>thm:beta This is a special case of [thm:alphabeta]Theorem <ref>thm:alphabeta with ν=1. §.§ Maximum likelihood estimation Linear boundary - relevant for large β and small α Consider some elementary misalignment of size δ. (See [subsec:elementary]Subsection <ref>subsec:elementary for elementary misalignments.) By [lemma:misalignment]Lemma <ref>lemma:misalignment ([lemma:misalignmentPlanted]Lemma <ref>lemma:misalignmentPlanted), the probability of the given misalignment is at most exp-δζ/2. Let ||=n and ||=n+s, where n|M| is the size of the matching. By [lemma:countElementary]Lemma <ref>lemma:countElementary, there are at most n^δ/δ different type-I misalignments and at most sn^δ different type-II misalignments of size δ. Furthermore, the number of type-I misalignments is 0 if δ=1. Define ε = explog n - ζ/2. Then, the expected number of type-I and type-II misalignments of size δ are bounded by ε^δ/δ and sε^δ respectively. The contribution of a misalignment of size δ is equal to δ. The total total contribution of all elementary misalignments gives us the expected number of errors. This expectation is bounded by ∑_δ=2^n ε^δ + ∑_δ=1^n sδε^δ ≤ε^2/1-ε + sε/(1-ε)^2 Let us write xζ/log n and αlog s/log n if s≥ 1. Then the expression above can be written as n^2-x/1-n^1-x/2 + n^α+1-x/2/1-n^1-x/2^2. If x≥ 2+2log√(5)-1/2/log n and s=0, then this expression is bounded by 1. If x≥ 2+2log3+√(5)/2/log n and α> 0, then the expression is bounded by n^α(1+o(1)). 
Furthermore, given some β>1+Ω(1/log n), the expected number of errors is bounded by n^1-β if s = 0 and x ≥ 1+β bounded by n^1-β(1+o(1)) if s≥ 1 and x ≥ 2(α+β) * [thm:alpha]Theorem <ref>thm:alpha Exact alignment is achieved if (<ref>) or (<ref>) is satisfied for some β such that n^1-β≤ o(1), which is equivalent to β-1 ≥ω(1/log n). Such β exists if ζ≥ (1+α)log n + ω(1). * [thm:beta]Theorem <ref>thm:beta By (<ref>), the number of errors is bounded by n^1-β if ζ≥ (1+β)log n. * [thm:alphabeta]Theorem <ref>thm:alphabeta By (<ref>), the number of errors is bounded by n^1-β(1+o(1)) if ζ≥ 2(α+β)log n. Elliptic boundary - relevant for smallest β and small α Let τ a threshold such that 0≤τ≤ζ. Consider a specific misalignment of size δ. Such a misalignment occurs if and only if one of the following is true: * Atypicality event: the average information density scores of the δ true pairs is below τ, or * Misalignment-despite-Typicality event: the set of δ true pairs have average score greater than τ but nevertheless the set of δ false pairs have greater score than the corresponding set of true pairs. By [lemma:typicality]Lemma <ref>lemma:typicality ([lemma:typicalityPlanted]Lemma <ref>lemma:typicalityPlanted), the probability of the true pairs having average score below the threshold is bounded by exp-δ(ζ -τ)^2/4ζ. For database alignment, by [lemma:condMisalignment]Lemma <ref>lemma:condMisalignment, the probability of that the false pairs have score greater than the true pair despite the true pairs having high score is bounded by exp-δ·ζ ^2+τ^2/2ζ+6ρ^2_maxδτ. (For planted matching, by [lemma:condMisalignmentPlanted]Lemma <ref>lemma:condMisalignmentPlanted, our bound is exp-δ·ζ ^2+τ^2/2ζ and there is no extra term.) Let ||=n and ||=n+s, where n|M| is the size of the matching. Let αlog s/log n and let β∈(0,1/2) some number less than 1-α. Define δ^* n^1-β. Since β≤ 1-α, we have δ^* = n^1-β≥ n^α = s. Consider some δ≥δ^*. By [lemma:countMisalignment]Lemma <ref>lemma:countMisalignment, there are no more than expδ1+log n + log2 different misalignment-despite-typicality events of size δ. Then, in log-expectation, the number of such events is no more than δ1+log n + log2-ζ ^2+τ^2/2ζ+6ρ^2_maxτ. If τ = √(2ζlog n + logη +1+log2-ζ ^2), the expected misalignment-despite-typicality events of size δ is bounded by exp-δ (η-6ρ^2_maxτ). Define εexp-η+6ρ^2_maxτ. ε∈ (0,1) if η > 6ρ^2_max I_XY (which is greater than 6ρ^2_maxτ). Then the bound on the number of such events of size at least δ^* is bounded by ∑_δ≥δ^*ε^δ≤ε^δ^*/1-ε. By [lemma:svd2rho]Lemma <ref>lemma:svd2rho, under [cond:highDimensional]Condition <ref>cond:highDimensional, ρ_max^2 ≤ o(1). Then, there exists some choice for η that is o(log n) and satisfies η > 6ρ^2_max I_XY. If β≤ 1/2, then δ^* > n^1-β≥√(n). If, furthermore, η - 6ρ^2_max I_XY≥Ω(1), then ε^δ^*/1-ε = exp-Ω(√(n))/1-exp-Ω(1)≤ e^-Ω(√(n)). A misalignment can result in no more than n errors. Then, the expected number of errors caused by misalignment-despite-typicality errors of size at least δ^* is ne^-Ω(√(n))≤ o(1). Next we confirm the number of atypicality errors is small: There are no more than nδ different ways to get an atypicality event. This is bounded by expδ + δlogn/δ, which can further be bounded by expδ + δlogn/δ^*. Then, in expectation, there are no more than expδ + δlogn/δ^*-δ(ζ -τ)^2/4ζ atypicality events of size δ. 
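(A quick, purely illustrative check of the count just used: the number of ways to choose the δ misaligned true pairs is the binomial coefficient, and the bound above is the standard estimate C(n,δ) ≤ (en/δ)^δ = exp(δ + δ log(n/δ)). The test values below are arbitrary, and the comparison is done in log scale to avoid overflow.)

```python
# Illustrative check (not part of the argument) of C(n, delta) <= exp(delta + delta*log(n/delta)).
from math import lgamma, log

def log_comb(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

n = 10 ** 3
for delta in (1, 5, 50, 500):
    log_bound = delta + delta * log(n / delta)
    assert log_comb(n, delta) <= log_bound
    print(delta, round(log_comb(n, delta), 1), round(log_bound, 1))
```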
The log of the ratio of the bound on expected number of atypicality errors versus the expected number of misalignment-despite-typicality errors is equal to δ(ζ +τ)^2/4ζ-logδ^*-2-6ρ_max^2τ. If τ≤ 2√(ζlogδ^* + log2)-ζ , then the log-ratio is at most -6ρ_max^2τ<0 and the bound on the expected number of atypicality events of size δ is bounded by that of misalignment-despite-typicality events of size δ. We identify the smallest δ^* such that our choice of τ in (<ref>) satisfies the inequality in (<ref>): Let us write xζ/log n. By (<ref>), for the appropriate choice of η on the order of o(log n) we have , τ/log n = √(2x1+o(1)-x). For x = 1+2√(β(1-β)), such choice of τ gives us τ/log n≤ |1-2β|+o(1). There exists some ε_1,ε_2≤ o(1) such that for any β∈ (ε_1,1/2-ε_2), the choices of τ≤ |1-2β|+o(1) and x = 1+2√(β(1-β)) satisfy (<ref>). By (<ref>), we require τ/log n≤ 2√(x1-β+ε+log 2/log n)-x. Picking x=1+2√(β(1-β)), there exists some ε≤ o(1) that satisfies τ/log n≤ |1-2β|+o(1)≤ 2√(x1-β+ε+log 2/log n)-x. We have shown that errors from misalignments of size greater than δ^* is o(1): Those due to misalignment-despite-typicality type errors is o(1) and those due to atypicality type errors is bounded by that of misalignment-despite-typicality type errors, so also o(1). Finally, the expected number of errors due to misalignments smaller than δ^* is at most δ^*. (This follows from the fact that only one of the misalignments can occur.) This gives us the following inequality that form part of the main results: * [thm:alpha]Theorem <ref>thm:alpha Almost-exact alignment is achieved if x≥ 1+2√(β(1-β)) is satisfied for some β such that n^-β≤ o(1), which is equivalent to β≥ω(1/log n). Such β exists if ζ≥νlog n + ω√(log n). * [thm:beta]Theorem <ref>thm:beta and [thm:alphabeta]Theorem <ref>thm:alphabeta The number of errors is bounded by n^1-β+o(1) if β∈(0,1/2), β≤ 1-α and ζ≥1+2√(β(1-β))log n. Quadratic boundary - relevant for small β and large α By the previous proof, we know that, for β̃ = 1-log s/log n, the expected number of errors due to misalignments of size at least s is o(1) if ζ≥1+√(β̃(1-β̃))log n. Here we show that, given a stronger bound on ζ, we can also bound the number of errors due to misalignments of size between δ^* and s for some δ^* = n^1-β+η≤ s where β∈[0,1/2] another constant strictly less than β̃ and η some non-negative function of n which is to be determined later. Let τ a threshold such that 0≤τ≤ζ. Consider a specific misalignment of size δ. Once again, we cover the misalignment event using two auxiliary events: * Atypicality event: the average information density scores of the δ true pairs is below τ. By [lemma:typicality]Lemma <ref>lemma:typicality ([lemma:typicalityPlanted]Lemma <ref>lemma:typicalityPlanted), the probability of this event for size δ is bounded by exp-δ(ζ -τ)^2/4ζ. * Misalignment-despite-Typicality event: the set of δ true pairs have average score greater than τ but nevertheless the set of δ false pairs have greater score than the corresponding set of true pairs. For database alignment, by [lemma:condMisalignment]Lemma <ref>lemma:condMisalignment, the probability of that the false pairs have score greater than the true pair despite the true pairs having high score is bounded by exp-δ·ζ ^2+τ^2/2ζ+6ρ^2_maxδτ. (For planted matching, by [lemma:condMisalignmentPlanted]Lemma <ref>lemma:condMisalignmentPlanted, our bound is exp-δ·ζ ^2+τ^2/2ζ and there is no extra term.) Let ||=n and ||=n+s, where n|M| is the size of the matching. 
The number of atypicality events of size δ is bounded by expδ + δlogn/δ. By [lemma:countMisalignment]Lemma <ref>lemma:countMisalignment, the number of misalignment-despite-typicality events of size δ is bounded by expδ1+logns/δ + log2. The log of the ratio of the bounds on the expected number of atypicality events and the expected number of misalignment-despite-typicality events is equal to δ(ζ +τ)^2/4ζ-log s - log 2 - 6ρ^2_maxτ. If τ = 2√(ζlog s + log2)-ζ , the log-ratio is -6ρ^2_maxτ<0 and the bound on the expected number of atypicality events of size δ is bounded by that of misalignment-despite-typicality events of size δ. Define x=ζ/log n and α = log s/log n. Then, given the value of τ in (<ref>), the expected number of misalignments-despite-typicality errors of size δ≥δ^* bounded by expδ(1+log 2) + δlogns/δ-δ·ζ ^2+τ^2/2ζ+6ρ^2_maxτ which is further bounded by expδ(1+log 2) + δlogns/δ^*-δ·ζ ^2+τ^2/2ζ+6ρ^2_maxτ = n^δβ-η - (√(x)-√(α))^2e^δ(1+log 2 + 6ρ^2_maxτ). Pick η to be 1+log 2 + 6ρ^2_maxτ/log n. Then the expression in the previous bound simplifies at n^δβ-(√(x)-√(α))^2. Define ε = n^δβ-(√(x)-√(α))^2. ε < 1 if x > √(α)+√(β)^2 and α≥β. The contribution of each event to the size of the misalignment is at most δ. (This is because at most one misalignment can occur at a time. So the contribution of a misalignment-despite-typicality event is either 0 or δ.) Then, the total number of errors due to misalignment-despite-typicality events is bounded as ∑_δ=0^∞δε^δ = ε/(1-ε)^2. The bound above is o(1) if ε≤ o(1). In that case, the number of errors due to misalignments of size δ∈n^1-β+η,n^1-β̃ as well as those of size δ∈n^1-β̃,n is o(1). The expected number of errors due to misalignments smaller than n^1-β+η is at most n^1-β+η. (This follows from the fact that only one of the misalignments can occur.) By [lemma:svd2rho]Lemma <ref>lemma:svd2rho, under [cond:highDimensional]Condition <ref>cond:highDimensional, ρ_max^2 ≤ o(1). Then, η = 1+log 2 + 6ρ^2_maxτ/log n≤ o(1). ε≤ o(1) is equivalent to x > (√(α)+√(β+ω(1/log n)))^2, which we can rewrite as x > (√(α)+√(β))^2 + 1+√(α/β)ω(1/log n) This gives us the following inequality that form part of the main results: * [thm:alpha]Theorem <ref>thm:alpha Almost-exact alignment is achieved if (<ref>) is satisfied for some β such that n^-β≤ o(1), which is equivalent to β≥ω(1/log n). Such β exists if ζ≥νlog n + ω√(log n). Exact alignment is achieved if (<ref>) is satisfied for some β such that n^1-β≤ o(1), which is equivalent to β-1 ≥ω(1/log n). Such β exists if α≥ 1 and ζ≥ (1+√(α))^2log n + ω(1). * [thm:alphabeta]Theorem <ref>thm:alphabeta The number of errors is bounded by n^1-β+o(1) if 1-α≤β≤α and ζ≥√(α)+√(β)^2log n. * [thm:beta]Theorem <ref>thm:beta This is a special case of [thm:alphabeta]Theorem <ref>thm:alphabeta with α=1. § CONVERSE PROOFS FOR PLANTED MATCHING Our achievability statements take the form of upper bounds on E[d(M̂,M] in terms of ζ. If E[d(M̂,M] ≤ n^1-β, then by Markov's inequality, these can be converted into upper bounds on [d(M̂,M) ≥n^1-β/ε] ≤ε. Note that n^1-β/ε = exp((1-β)log n + log1/ε), so ε appears only in a lower order term. For 0 ≤β < 1, we prove converse statements of the form [d(M̂,M) ≤ n^1-β] ≤ o(1). These imply bounds E[d(M̂,M)] ≥(1-o(1))n^1-β. For all β > 1, d(M̂,M) ≤ n^1-β is equivalent to M̂=M. Thus E[d(M̂,M)] ≥ 1 - [M̂=M]. If [M̂=M] ≤exp(-n^1-β) for β >1, then again we have E[d(M̂,M)] ≥ (1-o(1))n^1-β. Technical Lemmas: Throughout this section, let || = n+s and s = n^α. Recall that in the planted matching model, ζ = μ^2/2. 
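For concreteness, the following sketch (ours and purely illustrative; the sizes and the value of μ are hypothetical) instantiates the planted matching model these converse bounds refer to: true pairs carry N(μ,1) weights, false pairs N(0,1), and the rescaled scores μW − μ^2/2 have mean ±ζ = ±μ^2/2 and variance μ^2 = 2ζ in every entry.

```python
# Hedged, illustrative instantiation of the planted matching model (values are ours):
# W[u, v] ~ N(mu, 1) on the planted pairs, N(0, 1) elsewhere; mu*W - mu^2/2 then has
# true-pair mean +zeta, false-pair mean -zeta, and variance mu^2 = 2*zeta.
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 500, 0.5
s = int(round(n ** alpha))                      # second user set has n + s members, s = n^alpha
mu = np.sqrt(10 * np.log(n))                    # zeta = mu^2/2 = 5*log n, well inside the exact-alignment regime

W = rng.normal(size=(n, n + s))                 # false pairs: N(0, 1)
true_cols = rng.permutation(n + s)[:n]          # planted matching: row i <-> column true_cols[i]
W[np.arange(n), true_cols] += mu                # true pairs: N(mu, 1)

zeta = mu ** 2 / 2
W_G = mu * W - zeta                             # scaled/shifted scores
true_scores = W_G[np.arange(n), true_cols]
mask = np.ones_like(W_G, dtype=bool)
mask[np.arange(n), true_cols] = False
false_scores = W_G[mask]
print(true_scores.mean(), zeta)                 # ~ +zeta
print(false_scores.mean(), -zeta)               # ~ -zeta
print(true_scores.var(), false_scores.var(), mu ** 2)   # both ~ mu^2 = 2*zeta

row_hat = W_G.argmax(axis=1)                    # maximum row estimation
print(int((row_hat != true_cols).sum()))        # typically 0 errors at this zeta
```

Nothing in the lemmas below depends on this simulation; it only fixes the model and the scaling ζ = μ^2/2 used throughout this section.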
Let 0 ≤β≤ 1 and α≤Ω(1). If for any estimator M̂ we have the bound loglog[M̂ = M]^-1≥ (1-β)log n + loglog n +ω(1), then for any estimator M̂' we have [d(M̂',M) ≤ n^1-β] ≤ o(1). Given an estimator M̂', define a second estimator M̂ as follows: select a matching uniformly from those within distance d of M̂' such that M̂ and the observation are conditionally independent given M̂'. Then [M̂ = M] ≥[d(M̂',M) ≤ n^1-β]/|{m : d(m,M̂') ≤ n^1-β}|. Combining this with (<ref>) and Lemma <ref>, we have log[d(M̂',M) ≤ n^1-β]^-1 ≥log[M̂' = M]^-1 - n^1-β(2 + max(1, α+β) log n) ≥ω(n^1-βlog n) - O(n^1-βlog n) ≥ω(n^1-βlog n) ≥ω(log n). For any estimator M̂ [M̂ = M] ≤inf_0 ≤γ≤ 1∑_k nks!/(k+s)! (1-Q((1-γ)μ))^n-k(1-Q(γμ))^k(k+s) e^k (2γ-1)μ^2/2 For any estimator M̂, [M̂ = M] ≤∫_ℝ^n × nmax_ms!/(n+s)!f(g-μ m) dg where f(g) = (2 π)^n^2/2exp(-1/2∑_i,j g_i,j^2), the density function of a n^2-dimensional standard Gaussian vector, and the maximization is over n × (n+s) matching matrices. Write |m| for ∑_i,j m_i,j, which is the number of ones in m and the number of pairs in the matching. Now we upper bound (<ref>) by ∫_ℝ^n × nmax_me^(n-|m|)θs!/(n+s)!f(g-μ m) dg where the maximization is now over all partial and full matching matrices. Let m' be a matching such that m' ≥ m and |m'| = |m|+1. Let (i,j) be the entry where m'_i,j = 1 and m_i,j = 0. Then f(x - μ m') ≥ f(x - μ m') when e^(n-|m|-1)θexp(-(x_i,j - μ)^2/2) ≥ e^(n-|m|)θexp(-x_i,j^2/2) -2θ - (x_i,j - μ)^2 ≥ -x_i,j^2 -2 θ + 2 μ x_i,j - μ^2 ≥ 0 x_i,j ≥θ/μ + μ/2 = μθ/μ^2+1/2 Let γ = θ/μ^2+1/2: the location of the boundary between the regions covered by m and m' as a fraction of the distance between the means. We will select γ and use θ = 2γ-1/2μ^2. A partial matching m of size |m| = n-k has n-k smaller neighboring matchings and k(k+s) larger neighbors. If m_i,j = 1, the gaussian centered at the neighboring matching with m'_i,j=0 has higher density in the region x_i,j≤γμ The measure of the density centered at m in that tail is Q((1-γ)μ). If m_i,j = 0 and there is a neighboring matching m' with m'_i,j = 1, the gaussian centered at m' has higher density in the region x_i,j≥γμ The measure in that tail is Q(γμ). These inequalities each involve one entry of the matrix, which are independent random variables. The measure not part of some tail is (1-Q((1-γ)μ))^n-k(1-Q(γμ))^k(k+s)e^k θs!/(n+s)!. There are nkn+sk+s(n-k)! = nk(n+s)!/(k+s)! partial matchings of size n-k. Thus (<ref>) is at most s!/(n+s)!∑_k nk(n+s)!/(k+s)! (1-Q((1-γ)μ))^n-k(1-Q(γμ))^k(k+s)e^k θ = ∑_k nks!/(k+s)! (1-Q((1-γ)μ))^n-k(1-Q(γμ))^k(k+s) e^k θ Let log n ≤μ^2/2 ≤ 2 log n - ωloglog n. For any estimator M̂, loglog[M̂ = M]^-1/log n≥1/2 + √(μ^2/4log n1 - Ologlog n/log n - μ^2/4log n) - Ologlog n/log n. Let x = Q(γμ), y = Q((1-γ)μ), and θ = 2γ-1/2μ^2. From Lemma <ref>, for any 0 ≤γ≤ 1, [M̂ = M] ≤∑_k nks!/(k+s)! (1-y)^n-k(1-x)^k(k+s)e^k θ ≤∑_k nk (1-y)^n-ke^-k^2 xe^k θ/k! ≤∑_k nk (1-y)^n-ke^-(2 ℓ k - ℓ^2)xe^k θ = e^ℓ^2 x(1-y+e^θ-2ℓ x)^n, where we used (k+s)! ≥ s! and k(k+s) ≥ k^2 ≥ 2ℓ k - ℓ^2, which holds for any ℓ. Suppose that can find values of θ and ℓ such that e^θ-2ℓ x≤ y/3 and ℓ^2 x ≤ ny/3. Then e^ℓ^2(1-y+e^θ-2ℓ x)^n ≤ e^ny/3(1-2y/3)^n ≤ e^-ny/3 and loglog[M̂=M]^-1≥log(ny/3). Let q(z) = exp(-z^2/2)/Q(z), so 1 ≤ q(z) ≤ O(z). Then x = Q(γμ) = exp(-γ^2μ^2/2)/q(γμ) and y = Q((1-γ)μ) = exp(-(1-γ)^2μ^2/2)/q((1-γ)μ). Now we find the value of ℓ that satisfies e^θ-2ℓ x = y/3: 2 ℓ x = θ - log(y/3) = (2γ-1)μ^2/2 + log(3 q((1-γ) μ)) + (1-γ)^2μ^2/2 = γ^2 μ^2/2 + log(3 q((1-γ) μ)). 
Now we find a condition on γ that ensures (ℓ x)^2 ≤ nxy/3. Expanding the definitions of x and y, we have nxy/3 = 1/3 q(γμ) q((1-γ)μ)explog n - γ^2μ^2/2-(1-γ)^2μ^2/2 so we get the condition log n ≥ (γ^2 + (1-γ)^2)μ^2/2 + 2 logγ^2 μ^2/4 + log(3 q((1-γ) μ))/2 + log (3 q(γμ) q((1-γ)μ)). Using 0 ≤γ≤ 1 and picking the worst possible γ in the lower order terms, we observe that a stronger condition is log n ≥ (γ^2 + (1-γ)^2)μ^2/2 + 2 logμ^2/4 + log(3 q(μ))/2 + log (3 q(μ)^2) = (γ^2 + (1-γ)^2)μ^2/2 + ϵlog n. Using μ^2 ≤ 4 log n, we see that 0 ≤ϵ≤ Ologlog n/log n. Because our final upper bound is e^-ny/3, we want to maximize y and thus maximize γ. Let a = μ^2/4 (1-ϵ) log n. The larger root of 1/2a = γ^2 + (1-γ)^2 is γ = 1+ √(1/a-1)/2, so we need a ≤ 1 to get a real root and a ≥1/2 to get γ≤ 1. Then (1-γ)^2 = 1/2a - √(1/a-1)/2, and 2a(1-γ)^2 = 1/2 - √(a-a^2), and When μ^2 ≤ 4 log n - ω(loglog n), a ≥ 1/2 for sufficiently large n. Let log(ny/3)/log n = log n + log Q(γμ)- log 3/log n = 1 - (1-γ)^2μ^2/2 log n- log q((1-γ)μ) - log 3/log n = 1 - 2(1-ϵ)a(1-γ)^2 - Ologlog n/log n = 1 - (1-ϵ)1/2 - √(a(1-a))- Ologlog n/log n = 1/2 + ϵ/2 + √((1-ϵ)a(1-ϵ - (1-ϵ)a))- Ologlog n/log n ≥1/2 + √(μ^2/log n1 - Ologlog n/log n - μ^2/log n) - Ologlog n/log n. Let αlog n + ω(loglog n) ≤μ^2/2. For any estimator M̂, loglog[M̂ = M]^-1/log n≥ 1 - √(μ^2/2 log n) - √(α-Ologlog n/log n)^2 - Ologlog n/log n. Let x = Q(γμ), y = Q((1-γ)μ), and θ = 2γ-1/2μ^2. From Lemma <ref>, for any 0 ≤γ≤ 1, [M̂ = M] ≤∑_k nks!/(k+s)! (1-y)^n-k(1-x)^k(k+s)e^k θ ∑_k nk(n+s)!/(k+s)! (1-Q((1-γ)μ))^n-k(1-Q(γμ))^k(k+s)e^k θs!/(n+s)! = ∑_k nk (1-y)^n-k(1-x)^k(k+s)e^k θs!/(k+s)! ≤∑_k nk (1-y)^n-ke^-k(k+s) xe^k θs!/(k+s)! ≤∑_k nk (1-y)^n-ke^-ksxe^k θ = (1-y+e^θ-s x)^n, where we used (k+s)! ≥ s! and k^2 ≥ 0. Suppose that can find θ such that e^θ-s x≤ y/2. Then (1-y+e^θ-s x) ≤ (1-y/2)^n ≤ e^-ny/2. and loglog[M̂=M]^-1≥log(ny/2). Let q(z) = exp(-z^2/2)/Q(z), so 1 ≤ q(z) ≤ O(z). Then x = Q(γμ) = exp(-γ^2μ^2/2)/q(γμ) and y = Q((1-γ)μ) = exp(-(1-γ)^2μ^2/2)/q((1-γ)μ). Now we find the value of θ that satisfies e^θ-s x≤ y/2: s x ≥θ - log(y/2) = (2γ-1)μ^2/2 + log(2 q((1-γ) μ)) + (1-γ)^2μ^2/2 = γ^2 μ^2/2 + log(3 q((1-γ) μ)) αlog n - γ^2 μ^2/2 - log(q(γμ)) ≥logγ^2 μ^2/2 + log(3 q((1-γ) μ)) A stronger condition is αlog n - γ^2 μ^2/2 ≥log(q(μ))+logμ^2/2 + log(3 q(μ)) = ϵlog n. We have 0 ≤ϵ≤ Ologlog n/log n. Pick γ = √(2 (α - ϵ) log n/μ^2). To ensure γ≤ 1, we need μ^2/2 ≥αlog n + ω(loglog n). Then log(ny/2)/log n = log n + log Q(γμ)- log 2/log n = 1 - (1-γ)^2μ^2/2 log n- log q((1-γ)μ) - log 2/log n = 1 - √(μ^2/2 log n) - √(γ^2μ^2/2 log n)^2 - log q((1-γ)μ) - log 2/log n = 1 - √(μ^2/2 log n) - √(α-ϵ)^2 - log q((1-γ)μ) - log 2/log n = 1 - √(μ^2/2 log n) - √(α-Ologlog n/log n)^2 - Ologlog n/log n. Let n = |M| = || and α = log||-n/log n if ||>n. Each of the following conditions guarantee that any estimator makes at least Ω(n^1-β) errors with probability 1-o(1): Necessary cond. Range of β Boundary ζ≤1+2√(β(1-β))log n - ω(loglog n) 0<β≤min{1-α,1/2} elliptic ζ≤√(α)+√(β)^2 log n - ω(loglog n) 1-α<β parabolic ζ≤ 2 log n - ω(loglog n) 1/2<β vertical ζ≤1+2√(β(1-β))log n - ω(loglog n) 0 < β≤1/2 ζ≤√(α)+√(β)^2 log n - ω(loglog n) 0 < β≤ 1 ζ≤ 2 log n - ω(loglog n) 1/2≤β≤ 1. Assume ζ≤1+2√(β(1-β))log n - ω(loglog n). Because β≤1/2, ζ≤ 2 log n - ω(loglog n), and from Lemma <ref> loglog[M̂≠ M]^-1/log n≥1/2 + √(ζ/2log n1 - Ologlog n/log n - ζ/2log n) - Ologlog n/log n. 
If β≥ωloglog n/log n, then loglog[M̂≠ M]^-1 ≥log n/2 + 1/2√(ζ2 log n - Ologlog n - ζ) - Ologlog n ≥ (1-β)log n + ω(loglog n) and the first part of the theorem follows from Lemma <ref>. If ζ≤ 2 log n - ω(loglog n), loglog[M̂≠ M]^-1 ≥log n/2 + 1/2√(ζ2 log n - Ologlog n - ζ) - Ologlog n ≥log n + ω(loglog n) and the third part of the theorem again follows from Lemma <ref>. Similarly, the second part of the theorem follows from Lemma <ref> and Lemma <ref>. § COMBINATORIAL ANALYSIS §.§ Elementary misalignments between mappings Let ,'∈{0,1}^× be the matrix encodings of mappings m,m' and and let ∈^× denote the score matrix for databases ,. As shown in [sec:algo]Section <ref>sec:algo, comparing the likelihoods of , being generated by m versus m' is equivalent to comparing the values of , and ,'. Assume -' can be written in block diagonal form Δ_1Δ_2. Let _1' - Δ_1 and _2' - Δ_2. m_1',m_2' are two valid mappings that in some sense partition the disagreement between m and m' into two. , < ,',-' < 0 ,Δ_1 + ,Δ_2 <0 ,-_2' + ,-_1' < 0 ,-_2'≤ 0 or ,-_1'≤ 0 Then m' has higher score than m only if at least one of m_1',m_2' also has higher score than m. Furthermore, m' is the minimizer for the inner product ,-' only if both ,-_1' and ,-_2' are negative. So m' is the optimal mapping only if each of its `submappings' (i.e. mappings whose mismatch with m are entirely contained in m') have higher score than m. It is then of interest to define elementary misalignments between mappings. Let m_1,m_2 be a pair of mappings between and that are bijective between from their domain to their co-domain. Let _1,_2∈{0,1}^× be binary matrices that encode these mappings. We say the mismatch between the two mappings is elementary if and only if _1-_2 does not have a block-diagonal representation with multiple zero-sum, non-zero blocks. There are three types of elementary misalignments, as shown in [fig:decomposition1]Fig. <ref>fig:decomposition1 and [fig:decomposition2]Fig. <ref>fig:decomposition2. These are * I - Cycles: The two mappings use the same set of users from and but pair them up differently. This type of mismatch consists of a single cycle. * II - Even paths: The two mappings use the same set of users from one of the sets (say ) but differ in the users they map from the other side () by 1 user. This type of mismatch consists of one path and is cycle-free. * III - Pair of odd paths: The two mappings differ in the users they map on both sides, by 1 user per side. This type of mismatch consists of two paths and is cycle-free. The bigraph representation in [fig:decomposition1]Fig. <ref>fig:decomposition1 can be used to explain why these three are the only types of elementary misalignments. Since m_1 and m_2 map each user at most once, each vertex can have at most one edge from each mapping and a degree of at most 2. Then the bigraph has alternating edges and maximum degree 2. Graphs of maximum degree 2 decompose into cycles and paths. Each component in the bigraph corresponds to a block in the adjacency matrix. Since edges are alternating between the two mappings, each cycle has even length and an equal number of edges coming from both graphs. Then each cycle corresponds to a block in _1-_2 with sum of entries equal to 0. Therefore each cycle is an elementary misalignment. The same holds for even paths. Odd paths contain one more edge from one mapping than from the other. Therefore these correspond to blocks in _1-_2 whose sum equal 1 or -1. 
Since _1-_2 has sum of entries equal to zero, it follows that there must be an equal number of blocks whose entries sum up to 1 and blocks whose entries sum up to -1. Pairing these up gives us elementary blocks. Let and sets of users of size n and n+s respectively. Let m be the true mapping of size n. Consider the elementary misalignments induced by all mappings m' of size n. The number of distinct elementary type-I misalignments of size δ is upper bounded by n^δ/δ if δ∈{2,3,⋯} and 0 if δ=1. The number of distinct elementary type-I misalignments of size δ is upper bounded by sn^δ. There are be no elementary misalignments of type III. We count the ways to pick some m' that induces an elementary misalignment with m of size δ. There are nδ ways to pick the δ pairs from m to be misaligned by m'. Let us denote the sets of these users as '⊆ and '⊆. |'|=|'|=δ. Assume these sets are fixed. * If δ=1, there is no way to obtain a type I mismatch, since the only way to pair the single user in ' to the single user in ' is the same as the original mapping in m. For δ∈{2,3,⋯}, there are (δ-1)! ways to pair ' and ' to obtain a type I mismatch. (Forming the `cycle' in [fig:decomposition1]Fig. <ref>fig:decomposition1 is simply a matter of arranging the blue edges around the cycle, which results in a unique way to pick the red edges.) Then, in total, there are nδ(δ-1)! = 1/δnδδ! ways to pick a type I mistmatch. * There are s ways to pick a user w' from that is not mapped by m. Given this user, there are δ! ways to pair ',' and w', leaving one user from either ' unpaired. We can generate this pairing as follows: Take any of the (δ-1)! type I matchings. Break the cycle at any of the δ red edges and connect that edge to w' from the appropriate side. This gives us an even path. Then, in total, there are snδδ! ways to pick a type II mistmatch. * We only consider mappings m' of size n, which is the same as the true mapping m. So, given the representation in [fig:decomposition1]Fig. <ref>fig:decomposition1, there must be an equal number of red and blue edges. Odd paths have an extra edge of either color. To construct an odd path with more edges belonging to m', there need to vertices in both and not covered by m. (These correspond to vertices u' and v' in [fig:decomposition1]Fig. <ref>fig:decomposition1.) Since has size equal to that of m, all vertices in are covered, and there can be no odd path with more edges from m'. Since the total number of edges from each mapping needs to be equal, it then follows that there can also be no odd path with more edges from m. Using the fact that nδδ! ≤ n^δ, we simplify the expression to get the result. Let and sets of users of size n and n+s respectively. Let m be the true mapping of size n. Let c∈(0,∞) some arbitrary constant. The number of different mappings m' that result in a misalignment of size δ is upper bounded by: * expδ1+log n + log(1+1/c) if δ≥ cs, and * expδ1+logns/δ + log(1+c) if δ≤ cs. We count the number of different ways to construct m' that results in a misalignment of size δ. There are nδ different ways to pick the set of vertices to be misaligned by m'. Given the δ pairs of vertices to be misaligned, there are no more than δ+s ways to misalign each vertex. So, there are no more than (δ+s)^δ ways to misalign the set of δ pairs. nδ is strictly less than en/δ^δ. If δ≥ cs, then (δ+s)^δ is at most (1+1/c)^δδ^δ. If δ≤ cs, then (δ+s)^δ is at most (1+c)^δ s^δ. The products of these terms give us the claimed results. 
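The counts in the two lemmas above are easy to verify numerically on small instances. The sketch below is illustrative only (n, s and the sizes δ are arbitrary): it evaluates the exact type-I and type-II counts from the first proof, C(n,δ)(δ−1)! and s·C(n,δ)·δ!, against the relaxed bounds n^δ/δ and s·n^δ.

```python
# Illustrative check (not part of the argument) of the elementary-misalignment counts:
# type I (cycles) of size d:      exactly C(n, d)*(d-1)!  (0 when d = 1),  bounded by n^d / d
# type II (even paths) of size d: exactly s*C(n, d)*d!,                    bounded by s*n^d
from math import comb, factorial

def counts(n, s, d):
    type1 = comb(n, d) * factorial(d - 1) if d >= 2 else 0
    type2 = s * comb(n, d) * factorial(d)
    return type1, type2

n, s = 30, 3                          # arbitrary small instance
for d in (1, 2, 3, 5, 10):
    t1, t2 = counts(n, s, d)
    assert t1 <= n ** d / d and t2 <= s * n ** d
    print(d, t1, t2)
```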
§ CONCENTRATION INEQUALITIES We use m,m',m_1 etc to denote mappings between and , and ,',_1∈{0,1}^× to denote binary matrix representations of these mappings, where ()_u,v=1 if and only if m maps u to v. §.§ Concentration inequalities for database alignment refers to the information density matrix under the database alignment setting as described in [subsec:modelDatabase]Subsection <ref>subsec:modelDatabase. Specifically, be the matrix such that G_u,v is the log-likelihood ratio of hypotheses uMv vs. uMv for any (u,v)∈×. Given some |τ|≤ I_XY and m a partial mapping fully contained in the true mapping M, -τ,≤ 0|m⊆ M≤exp-|m|·(I_XY-τ)^2/4I_XY. The atypicality event is completely independent from users that are not contained in m. Then, without loss of generality, we can assume m=M instead of m⊆ M. By [cor:chernoff]Corollary <ref>cor:chernoff, τ|m| ≥,|M=m≤expθτ|m|R(1-θ). By [cor:rMapping]Corollary <ref>cor:rMapping, this last expression is upper bounded by expθτ|m|-θ(1-θ)I_XY|m|. Let θ = I_XY-τ/2I_XY. Then τθ = I_XYτ-τ^2/2I_XY and θ(1-θ)I_XY = I_XY^2-τ^2/4I_XY, which gives us τ|m| ≥,|M=m ≤expτθ|m|-θ(1-θ)I_XY|m| = exp-|m|·I_XY^2-2I_XYτ+τ^2/4I_XY which matches the claimed result. Given some |τ|≤ I_XY, G_u,v≥τ|uMv≤exp-(I_XY+τ)^2/4I_XY. G_u,v≥τ |uMv ≤ e^-τθexpθ G_u,v|uMv G_u,v only depends on (u) and (v). So, without loss of generality, we can assume = {u} and ={v}. Since uMv, it follows that M maps nothing and the databases are independent. Let m denote the empty mapping and m' denote the mapping consisting of (u,v). Then θ G_u,v = ,θ' = ,θ' + (1-θ). It then follows that G_u,v≥τ |uMv ≤ e^-τθexpθ G_u,v|uMv = e^-τθexp,θ' -|M=m = e^-τθR(θ') where the last line follows from [lemma:genFunc2exp]Lemma <ref>lemma:genFunc2exp. R(θ') = R([θ]). By [cor:rMapping]Corollary <ref>cor:rMapping, R([θ]) ≤exp-θ(1-θ)I_XY. G_u,v≥τ |uMv ≤exp-τθ - -θ(1-θ)I_XY = exp-I_XY^2+2I_XYτ+τ^2/4I_XY which matches the claimed result. Let m and m' denote two mappings of same size and δ denote the number of pairs mapped by m but not by m'. Then [,≤,'|M=m] ≤exp-δ/2I_XY. By [cor:chernoff]Corollary <ref>cor:chernoff, [,≤,'|M=m] is upper bounded by R() where = 1/2+1/2'. can be represented in block-diagonal form. For each n-δ pair of users (u,v) that is mapped both by and ', we get a 1×1 block [1]. The remaining δ pairs whose mapping is not the same between and ', we get blocks _j that correspond to cycles or even paths as described in [def:elementaryBlocks]Definition <ref>def:elementaryBlocks. Given this block-diagonal form, by [lemma:blockGenFunc]Lemma <ref>lemma:blockGenFunc, R() is equal to the product R([1])^n-δ·∏_j R(_j). By [lemma:rOne2One]Lemma <ref>lemma:rOne2One, R([1])=1. By Lemmas <ref> and <ref>, plugging in ν=1, we have R(_j)≤exp-δ_j I_XY/2 where δ_j is the total length (i.e. number of blue edges or number of red edges) of the corresponding cycle or even paths. The total number of user pairs that whose mapping differs between and ' is δ. Then ∏_j R(_j) ≤exp∑_j-δ_j I_XY/2 = exp-δ I_XY/2. It then follows that [,≤,'|M=m] ≤ R() ≤exp-δ I_XY/2. Let m and m' denote two mappings of same size δ such that no pair is mapped under both mappings. Given 0≤τ≤ I_XY, ,≥τ |m| and ,'≥,|m⊆ M≤exp-δ·I_XY^2+τ^2/2I_XY+δ·6ρ_max^2τ where ρ_max = max |ρ_i|, the largest correlation coefficient under the canonical form. By [lemma:svd2rho]Lemma <ref>lemma:svd2rho, under [cond:highDimensional]Condition <ref>cond:highDimensional, ρ_max^2 ≤ o(1), so the bound can be simplified as exp-δ·I_XY^2+τ^2/2I_XY(1-o(1)). 
By [cor:chernoff]Corollary <ref>cor:chernoff, ,≥τ |m| and ,'≥,|m⊆ M is upper bounded by e^-τ|m|(ν-1)R where = ν(1-θ) + νθ'. Consider the decomposition of into blocks: we get cycles and even paths as described in [def:elementaryBlocks]Definition <ref>def:elementaryBlocks. (There are no one-by-one blocks since m and m' have no intersection.) This decomposition gives us a block-diagonal representation of . By [lemma:blockGenFunc]Lemma <ref>lemma:blockGenFunc, R() = ∏ R(_j), where _j denotes the block corresponding to an elementary misalignment (i.e. cycle or even path). Let m_j and m_j' denote the partial misalignments that correspond to the intersection of block _j with m and m'. Let δ_j=|m_j|=|m_j'| denote their size. (Cycle and even path type misalignment consist of mappings of equal size.) Under [cond:highDimensional]Condition <ref>cond:highDimensional, Lemmas <ref> and <ref> give us Rν(1-θ)_j + νθ_j' ≤exp-δ_j·I_XY/2ν(2-ν) + 6δ_jρ_max^2 I_XY Rν(1-θ) + νθ' ≤exp-δ·I_XY/2ν(2-ν)+6δρ_max^2 I_XY Pick ν = 1 + τ/I_XY. Then Rν(1-θ) + νθ ≤exp-δ·I_XY^2-τ^2/2I_XY+δ· 6ρ_max^2τ and e^-τδ(ν-1) = exp-δ·τ^2/I_XY. Then e^-τ|m|(ν-1)R≤exp-δ·I_XY^2+τ^2/2I_XY+δ· 6ρ_max^2τ and we have the claimed result. §.§ Concentration inequalities for planted matching refers to the edge weight matrix of the bipartite graph under the planted matching setting described in [subsec:modelPlanted]Subsection <ref>subsec:modelPlanted. We state the concentration inequalities in terms of , as well as another matrix _Gμ - μ^2/2, which is scaled and shifted to match the statistics of , as given in [app:stats]Appendix <ref>app:stats. Let m a partial mapping fully contained in the true mapping M. Given some τ_W ≤μ and τ_G = μτ - μ^2/2, τ_W|m|≥,|m⊆ M ≤exp-(μ-τ_W)^2/2 which is equivalent to τ_G|m|≥_G,|m⊆ M ≤exp-(ζ-τ_G)^2/4ζ where ζ = μ^2/2. Given uMv, W_u,v is normal with mean μ and unit variance. Then, its moment generating function is given by e^θ W_u,v=e^θμ + θ^2/2. By Markov's inequality, for any θ < 0, W_u,v≤τ_W|uMv ≤ e^-θτ_We^θ W_u,v = exp-θτ_W +θμ + θ^2/2 Pick θ = -(μ - τ_W). Then W_u,v≤τ_W|uMv≤exp-(μ-τ_W)^2/2. Since all entries in are independent, ,≤τ_W|m| |m⊆ M is the product of all of these terms. (μ-τ_W)^2/2 = μ^2 - μτ_W^2/2μ^2 = μ^2/2 - (μτ_W-μ^2/2)^2/2μ^2 = (ζ - τ_G)^2/4ζ. Then (W_G)_u,v≤τ_G|uMv = W_u,v≤τ_W|uMv≤exp-(ζ - τ_G)^2/4ζ. Once again, taking the product of all of this term over all pairs in m gives us the claimed result. Given some τ_W ≥ 0 and τ_G = μτ - μ^2/2, W_u,v≥τ_W|uMv ≤exp-τ_W^2/2 which is equivalent to (W_G)_u,v≤τ_G|uMv ≤exp-(ζ + τ_G)^2/4ζ where ζ = μ^2/2. Given uMv, W_u,v is normal with zero μ and unit variance. Then, its moment generating function is given by e^θ W_u,v=e^θ^2/2. By Markov's inequality, for any θ > 0, W_u,v≤τ_W|uMv ≤ e^-θτ_We^θ W_u,v = exp-θτ_W + θ^2/2 Pick θ = τ_W. Then W_u,v≥τ_W|uMv≤exp-τ_W^2/2. τ_W^2/2 = (μτ_W)^2/2μ^2 = μ^2/2 + (μτ_W-μ^2/2)^2/2μ^2 = (ζ + τ_G)^2/4ζ. Then (W_G)_u,v≤τ_G|uMv = W_u,v≤τ_W|uMv≤exp-(ζ + τ_G)^2/4ζ. Let m and m' denote two mappings of same size and δ denote the number of pairs mapped by m but not by m'. Then ≤exp-δμ^2/4. ,≤,'|M=m ≤exp-δμ^2/4 which is equivalent to _G,≤_G,'|M=m ≤exp-δζ/2 where ζ = μ^2/2. ,-,' is the linear combination of 2δ independent Gaussian random variables and therefore is Gaussian. (|m|-δ of the terms in , get canceled out by the |m|-δ common terms in ,'.) The difference has mean δμ and variance 2δ. Then, the moment generating function is given by expθ,-θ,' = expθδμ + θ^2 δ. 
Then, by Markov's inequality, for any θ <0, ,≤,'|M=m = ,-,'≤0|M=m ≤expθ,-θ,' = expθδμ + θ^2 δ. Picking θ = -μ/2 gives us the claimed result. Let m and m' denote two mappings of same size δ such that no pair is mapped under both mappings. Given some τ_W≥μ/2 and τ_G = μτ - μ^2/2, ,≥τ_W |m| and ,'≥,|m⊆ M≤exp-δ·τ_W-μ/2^2 - δμ^2/4 which is equivalent to _G,≥τ_G |m| and _G,'≥_G,|m⊆ M≤exp-δ·τ_G^2+ζ^2/2ζ where ζ = μ^2/2. If y≥ x and x≥ t, then θ_1(y-x)+θ_2(x-t) ≥ 0 for any choice of θ_1,θ_2> 0. Replacing y by ,', x by , and t by τ_W |m|, we get the implication between the events of interest: ,'≥, and ,≥τ_W |m| implies θ_1 ,'-,+θ_2,≥θ_2τ_W|m|. ,≥τ_W |m| implies θ_1 ,'-,+θ_2, is the linear combination of 2δ independent Gaussian random variables and is therefore Gaussian. It has mean δμ(θ_2-θ_1) and variance δθ_1^2 + δ(θ_2-θ_1)^2 = δ2θ_1^2-2θ_1θ_2+θ_2^2. By Markov's inequality θ_1 ,'-,+θ_2,≥θ_2τ_W|m| |m⊆ M ≤ e^-θ_2τ_W|m|e^θ_1 ,'-,+θ_2,|m⊆ M = exp-θ_2τ_W δ +δμ(θ_2-θ_1) + δ/22θ_1^2-2θ_1θ_2+θ_2^2 Pick θ_1 = τ_W and θ_2= 2(τ_W-μ/2). Then, the expression in the last line simplifies to exp-(τ_W-μ/2)^2 - μ^2/4, which matches the first part of the claim. -(τ_W-μ/2)^2 - μ^2/4 = -(μτ-μ^2/2)+(μ^2/2)^2/μ^2 = τ_G^2 + ζ^2/2ζ gives us the second part of the claim. §.§ Geometric intuition behind concentration inequalities For database alignment, by [lemma:highDimStats]Lemma <ref>lemma:highDimStats, entries corresponding to true pairs in have mean I_XY and false pairs have mean -I_XY. All entries have variance 2I_XY(1± o(1)). For planted matching, given _G = μ - μ^2/2 a scaled and shifted version of the original edge weight matrix , entries corresponding to true pairs in _G have mean μ^2/2 and false pairs have mean -mu^2/2. All entries have variance μ^2. For the rest of the section, we use ζ refers to I_XY in the context of database alignment and μ^2/2 in the context of planted matching. Then true pairs have mean ζ, false pair have mean -ζ, and all pairs have variance 2ζ in both and _G. We want to bound the measure of the probability spaces that correspond to each type of error event. Consider the probability space ^×. A two-dimensional projection of this space is given in [fig:CE0]Fig. <ref>fig:CE0. Note the mean point (ζ ,-ζ ). As shown in [sec:algo]Section <ref>sec:algo, the objective function for all three algorithms is to maximize a linear combination of a shifted version of or _G. All concentration equalities we use are bounds on the measures of half-spaces in the probability space ^×. Approximating the entries of as independent normal random variables with appropriate statistics (i.e. (μ,σ^2)=(ζ ,2ζ ) for true pairs and (μ,σ^2)=(-ζ ,2ζ ) for false pairs), we are able to get quick approximations for the bounds on half-spaces using the Chernoff bound. Specifically, the probability of a half-space is bounded by exp-ℓ^2/2σ^2 where ℓ denotes the separation between the half-space and the mean point and σ^2=2ζ is the variance of the terms. These bounds hold exactly in the planted matching case since entries of _G are indeed independent normal random variables. §.§.§ True pair failing threshold testing The left-hand side of [fig:CE1]Fig. <ref>fig:CE1 illustrates the half-space corresponding to {G_u,v≤τ}. The separation between the half-space and the mean point is equal to ℓ=ζ -τ. Then, the Chernoff bound gives us exp-ℓ^2/2σ^2 = exp-ζ -τ^2/4ζ, which exactly matches the statement in [lemma:typicality]Lemma <ref>lemma:typicality. Similarly, the right-hand side of [fig:CE1]Fig. 
<ref>fig:CE1 illustrates the half-space corresponding to the case with 2 true pairs: {G_u_1,v_1+G_u_2,v_2≤ 2τ}. The separation between the half-space and the mean point is equal to ℓ=ζ -τ√(2). Then, the Chernoff bound gives us exp-ℓ^2/2σ^2 = exp-2·ζ -τ^2/4ζ, which exactly matches the statement in [lemma:typicality]Lemma <ref>lemma:typicality. This argument can be generalized to an arbitrary number of true pairs. §.§.§ False pair passing threshold testing [fig:CE2]Fig. <ref>fig:CE2 illustrates the half-space corresponding to {G_u,v'≥τ}. The separation between the half-space and the mean point is equal to ℓ=ζ +τ. Then, the Chernoff bound gives us exp-ℓ^2/2σ^2 = exp-ζ +τ^2/4ζ, which exactly matches the statement in [lemma:FP]Lemma <ref>lemma:FP. §.§.§ Misalignment [fig:CE3]Fig. <ref>fig:CE3 illustrates the half-space corresponding to {G_u,v≤ G_u,v'}. The separation between the half-space and the mean point is equal to ℓ=ζ√(2). Then, the Chernoff bound gives us exp-ℓ^2/2σ^2 = exp-ζ/4, which exactly matches the statement in [lemma:misalignment]Lemma <ref>lemma:misalignment. §.§.§ Misalignment despite typicality The misalignment half-space shown in [fig:CE3]Fig. <ref>fig:CE3 can be broken down into two cases based on whether or not the true pairs have high enough average score. This is illustrated in [fig:CE4]Fig. <ref>fig:CE4. These `slices' of a half-space can then be covered by another set of half-spaces, as illustrated in [fig:CE5]Fig. <ref>fig:CE5. In both figures, the left-hand side corresponds to the atypicality event and the right-hand side corresponds to the misalignment-despote-typicality event. If can be shown that, the half-space in the right-hand side of [fig:CE5]Fig. <ref>fig:CE5 is at distance (√(2ζ ^2+2η^2)) to the mean point. Then, the Chernoff bound gives us exp-ℓ^2/2σ^2 = exp-ζ ^2+τ^2/2ζ, which exactly matches the statement in [lemma:condMisalignment]Lemma <ref>lemma:condMisalignment. These figures also help demonstrate the contribution of this approach in analysis: Both the original misalignment half-space in [fig:CE3]Fig. <ref>fig:CE3 as well as the misalignment-despite-typicality half-space in the right-hand side of [fig:CE5]Fig. <ref>fig:CE5 change based on the choice of (u,v'). The separation between half-spaces is greater with the misalignment-despite-typicality event, which gives us some improvement in the error bound at the cost of having to consider the atypicality error shown in the left-hand side of [fig:CE5]Fig. <ref>fig:CE5. This last half-space however, does not depend on (u,v'). The atypicality half-space is fixed once we pick (u,v). Therefore this term does not require taking a union bound. For an appropriate choice of τ, the gains made by the improvement from the misalignment half-space to the misalignment-despite-typicality half-space can compensate for the cost of having to consider the atypicality half-space. This is thanks to the fact that, unlike misalignment and misalignment-despite-typicality, we need not need to reconsider atypicality for every choice of (u,v'). § GENERATING FUNCTION The generating function R=R^,:^×→ is defined such that R() ∫∫exp, f_()f_()d d where f_,f_ denote the marginal probabilities for the two databases and ∈^× denotes the information density matrix for , as defined in [subsec:algo1]Section <ref>subsec:algo1. exp,|M=m_1 = R(+_1) The key equality of the proof is that , = log f_,|M(,) - log f_()-log f_(), where f_,|M is the joint distribution between databases given M. 
We show as follows: Let _M⊆ and _M⊆ denote the set of users that have a mapping under M and _M⊂_M×_M denote the set of pairs mapped by M. By the model for Gaussian data structures (as given in [subsec:modelDatabase]Subsection <ref>subsec:modelDatabase), all matched feature pairs and unmatched features are mutually independent. It then follows that log f_,|M(,) = ∑_(u,v)∈_mlog f_XY((u),(v)) + ∑_u'∈∖_mlog f_X((u')) + ∑_v'∈∖_mlog f_Y((v')) where f_XY denotes the joint distribution of correlated features while f_X,f_Y denote the marginals. As defined in [subsec:algo1]Section <ref>subsec:algo1, G_u,v=logf_XY((u),(v))/f_X((u))f_Y((v)) for any u∈ and v∈. Then, , = ∑_(u,v)∈_mlog f_XY((u),(v)) - log f_X((u)) - log f_Y((v)) log f_,|M(,)-, = ∑_(u,v)∈_mlog f_X((u))+log f_Y((v)) + ∑_u'∈∖_mlog f_X((u')) + ∑_v'∈∖_mlog f_Y((v')) = ∑_u∈log f_X((u)) + ∑_v∈log f_Y((v)) = log f_() + log f_() which shows that , = log f_,|M(,) - log f_()-log f_(). Then exp,-|M = ∫∫exp,- f_,|M(,)d d = ∫∫exp,+log f_,|M(,) - , d d = ∫∫exp,+log f_() + log f_() d d = ∫∫exp, f_()f_()d d = R() This completes the proof. Let _1,_2∈{0,1}^× be the matrix encodings of the mappings m_1 and m_2 respectively. We have the following Chernoff bounds: [a)] * Probability of atypicality: τ|m_1| ≥,_1|M=m_1≤expθτ|m_1|R(1-θ)_1 for any θ>0. * Probability of misalignment: ,_2≥,_1|M=m_1≤ R(1-θ)_1 + θ_2 for any θ>0. * Probability of misalignment despite typicality: ,_1≥τ|m_1|,_2≥,_1|M=m_1≤ e^-τ|m_1|(ν-1)Rν(1-θ)_1 + νθ_2 for any θ>0 and ν>1. [a)] * Probability of atypicality: τ|m_1| ≥,_1|M=m_1 =-τ,_1≤ 0|M=m_1 = expθ·-τ,-_1≥ 1^θ | M=m_1 = exp-τ,-θ_1≥ 1 | M=m_1 ≤exp-τ,-θ_1| M=m_1 = expθτ|m_1|exp,-θ_1| M=m_1 = expθτ|m_1|R(1-θ)_1. * Probability of misalignment: ,_2≥,_1|M=m_1 =,_2-_1≥ 0|M=m_1 = exp,θ(_2-_1)≥ 1 | M=m_1 ≤exp,θ(_2-_1)| M=m_1 = R(1-θ)_1 + θ_2. * Probability of misalignment despite typicality: If y≥ x and x≥ t, then θ_1(y-x)+θ_2(x-t) ≥ 0 for any choice of θ_1,θ_2> 0. Replace θ_1 by νθ and θ_2 by ν-1. θ_1,θ_2> 0 holds for any θ>0 and ν>1. It then follows that, if y≥ x and x≥ t, then νθ (y-x)-(1-ν)x ≥ (ν-1)t. Then ,_2≥,_1,_1≥τ|m_1| |M=m_1 ≤,νθ(_2-_1)-(1-ν)_1≥τ|m_1|(ν-1) |M=m_1 = e^,νθ(_2-_1)-(1-ν)_1≥ e^τ|m_1|(ν-1) |M=m_1 ≤ e^-τ|m_1|(ν-1)e^,νθ(_2-_1)-(1-ν)_1|M=m_1 = e^-τ|m_1|(ν-1)Rνθ(_2-_1)+ν_1 = e^-τ|m_1|(ν-1)Rν(1-θ)_1 + νθ_2 §.§ Main lemmas on the generating function Define =^,:^××(-1,1)→^(⊔)×(⊔) such that (,ρ) 1-ρ^2+ρ^2·()-ρ-ρ^⊤ρ^2·(^⊤) where represents appropriately indexed vectors of all ones. If (,ρ_i) positive definite for each i∈, then evaluating the generating function R at gives the expression R() = ∏_i∈1-ρ_i^2^||+||-∑_(u,v)Θ_u,v/(,ρ_i)^1/2 where ∈ (-1,1)^ is the correlation vector in canonical form. Let , databases in canonical form with statistics = and = ()() and ∈^× their information density matrix. g_XY(,) = logf_,(,)/f_()f_() = ∑_i∈-1/2log1-ρ_i^2-ρ_i^2x_i^2+y_i^2-2ρ_ix_iy_i/21-ρ_i^2 Define ∈^ and ∈^ such that a_u and b_v denote the features associated with users u and v respectively. As shown in the proof for [lemma:genFunc2exp]Lemma <ref>lemma:genFunc2exp, log f_,|M(,) - , = ∑log f_X(a_u)+∑log f_Y(b_v). Without loss of generality, assume =[ρ]. 
Then, , + log f_,|M(,) - , = ,+∑_u∈log f_X(a_u)+∑_v∈log f_Y(b_v) = -1/2log1-ρ^2∑_(u,v)Θ_u,v-||+||/2log2π - 1/2(1-ρ^2)∑_u∈ a_u^21-ρ^2+ρ^2∑_v∈Θ_u,v - 1/2(1-ρ^2)∑_v∈ b_v^21-ρ^2+ρ^2∑_u∈Θ_u,v -1/2(1-ρ^2)∑_u∈∑_v∈ρ_u,v a_ub_v + ρ_u,v b_va_u = -1/2log1-ρ^2∑_(u,v)Θ_u,v-||+||/2log2π -1/2(1-ρ^2)^⊤(,ρ) Then we can write exp ,- f_,|M(,) = 1-ρ^2^-1/2∑_(u,v)Θ_u,v/(2π)^||+||/2·exp-1/21-ρ^2_i_i^⊤(,ρ)_i_i Taking the integral of this expression gives us the claimed result. For the case where is multi-dimensional, taking the product of this expression over each i∈ gives us the proper expression. If ∈^× can be written in block diagonal form, i.e. if and can be partitioned into _1,_2 and _1,_2 such that can be written as =_100_2∈^(_1⊔_2)×(_1⊔_2), then R^,() = R^_1,_1(_1)· R^_2,_2(_2) This follows from the fact that ^,(,ρ)∈^× can be transformed into black matrix form as ^_1,_1(_1,ρ)00^_2,_2(_2,ρ) by simultaneously permuting rows and columns. Then ^,() = ^_1,_1(_1,ρ)·^_2,_2(_2,ρ) and we get the claimed result. If θ∈ such that |θ|<|1/ρ_i| for any i∈, then R([1-θ]) = ∏_i∈1-ρ_i^2^θ/1-ρ_i^2θ^2^1/2 Furthermore, if θ∈[-1,1] then R([1-θ]) ≤exp-θ(1-θ)I_XY, where I_XY = -1/2∑_i∈log1-ρ_i^2. ([1-θ],ρ) = 1-ρ^2θ-ρ(1-θ)-ρ(1-θ)1-ρ^2θ The eigenvalues of this matrix are 1-ρ^2θ-ρ(1-θ) and 1-ρ^2θ+ρ(1-θ). They are both strictly positive if and only if |θ|<|1/ρ|. Their product equals (1-ρ^2)(1-ρ^2θ^2). Plugging in this value in the denominator of 1-ρ_i^2^||+||-∑_(u,v)Θ_u,v/(,ρ_i) as given in [lemma:computationGenFunc]Lemma <ref>lemma:computationGenFunc gives us the exact expression for R([1-θ]). By [lemma:push2exp]Lemma <ref>lemma:push2exp, (1-ρ^2θ^2)≥(1-ρ^2)^θ^2 if θ∈[-1,1], which gives us R([1-θ]) ≤∏_i∈1-ρ_i^2^θ(1-θ)/2 This gives us the upper bound for R([1-θ]). Let ∈{0,1}^× some binary matrix with row sums and columns sums at most 1. Let n denote the sum of its entries. Then R((1-θ)) = ∏_i∈1-ρ_i^2^θ/1-ρ_i^2θ^2^n/2 Furthermore, if θ∈[-1,1] then R([1-θ]) ≤exp-nθ(1-θ)I_XY, where I_XY = -1/2∑_i∈log1-ρ_i^2. (1-θ) has at most 1 non-zero entry in each row and in each column. Then it can be arranged to have block diagonal form where each non-zero block on the diagonal has size 1 and is equal to [1-θ]. By [lemma:blockGenFunc]Lemma <ref>lemma:blockGenFunc, R((1-θ)) decomposes into the product of R([1-θ])^n and R() where is the zero block of the block diagonal decomposition. By [lemma:computationGenFunc]Lemma <ref>lemma:computationGenFunc, R()=1. The expression for R([1-θ] is given by [lemma:rOne2One]Lemma <ref>lemma:rOne2One. §.§ Generating function evaluated for cycles and even paths Given ν∈[0,2], let m_1 and m_2 be two mappings of size n such that = ν/2(_1 + _2) be a matrix block corresponding to a cycle of the type given in [fig:decomposition2]Fig. <ref>fig:decomposition2-I. log R()/I_XY ≤ -n/2ν(2-ν)+n·ρ_max^2(ν-1) where I_XY = -1/2∑log1-ρ_i^2, mutual information between correlated features, and ρ_max = max |ρ_i|, the largest correlation coefficient under the canonical form. To keep track of _1 and _2, let us write = νθ_1 + ν(1-θ)_2 such that the two matrices have distinct coefficients. Without loss of generality, let have its rows and columns arranged such that = ν[ θ 1-θ 0 ⋯ 0 0; 0 θ 1-θ ⋯ 0 0; 0 0 θ ⋯ 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 ⋯ θ 1-θ; 1-θ 0 0 ⋯ 0 θ; ]. has ||=|m_1|=n rows and ||=|m_1|=n columns. 
Recall the definition of (,ρ) as given in [lemma:computationGenFunc]Lemma <ref>lemma:computationGenFunc: (,ρ) 1-ρ^2+ρ^2·()-ρ-ρ^⊤ρ^2·(^⊤) Define the _1 and _2 to be the matrices that form the diagonal blocks of (,ρ) = _1-ρ-ρ^⊤_2: _1 1-ρ^2 + ρ^2(1) = 1-ρ^2(1-ν) _2 1-ρ^2 + ρ^2(^⊤1) = 1-ρ^2(1-ν) Then (,ρ) = _1_2 - ρ^2^⊤_1^-1 = 1-ρ^2(1-ν)^n1-ρ^2(1-ν) - ρ^2/1-ρ^2(1-ν)^⊤ = 1-ρ^2(1-ν)^2 - ρ^2^⊤ where _2 - ρ^2^⊤_1^-1 = 1-ρ^2(1-ν) - ρ^2/1-ρ^2(1-ν)^⊤ is the Schur complement of _1. We can write ^⊤ = ν^21-2θ(1-θ)+2ν^2θ(1-θ) where 1/2[ 0 1 0 ⋯ 0 0 1; 1 0 1 ⋯ 0 0 0; 0 1 0 ⋯ 0 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 0 0 0 ⋯ 0 1 0; 0 0 0 ⋯ 1 0 1; 0 0 0 ⋯ 0 1 0 ] and has eigenvalues cos2kπ/n, k∈{0,1,2,⋯,n-1}. These are all between -1 and 1. Furthermore, the sum of the eigenvalues of is ()=0 Thus the eigenvalues of 1-ρ^2(1-ν)^2 - ρ^2^⊤ are 1-ρ^2(1-ν)^2 - ρ^2ν^21-2θ(1-θ)+2θ(1-θ)cos2kπ/n. By [lemma:minProd]Lemma <ref>lemma:minProd with τ = 1-ρ^2(1-ν)^2 - ρ^2ν^21-2θ(1-θ) and σ = -2ρ^2ν^2θ(1-θ) (,ρ) = 1-ρ^2(1-ν)^2 - ρ^2^⊤ ≥1-ρ^2(1-ν)^2 - ρ^2ν^2 + 4ρ^2ν^2θ(1-θ)^n/21-ρ^2(1-ν)^2 - ρ^2ν^2^n/2 = 1-ρ^2(1-ν)^2 - ρ^2ν^2(1-2θ)^2^n/21-ρ^2^n/21-ρ^2(1-ν)^2^n/2 This last expression is maximized over θ by picking θ = 1/2. Then, the inequality becomes the inequality above becomes (,ρ) ≥1-ρ^2(1-ν)^n1-ρ^2^n/21-ρ^2(1-ν)^2^n/2. Since ν∈[0,2](1-ν)^2∈[0,1] by [lemma:push2exp]Lemma <ref>lemma:push2exp, we have the bound 1-ρ^2(1-ν)^2 ≥1-ρ^2^(1-ν)^2. Furthermore, since -(1-ν)∈[0,1], we have the bound 1-ρ^2(1-ν) ≥1+ρ^2^-(1-ν). Then, log(,ρ) ≥n/21+(1-ν)^2log1-ρ^2-n(1-ν)log1+ρ^2 R() has 1/2||+||-∑_(u,v)Θ_u,v = n/2(1+1-ν) = n/2(2-ν) multiplicative terms of 1-ρ^2 in the numerator and our bound has n/41+(1-ν)^2 of them in the denominator, which gives, in total, n/4ν(2-ν) + n/2(1-ν) such terms. Then log R() = n/4ν(2-ν)log1-ρ^2 - n/2(ν-1)log1+ρ^2+log1-ρ^2 By [lemma:compareEpsilon]Lemma <ref>lemma:compareEpsilon, -1/2∑log1-ρ_i^2+log1+ρ_i^2 is upper bounded by -1/2∑ρ_i^2log1-ρ_i^2, which itself is upper bounded by ρ_max^2 I_XY where ρ_max = max |ρ_i| and I_XY = -1/2∑log1-ρ_i^2. This gives us the claimed result. Given ν∈[1,2], let m_1 and m_2 be two mappings of size n such that = ν/2(_1+_2) be a matrix block corresponding to a cycle of the type given in [fig:decomposition2]Fig. <ref>fig:decomposition2-II. log R()/I_XY≤ -n/2·ν (2-ν) - (ν-1)^2ν^2 + 6n·ρ_max^2(ν-1) where I_XY = -1/2∑log1-ρ_i^2, mutual information between correlated features, and ρ_max = max |ρ_i|, the largest correlation coefficient under the canonical form. Without loss of generality, let have its rows and columns arranged such that = ν[ θ 1-θ 0 ⋯ 0 0 0; 0 θ 1-θ ⋯ 0 0 0; 0 0 θ ⋯ 0 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 0 0 0 ⋯ θ 1-θ 0; 0 0 0 ⋯ 0 θ 1-θ; ]. has ||=|m_1|=n rows and ||=|m_1|+1=n+1 columns. Define the _1 and _2 to be the matrices that form the diagonal blocks of (,ρ) = _1-ρ-ρ^⊤_2: _1 1-ρ^2 + ρ^2(1) = 1-ρ^2(1-ν) _2 1-ρ^2 + ρ^2(^⊤1) = 1-ρ^2(1-ν) - ρ^2ν(1-θ,0,0,⋯,0,θ) Then (,ρ) = 1-ρ^2^n 1-ρ^2(1-ν) - ρ^2ν(1-θ,0,0,⋯,0,θ) - ρ^2/1-ρ^2(1-ν)^⊤ where _2 - ρ^2^⊤_1^-1 = 1-ρ^2(1-ν) - ρ^2ν(1-θ,0,0,⋯,0,θ) - ρ^2/1-ρ^2(1-ν)^⊤ is the Schur complement of _1. Define another auxilliary matrix 1/2ν^2θ(1-θ)^⊤ - 1- 2θ(1-θ)ν^2 + ν1-ρ^2(1-ν)(1-θ,0,0,⋯,0,θ) such that the Schur complement of _1 can be expressed as 1-ρ^2(1-ν) - ρ^2ν(1-θ,0,0,⋯,0,θ) - ρ^2/1-ρ^2(1-ν)^⊤ = 1-ρ^21-ν-ρ^2ν^21-2θ(1-θ)/1-ρ^2(1-ν)-2ρ^2ν^2θ(1-θ)/1-ρ^2(1-ν) It can be shown that 1/2[ 1 1 0 ⋯ 0 0 0; 1 0 1 ⋯ 0 0 0; 0 1 0 ⋯ 0 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 0 0 0 ⋯ 0 1 0; 0 0 0 ⋯ 1 0 1; 0 0 0 ⋯ 0 1 1 ] + ν(1-ν)(1-ρ^2)(1-θ,0,0,⋯,0,θ). We have ν(1-ν)max{θ,1-θ}∈[-1,0]. 
Then, is an irreducible non-negative square matrix with row sums at most 1. Consequently, by the Perron-Frobenius theorem, its eigenvalues are all between -1 and 1. Then the eigenvalues of the Schur complement of _1 are between 1-ρ^2(1-ν)-ρ^2ν^2/1-ρ^2(1-ν) and 1-ρ^2(1-ν)-ρ^2ν^2(1-2θ)^2/1-ρ^2(1-ν). Furthermore, the sum of the eigenvalues of is ()=1+ν(1-ν)(1-ρ^2). Multiplying the n+1 eigenvalues of by -2ρ^2ν^2θ(1-θ)/1-ρ^2(1-ν) and adding 1-ρ^2(1-ν)-ρ^2ν^21-2θ(1-θ)/1-ρ^2(1-ν) gives us the n+1 eigenvalues of the Schur complement of _1. Now we plug in θ=1/2. By [lemma:minProd]Lemma <ref>lemma:minProd, the determinant of the Schur complement of _1 is lower bounded by 1-ρ^2(1-ν)^n/21-ρ^2(1-ν)-ρ^2ν^2/1-ρ^2(1-ν)^n/2+1+ν(1-ν)1-ρ^2. Then, multiplying the determinant of the Schur complement of _1 by _1 = 1-ρ^2(1-ν)^n gives us (,ρ) ≥1-ρ^2(1-ν)^n-11-ρ^2(1-ν)^2-ρ^2ν^2^n/2+1+ν(1-ν)1-ρ^2 = 1-ρ^2(1-ν)^n-1-ν(1-ν)1-ρ^21-ρ^21-ρ^2(1-ν)^2^n/2+1+ν(1-ν)1-ρ^2. Since ν∈[0,2](1-ν)^2∈[0,1] by [lemma:push2exp]Lemma <ref>lemma:push2exp, we have the bound 1-ρ^2(1-ν)^2 ≥1-ρ^2^(1-ν)^2. Furthermore, since -(1-ν)∈[0,1], we have the bound 1-ρ^2(1-ν) ≥1+ρ^2^-(1-ν). Then, log(,ρ) ≥n/2+1+ν(1-ν)1-ρ^21+(1-ν)^2log1-ρ^2 -(n-1-ν(1-ν)1-ρ^2)(1-ν)log1+ρ^2. R() has 1/2||+||-∑_(u,v)Θ_u,v = 1/2(n+n+1-ν n) = n/2(2-ν)+1/2 multiplicative terms of 1-ρ^2 in the numerator and our bound has 1/2n/2+1+ν(1-ν)1-ρ^21+(1-ν)^2 of them in the denominator, which gives, in total, n/4ν(2-ν) - n-1/2(ν-1)+1/2ν(ν-1)^31-ρ^2-1/2ν(ν-1)ρ^2 such terms. Then log R() ≤1/2n/2ν(2-ν)+ν^2(ν-1)^2 - ν^2(ν-1)^2ρ^2-ν(ν-1)ρ^2log1-ρ^2 - ν-1/2(n-1)+ν(ν-1)1-ρ^2log1-ρ^2+log1+ρ^2 For the multi-dimensional case, since under the canonical form all dimensions are mutually independent, we get log R() by simply summing this expression for all ρ_i. This gives us log R() ≤n/2ν(2-ν)+ν^2(ν-1)^2∑_i∈1/2log1-ρ_i^2 - (ν-1)ν^2(ν-1)+ν∑_i∈1/2ρ_i^2log1-ρ_i^2 -(ν-1)(n-1)∑_i∈1/2log1-ρ_i^2+log1+ρ_i^2 + (ν-1)^2ν∑_i∈1/2ρ_i^2log1-ρ^2+log1+ρ^2 Given ν≥ 1, the last term is non-positive, and therefore we can drop it while maintaining the inequality. By [lemma:compareEpsilon]Lemma <ref>lemma:compareEpsilon, -1/2∑log1-ρ_i^2+log1+ρ_i^2 is upper bounded by -1/2∑ρ_i^2log1-ρ_i^2, which itself is upper bounded by ρ_max^2 I_XY where ρ_max = max |ρ_i| and I_XY = -1/2∑log1-ρ_i^2. This gives us log R()/I_XY≤ -n/2·ν (2-ν) - (ν-1)^2ν^2 + n·ρ_max^2(ν-1) + ρ_max^2(ν-1)^2(ν^2+1) Since n≥ 1 and ν∈[1,2], we have (ν-1)^2(ν^2+1) ≤ 5n(ν-1). This gives us the claimed result. § STATISTICS OF THE INFORMATION DENSITY MATRIX Recall the definition of : Let f_XY, f_X and f_Y denote the joint and marginal distributions for correlated features in and . G_u,v = logf_XY((v),(v))/f_X((u))f_Y((v)) The expressions for the first and second moments of in terms of the correlation vector as defined in [sec:canonical]Appendix <ref>sec:canonical are given below. 
[(a)] * Mean and variance of information density of a true pair: G_u,v|uMv = ∑_i∈ - 1/2log1-ρ_i^2 G_u,v|uMv = ∑_i∈ρ_i^2 * Mean and variance of information density of a false pair: G_u,v'|uMv' = ∑_i∈ - 1/2log1-ρ_i^2 - ρ_i^2/1-ρ_i^2 G_u,v'|uMv' = ∑_i∈ρ_i^2(1+ρ_i^2)/1-ρ_i^2^2 * Covariance between information density of a true match and a false match with a user in common: G_u,v,G_u,v'|uMv = ∑_i∈ -ρ_i^4/2(1-ρ_i^2) * Covariance between information density of two false matches with a user in common: G_u,v',G_u,v”|uMv = ∑_i∈ρ_i^4/41-ρ_i^2^2 * Covariance between information density of two false matches that break apart two true matches: G_u_2,v_1,G_u_1,v_2|u_1Mv_1,u_2Mv_2 = ∑_i∈ρ_i^4/1-ρ_i^2 * Covariance between information density of two false matches that break apart one true match: G_u_2,v_1,G_u_1,v_2|u_1Mv_1,u_2Mv_2 = ∑_i∈ρ_i^4(1+ρ_i^2)/21-ρ_i^2^2 * Covariance between information density of two matches that do not break apart any true match: G_u_2,v_1,G_u_1,v_2|u_1Mv_1,u_2Mv_2 = 0 Assume and are given in canonical form with correlation vector ∈[-1,1]^. Then logf_XY(,)/f_X()f_Y() = log∏_i f_X_iY_i(x_i,y_i)/∏_i f_X_i(x_i)∏_if_Y_i(y_i) = ∑_i∈logf_X_iY_i(x_i,y_i)/f_X_i(x_i)f_Y_i(y_i) = ∑_i∈log1/2π√(1-ρ_i^2)exp-1/2x_iy_i^⊤1ρρ1^-1x_iy_i/1/√(2π)exp-x_i^2/2·1/√(2π)exp-y_i^2/2 = ∑_i∈ -1/2log1-ρ_i^2-ρ_i^2(x_i^2+y_i^2)-2ρ_ix_iy_i/2(1-ρ_i^2) = ∑_i∈ -1/2log1-ρ_i^2-ρ_i^2(x_i-y_i)^2/2(1-ρ_i^2)+ρ_ix_iy_i/1+ρ_i Then G_u,v can be written as the sum of || independent random variables, each a function of only A_i(u),B_i(v) and ρ_i. For the rest of this section we assume ||=1 and drop all subscripts i for simplicity of notation. The means, variances or covariances for the case ||>1 can be found by summing over the corresponding expression for different values of ρ. For the derivations (a) to (d), assume uMv and let X A(u), Y B(v). Then (X,Y)∼,1ρρ1. Define Z Y-ρ X/√(1-ρ^2). Then Y = ρ X + √(1-ρ^2)Z. Also define W B(v'). Then W, X and Z are i.i.d. standard normal random variables. [(a)] * G_u,v|uMv and G_u,v|uMv: We want to find the mean and variance of -1/2log1-ρ^2-ρ^2(X-Y)^2/2(1-ρ^2)+ρ XY/1+ρ. The first term is constant. We find the mean, variance and covariance of the two other terms. X-Y = (1-ρ)X - √(1-ρ^2)Z ∼(0,2(1-ρ)), which implies that (X-Y)^2/2(1-ρ)∼χ^2(1) and therefore ρ^2(X-Y)^2/2(1-ρ^2) has mean ρ^2/1+ρ and variance 2ρ^4/(1+ρ)^2. XY = ρ X^2 + √(1-ρ^2)XZ. Then XY has mean ρ. (XY-ρ)^2 = ρ^2X^4 + 2ρ√(1-ρ^2)X^3Z + (1-ρ^2)X^2Z^2 - 2ρ^2 X^2 - 2ρ√(1-ρ^2)XZ +ρ^2 (XY) = [(XY-ρ)^2] = 3ρ^2 + (1-ρ^2) -2ρ^2 + ρ^2 = 1+ρ^2 So ρ XY/1+ρ has mean ρ^2/1+ρ and variance ρ^2(1+ρ^2)/(1+ρ)^2. ((X-Y)^2,XY) = (X-Y)^2-2(1-ρ)XY-ρ (X-Y)^2-2(1-ρ)XY-ρ = -2(XY-ρ)^2 -2(XY-ρ) + (X^2+Y^2)(XY-ρ) = -2(XY-ρ)^2 -2(XY-ρ) + X^3Y+XY^3-ρ X^2 -ρ Y^2 ((X-Y)^2,XY) = -2(XY) + 2[X^3Y] - 2ρ[X^2] = -2(1+ρ^2) +2 ρ X^4 + √(1-ρ^2)X^3Z - 2ρ = -2-2ρ^2 + 6ρ - 2ρ = -2(1-ρ)^2 The covariance between ρ^2(X-Y)^2/2(1-ρ^2) and ρ XY/1+ρ equals -ρ^3(1-ρ)/(1+ρ)^2. Then G_u,v|uMv = -1/2log1-ρ^2-ρ^2(X-Y)^2/2(1-ρ^2)+ρ XY/1+ρ = -1/2log1-ρ^2 - ρ^2/1+ρ + ρ^2/1+ρ = -1/2log1-ρ^2 G_u,v|uMv = ρ^2(X-Y)^2/2(1-ρ^2) + ρ XY/1+ρ - 2ρ^2(X-Y)^2/2(1-ρ^2),ρ XY/1+ρ = 2ρ^4/(1+ρ)^2 + ρ^2(1+ρ^2)/(1+ρ)^2 + 2ρ^3(1-ρ)/(1+ρ)^2 = ρ^2+2ρ^3+ρ^4/(1+ρ)^2 = ρ^2 * G_u,v'|uMv' and G_u,v'|uMv': Once again, we find the mean, variance and covariance of the random terms in -1/2log1-ρ^2-ρ^2(X-W)^2/2(1-ρ^2)+ρ XW/1+ρ. X-W∼(0,2), which implies that (X-W)^2/2∼χ^2(1) and therefore ρ^2(X-W)^2/2(1-ρ^2) has mean ρ^2/1-ρ^2 and variance 2ρ^4/1-ρ^2^2. [XW]=0 and (XW) = [X^2W^2] = [X^2][W^2]=1. 
Then ρ XW/1+ρ has mean 0 and variance ρ^2/(1+ρ)^2. ((X-W)^2,XW) = (X-W)^2-2XW (X-W)^2-2XW = X^3W + XW^3 - 2X^2W^2 - 2XW ((X-W)^2,XW) = -2 The covariance between ρ^2(X-W)^2/2(1-ρ^2) and ρ XY/1+ρ equals -ρ^3/(1+ρ)(1-ρ^2). Then G_u,v'|uMv' = -1/2log1-ρ^2-ρ^2(X-W)^2/2(1-ρ^2)+ρ XW/1+ρ = -1/2log1-ρ^2 - ρ^2/1-ρ^2 G_u,v'|uMv' = ρ^2(X-W)^2/2(1-ρ^2) + ρ XW/1+ρ - 2ρ^2(X-W)^2/2(1-ρ^2),ρ XW/1+ρ = 2ρ^4/1-ρ^2^2 + ρ^2/(1+ρ)^2 + 2ρ^3/(1+ρ)(1-ρ^2) = 2ρ^4+ρ^2(1-ρ)^2 + 2ρ^3(1-ρ)/1-ρ^2^2 = ρ^2(1+ρ^2)/1-ρ^2^2 * G_u,v,G_u,v'|uMv: We want to find the covariance between -1/2log1-ρ^2-ρ^2(X-Y)^2/2(1-ρ^2)+ρ XY/1+ρ and -1/2log1-ρ^2-ρ^2(X-W)^2/2(1-ρ^2)+ρ XW/1+ρ. We've already shown that [(X-Y)^2] = 2(1-ρ), [XY]=ρ, [(X-W)^2] = 2 and [XW]=0. * ρ^2(X-Y)^2/2(1-ρ^2),ρ^2(X-W)^2/2(1-ρ^2) = ρ^4/2(1+ρ)^2: (X-Y)^2(X-W)^2 = X(1-ρ)-Z√(1-ρ^2)^2(X-W)^2 = (1-ρ)^2X^4 -2W(1-ρ)+Z√(1-ρ^2)(1-ρ)X^3 + W^2(1-ρ)^2+Z^2(1-ρ^2)X^2 -4X^2WZ(1-ρ)√(1-ρ^2) -2W^2Z(1-ρ)√(1-ρ^2)+WZ^2(1-ρ^2)X + W^2Z^2(1-ρ^2) [(X-Y)^2(X-W)^2] = 3(1-ρ)^2+ (1-ρ)^2 + (1-ρ^2) + (1-ρ^2) = 6 - 8ρ + 2ρ^2 (X-Y)^2,(X-W)^2 = [(X-Y)^2(X-W)^2] - [(X-Y)^2][(X-W)^2] = 6 - 8ρ + 2ρ^2 - 4 + 4ρ = 2(1-ρ)^2 * ρ XY/1+ρ,ρ^2(X-W)^2/2(1-ρ^2) = ρ^4/(1+ρ)^2(1-ρ): XY(X-W)^2 = Xρ X + √(1-ρ^2)ZX^2-2XW+W^2 = ρ X^4 + √(1-ρ^2)Z-2ρ WX^3 + ρ W^2-2√(1-ρ^2)WZX^2 + √(1-ρ^2)XZW^2 [XY(X-W)^2] = 3ρ+ρ = 4ρ XY,(X-W)^2 = [XY(X-W)^2] - [XY][(X-W)^2] = 4ρ - 2ρ = 2ρ * ρ^2(X-Y)^2/2(1-ρ^2),ρ XW/1+ρ = 0: [(X-Y)^2XW] = [(X-Y)^2X][W] = 0 (X-Y)^2,XW = [(X-Y)^2XW]-[(X-Y)^2][XW] = 0 * ρ XY/1+ρ,ρ XW/1+ρ = 0: [XYXW] = [X^2Y][W] = 0 XY,XW = [XYXW]-[XY][XW] = 0 Then G_u,v,G_u,v'|uMv = ρ^2(X-Y)^2/2(1-ρ^2),ρ^2(X-W)^2/2(1-ρ^2) - ρ XY/1+ρ,ρ^2(X-W)^2/2(1-ρ^2) = ρ^4/2(1+ρ)^2 - ρ^4/(1+ρ)^2(1-ρ) = -ρ^4/2(1-ρ^2) * G_u,v',G_u,v”|uMv: We want to find the covariance between -1/2log1-ρ^2-ρ^2(X-W_0)^2/2(1-ρ^2)+ρ XW_0/1+ρ and -1/2log1-ρ^2-ρ^2(X-W_1)^2/2(1-ρ^2)+ρ XW_1/1+ρ. We've already shown that [(X-W_0)^2] = [(X-W_1)^2] = 2 and [XW_0]=[XW_1]=0. It can also be shown that ((X-W_0)^2,XW_1) = (XW_0,(X-W_1)^2) = (XW_0,XW_1) = 0. Then we only need to compute ρ^2(X-W_0)^2/2(1-ρ^2),ρ^2(X-W_1)^2/2(1-ρ^2). (X-W_0)^2,(X-W_1)^2 = X^4 - 2X^3(W_0+W_1) + X^2(W_0^2+W_1^2) - 4X^2W_0W_1 -2X(W_0^2W_1 + W_0W_1^2) + W_0^2W_1^2 [(X-W_0)^2,(X-W_1)^2] = 3 + 1 + 1 = 5 ((X-W_0)^2,(X-W_1)^2) = [(X-W_0)^2,(X-W_1)^2] -[(X-W_0)^2][(X-W_1)^2] = 1 Then G_u,v',G_u,v”|uMv = ρ^2(X-W_0)^2/2(1-ρ^2),ρ^2(X-W_1)^2/2(1-ρ^2) = ρ^4/41-ρ^2^2 * G_u_2,v_1,G_u_1,v_2|u_1Mv_1,u_2Mv_2: Let X_k A(u_k), Y_k B(v_k) and Z_k Y_k-ρ X_k/√(1-ρ^2) for k={1,2}. Then X_1,X_2,Z_1,Z_2 are i.i.d. standard normal random variables and {X_2,Y_1} and {X_1,Y_2} are two pairs of independent standard normal variables. We want to find the covariance between -1/2log1-ρ^2-ρ^2(X_2-Y_1)^2/2(1-ρ^2)+ρ X_2Y_1/1+ρ and -1/2log1-ρ^2-ρ^2(X_1-Y_2)^2/2(1-ρ^2)+ρ X_1Y_2/1+ρ. We know that [(X_2-Y_1)^2]=[(X_2-Y_1)^2]=2 and [X_2Y_1]=[X_1Y_2]=0. 
* ρ^2(X_2-Y_1)^2/2(1-ρ^2),ρ^2(X_1-Y_2)^2/2(1-ρ^2) = 0: (X_2-Y_1)^2(X_1-Y_2)^2 = X_2-ρ X_1 - √(1-ρ^2)Z_1^2X_1-ρ X_2 - √(1-ρ^2)Z_2^2 Ignoring the terms with an X_1,X_2,Z_1 or Z_2 factor of power 1, we get the following expansion: (X_2-Y_1)^2(X_1-Y_2)^2 = ρ^2(X_1^4+X_2^4)+(1+ρ^4-4ρ^2)X_1^2X_2^2 (1-ρ^2)(X_1^2Z_1^2+X_2^2Z_2^2) + ρ^2(1-ρ^2)(X_1^1Z_2^2+X_1^2Z_2^2) 1-ρ^2^2Z_1^2Z_2^2 + (⋯) [(X_2-Y_1)^2(X_1-Y_2)^2] = 6ρ^2 + (1+ ρ^4 - 4ρ^2) + 2(1-ρ^2) + 2ρ^2(1-ρ^2) + 1-ρ^2^2 = 4 (X_2-Y_1)^2,(X_1-Y_2)^2 = [(X_2-Y_1)^2(X_1-Y_2)^2] - [(X_2-Y_1)^2][(X_2-Y_1)^2] = 0 * ρ^2(X_2-Y_1)^2/2(1-ρ^2),ρ X_1Y_2/1+ρ = ρ X_2Y_1/1+ρ,ρ^2(X_1-Y_2)^2/2(1-ρ^2) = -ρ^5/(1+ρ)^2(1-ρ) (X_2-Y_1)^2X_1Y_2 = X_2-ρ X_1 - √(1-ρ^2)Z_1^2X_1ρ X_2 + √(1-ρ^2) Z_2 Ignoring the terms with an X_1,X_2,Z_1 or Z_2 factor of power 1, we get the following expansion: (X_2-Y_1)^2X_1Y_2 = -2ρ^2X_1^2X_2 + (⋯) [(X_2-Y_1)^2X_1Y_2] = -2ρ^2 (X_2-Y_1)^2,X_1Y_2 = [(X_2-Y_1)^2X_1Y_2] - [(X_2-Y_1)^2][X_1Y_2] = -2ρ^2 * ρ X_2Y_1/1+ρ,ρ X_1Y_2/1+ρ = ρ^4/(1+ρ)^2 [X_2Y_1X_1Y_2] = [X_1Y_1][X_2Y_2] = ρ^2 (X_2Y_1,X_1Y_2) = [X_2Y_1X_1Y_2] - [X_2Y_1][X_1Y_2] = ρ^2 Then G_u_2,v_1,G_u_1,v_2|u_1Mv_1,u_2Mv_2 = ρ^2(X_2-Y_1)^2/2(1-ρ^2),ρ^2(X_1-Y_2)^2/2(1-ρ^2) - 2ρ^2(X_2-Y_1)^2/2(1-ρ^2),ρ X_1Y_2/1+ρ + ρ X_2Y_1/1+ρ,ρ X_1Y_2/1+ρ = 0 + 2ρ^5/(1+ρ)^2(1-ρ) + ρ^4/(1+ρ)^2 = ρ^4/1-ρ^2 * G_u_2,v_1,G_u_1,v_2|u_1Mv_1,u_2Mv_2: Let X A(u_1), Y B(v_1) and Z Y-ρ X/√(1-ρ^2). Furthermore let W_x A(u_2) and W_y B(v_2). Then W_x,W_y,X and Z are i.i.d. standard normal random variables. We want to find the covariance between -1/2log1-ρ^2-ρ^2(W_x-Y)^2/2(1-ρ^2)+ρ W_xY/1+ρ and -1/2log1-ρ^2-ρ^2(X-W_y)^2/2(1-ρ^2)+ρ XW_y/1+ρ. We know that [(W_x-Y)^2]=[(X-W_y)^2]=2 and [W_xY]=[XW_y]=0. It can also be shown that ((W_x-Y)^2,XW_y) = (W_xY,(X-W_y)^2) = (W_xY,XW_y) = 0. Then we only need to compute ρ^2(W_x-Y)^2/2(1-ρ^2),ρ^2(X-W_y)^2/2(1-ρ^2). (W_x-Y)^2(X-W_y)^2 = W_x-ρ X - √(1-ρ^2) Z^2(X-W_y)^2 Ignoring the terms with an W_x,W_y,X or Z factor of power 1, we get the following expansion: (W_x-Y)^2(X-W_y)^2 = ρ^2 X^4 + X^2W_x^2 + ρ^2X^2W_y^2 + (1-ρ^2)X^2Z^2 + W_x^2W_y^2 + (1-ρ^2)Z^2W_y^2 + (⋯) [(W_x-Y)^2(X-W_y)^2] = 3ρ^2 + 1 + ρ^2 + (1-ρ^2) + 1 + (1-ρ^2) = 2(1+ρ^2) Then G_u_2,v_1,G_u_1,v_2|u_1Mv_1,u_2Mv_2 = ρ^2(W_x-Y)^2/2(1-ρ^2),ρ^2(X-W_y)^2/2(1-ρ^2) = ρ^4(1+ρ^2)/21-ρ^2^2 * G_u_2,v_1,G_u_1,v_2|u_1Mv_1,u_2Mv_2 Conditioned on u_1Mv_1,u_2Mv_2, (A(u_2),B(u_1)) is independent from (A(u_1),B(u_2)). Since G_u_2,v_1 is a function of the former and G_u_1,v_2 a function of the latter with no additional randomness, it follows that G_u_2,v_1 and G_u_1,v_2 are independent and therefore have no correlation. Under [cond:highDimensional]Condition <ref>cond:highDimensional, [(a)] * Mean and variance of information density of a true pair: G_u,v|uMv = I_XY G_u,v|uMv = 2I_XY(1-o(1)) * Mean and variance of information density of a false pair: G_u,v'|uMv' = -I_XY(1+o(1)) G_u,v'|uMv' = 2I_XY(1+o(1)) Exact expressions for the statistics are given in [lemma:stats]Lemma <ref>lemma:stats. The expression for G_u,v|uMv is exactly equal to the expression for mutual information I_XY = -1/2∑log1-ρ_i^2. By the Taylor series expansion of log, given x∈(0,1), -1/xlog(1-x) = 1/x∑_k=1^∞x^k/k < 1/x∑_k=1 x^k = 1/1-x So -log(1-x) < x/1-x. Furthermore, the limit of -1/xlog(1-x) and the limit of 1/1-x are both 1 as x→ 0. Then, given x≤ o(1), x/1-x = -(1+o(1))log(1-x). Then ∑ρ_i^2/1-ρ_i^2 = 2I_XY(1+o(1)), which gives us G_u,v'|uMv' = I_XY - 2I_XY(1+o(1)) = -I_XY(1+o(1)). 
Finally, since -log(1-x) < x/1-x < x(1+x)/(1-x)^2 and the limits of -1/xlog(1-x) and (1+x)/(1-x)^2 are both 1 as x→ 0, we have ∑ρ_i^21+ρ_i^2/1-ρ_i^2^2 = 2I_XY(1+o(1)), which gives us the last part of the claim. So ρ_i^2/1-ρ_i^2 > -log1-ρ_i^2. § OTHER LEMMAS Let a∈(-1,∞). Then * (1+ax)>(1+a)^x if x∈(0,1). * (1+ax)< (1+a)^x if x∈∖[0,1]. * If |ax|≤ o(1), then |1+ax/(1+a)^x-1|≤ o(1). Let f(x) (1+a)^x-(1+ax). f(0)=f(1)=0. The first derivative is given by f'(x)=(1+a)^xlog(1+a)-a. f'(0) is strictly negative for x=0 since log(1+a)<a and f'(1) is strictly positive for x=1 since log(1+a)≥a/1+a. Furthermore the second derivative f”(x) = (1+a)^xlog^2(1+a) is strictly positive. Then this function has a global minimum at some x^*∈(0,1), is strictly decreasing for x<x^* and strictly increasing for x>x^*. Then f(x)<0 for x∈(0,1) and f(x)>0 for x∈(-∞,0)∩(1,∞). The Taylor series expansion for f(x) at x=0 is given by f(x) =-ax + ∑_k=1^∞x^klog^k(1+a)/k!, which is -o(1) + o(1) if |ax|≤ o(1). Then |1+ax/(1+a)^x-1|≤ o(1). The Taylor series expansion for f(x) at x=1 is given by f(x) =-a(x-1) + (1+a)∑_k=1^∞(x-1)^klog^k(1+a)/k!, which is -o(1) + (1+a)o(1) if |a(x-1)|≤ o(1). Then |1+ax/(1+a)^x-1|≤ o(1). Let τ∈(0,∞), σ∈(-τ,τ), x_1,⋯,x_n∈[-1,1] and s = ∑ x_i. Then ∏_i=1^n (τ+σ x_i) ≥τ-σ^n-s/2τ+σ^n+s/2 Define θ_i=x_i+1/2∈[0,1]. By the concavity of log, logτ+σ x_i = logθ_i(τ+σ)+(1-θ_i)(τ-σ) ≥θ_ilog(τ+σ)+(1-θ_i)log(τ-σ) = x_i/2logτ+σ/τ-σ+1/2log(τ^2-σ^2) Then ∏_i=1^n (τ+σ x_i) ≥exp∑_i=1^n x_i/2logτ+σ/τ-σ+1/2log(τ^2-σ^2) = exps/2logτ+σ/τ-σ+n/2log(τ^2-σ^2) = τ+σ^n+s/2τ-σ^n-s/2 Let ρ_maxmax_i |ρ_i| and I_XY -1/2∑_i log1-ρ_i^2. Then -∑_i log1-ρ_i^2 + log1+ρ_i^2≤∑_i ρ_i^2log1-ρ_i^2≤ρ_max^2 I_XY First we show that, for any x∈[0,1], -xlog(1-x) ≥ -log(1-x)-log(1+x): By the Taylor series expansion, -xlog1-x = x∑_k=1^∞x^k/k = ∑_k=2^∞x^k/k-1 = ∑_k=1^∞x^2k/2k-1 + x^2k+1/2k For x∈[0,1], x^2k/2k-1 + x^2k+1/2k≥ x^2k1/2k-1 + 1/2k = x^2k/k1+1/2(2k-1), which is strictly greater than x^2k/k for any k>1/2. Then -xlog1-x = ∑_k=1^∞x^2k/2k-1 + x^2k+1/2k > ∑_k=1^∞x^2k/k (*) -log1-x^2 = -log1-x -log1+x where (*) follows from the Taylor series expansion. Consequently, we have -ρ^2log1-ρ^2≥ - log1-ρ^2 - log1+ρ^2 for any ρ∈[-1,1]. The result follows from the fact that ∑ρ_i^2log1-ρ_i^2≤max_i ρ_i^2 log1-ρ_i^2.
http://arxiv.org/abs/2307.00929v1
20230703110904
Variable selection in a specific regression time series of counts
[ "Marina Gomtsyan" ]
stat.ME
[ "stat.ME" ]
Université Paris-Saclay, AgroParisTech, INRAE, UMR MIA Paris-Saclay, 91120 Palaiseau, France Time series of counts occurring in various applications are often overdispersed, meaning their variance is much larger than the mean. This paper proposes a novel variable selection approach for processing such data. Our approach consists in modelling them using sparse negative binomial GLARMA models. It combines estimating the autoregressive moving average (ARMA) coefficients of GLARMA models and the overdispersion parameter with performing variable selection in regression coefficients of Generalized Linear Models (GLM) with regularised methods. We describe our three-step estimation procedure, which is implemented in the package. We evaluate the performance of the approach on synthetic data and compare it to other methods. Additionally, we apply our approach to RNA sequencing data. Our approach is computationally efficient and outperforms other methods in selecting variables, i.e. recovering the non-null regression coefficients. Variable selection in a specific regression time series of counts M. Gomtsyan August 1, 2023 ================================================================= § INTRODUCTION In recent years, the interest in the study of count time series has increased. These series represent a record of the number of occurrences of events over time and, consequently, are nonnegative and integer-valued. They find practical applications in various fields, such as the contagion dynamics of COVID-19 in epidemiology <cit.>, the number of transactions in stocks in finance <cit.>, and RNA sequencing (RNA-Seq) kinetics data in molecular biology <cit.>. Count time series require special treatment since many continuous models cannot interpret discrete data <cit.>. In addition, as mentioned in <cit.>, count time series are often overdispersed, i.e. the variance is larger than the mean. One can capture the overdispersed nature of such data with negative binomial distribution models. In particular, they efficiently interpret RNA-Seq data <cit.>. Numerous models exist for count time series, with a detailed review in <cit.>. These models can be grouped into two main classes: Integer Autoregressive Moving Average (INARMA) and generalized state space models. McKenzie in <cit.> and Al-Osh and Alzaid in <cit.> were the first to study the Integer Autoregressive process (INAR(1)). Later in <cit.> it was extended to pth order process. The Integer-valued Moving Average (INMA) process is introduced in <cit.>. Integer-valued generalized autoregressive conditional heteroskedasticity (INGARCH) models that can handle overdispersion are studied in <cit.> and <cit.>. An advantage of INARMA processes is their autocorrelation structure, which is similar to the one of the autoregressive moving average (ARMA) models. However, the statistical inference in INAR models is more complex, as explained in <cit.>. It requires intensive computational approaches, such as the efficient MCMC algorithm in <cit.>, developed for INARMA processes of known AR and MA orders. We refer the reader to <cit.> for further details on INARMA models. Generalized state space models, introduced in <cit.>, are one of the most commonly used approaches for time series analysis <cit.>. These models can be classified as parameter-driven and observation-driven models. The main difference between these two model groups is that the state vector evolves independently of past observations in parameter-driven models. 
In contrast, in observation-driven models, the state vector depends on the past history of the observations. An overview of parameter-driven models can be found in <cit.>. In <cit.>, the Poisson log-liner regression is introduced, which in <cit.> is extended to the case where observations are assumed to have a distribution from the exponential family. In <cit.>, Davis and Wu considered a negative binomial model, where the serial dependence is introduced through a dependent latent process in the link function. Despite the simple construction of these models, the parameter estimation in parameter-driven models is computationally expensive, as explained in <cit.>. The observation-driven models do not suffer from this computational downside. Following the introduction in <cit.>, they were further studied in <cit.>. In the literature, there are two types of observation-driven models: the Generalized Linear Autoregressive Moving Average (GLARMA) models introduced in <cit.> and further studied in <cit.>, <cit.>, <cit.> and the Poisson autoregressive models studied in <cit.>, <cit.> and <cit.>. Note that GLARMA models cannot be seen as a particular case of the log-linear Poisson autoregressive models. In this paper, we will consider the negative binomial GLARMA model introduced in <cit.> with additional covariates. More precisely, given the past history ℱ_t-1=σ(Y_s,s≤ t-1), we assume that Y_t|ℱ_t-1∼NB(μ_t^⋆, α^⋆), where NB(μ, α) denotes the negative binomial distribution with mean μ and overdispersion parameter α. In (<ref>), μ_t^⋆=exp(W_t^⋆) with W_t^⋆=∑_i=0^pβ_i^⋆ x_t,i+Z_t^⋆. Here the x_t,i's represent the p regressor variables (p≥ 1) and Z_t^⋆=∑_j=1^q γ_j^⋆ E_t-j^⋆ with E_t^⋆=Y_t-μ_t^⋆/μ_t^⋆ + μ_t^⋆^2/α^⋆, where 1≤ q≤∞ and E_t^⋆=0 for all t≤ 0. The E_t^⋆'s correspond to the working residuals in classical Generalized Linear Models (GLM). There are several types of residuals but in our model we consider score-type residuals, as proposed in <cit.>. It is important to mention that when q=∞, (Z_t^⋆) satisfies the ARMA-like recursions provided in Equation (4) of <cit.>. The resulting model defined by (<ref>), (<ref>) and (<ref>) is the negative binomial GLARMA model. The main goal of this paper is to introduce a novel approach for variable selection in the deterministic part (covariates) of sparse negative binomial GLARMA models defined in Equations (<ref>), (<ref>) and (<ref>). Here the vector of the β_i^⋆'s is sparse, i.e. many β_i^⋆’s are null, and thus only a few regressor variables are explanatory. The novel approach that we propose consists in combining a procedure for estimating the ARMA part coefficients (to take into account the temporal dependence that may exist in the data) with regularised methods designed for GLM, as those proposed in <cit.> and <cit.>. The existing variable selection approaches for discrete data, such as <cit.>, do not consider temporal dependence. Our approach can be applied in modelling RNA-Seq time series data in molecular biology. With RNA-Seq, it is possible to count the numbers of RNA fragments present in a biological sample. Linking these RNA fragments to genes allows for determining the expression level of genes as integer counts. As explained in <cit.>, non-coding genes are potential key regulators of the expression of coding genes. In this framework, only a few among many non-coding genes are likely to be involved in explaining the expression of the coding genes. 
Since, as discussed earlier, the nature of RNA-Seq data is captured well with negative binomial models, a variable selection approach for sparse negative binomial GLARMA models can be efficient in identifying the relevant non-coding genes. The paper is organised as follows. Firstly, in Section <ref>, we describe the properties of the likelihood of negative binomial GLARMA models. Secondly, in Section <ref> we propose a novel three-stage estimation procedure. It consists in first estimating the ARMA coefficients, then in estimating the regression coefficients by using a regularized approach, and estimating overdispersion parameter with a maximum likelihood approach. The algorithmic implementation of the methodology is given in Section <ref>. Next, in Section <ref>, we provide some numerical experiments on simulated data in order to illustrate our method and to compare its performance to the regularized methods designed for GLM of <cit.>. Finally, in Section <ref>, we illustrate our method on RNA-Seq data that follows the temporal evolution of gene expression. § VARIABLE SELECTION IN SPARSE NEGATIVE BINOMIAL GLARMA MODELS In this section we introduce our variable selection approach in sparse negative binomial GLARMA models. We start by discussing the properties of the likelihood of negative binomial GLARMA models in Section <ref>. Next, in Section <ref> we explain how our approach estimates the parameters of the model. We conclude by the description of the algorithm of our methodology in Section <ref>. §.§ Properties of the likelihood of negative binomial GLARMA models As stated in <cit.>, the probability mass function of negative binomial distribution is f(Y_t | W_t, α) = Γ(α + Y_t)/Γ(α) Γ(Y_t+1)(α/α + μ_t)^α(μ_t/α + μ_t)^Y_t. Note that it converges to the Poisson probability mass function when α→∞. Let us consider the parameter δ^⋆=(β^⋆',γ^⋆'), where u^' denotes the transpose of the vector u, β^⋆=(β_0^⋆,β_1^⋆,…,β_p^⋆)' represents the vector of regressor coefficients defined in (<ref>), and γ^⋆=(γ_1^⋆,…,γ_q^⋆)' is the vector of the ARMA part coefficients defined in (<ref>). Inspired by <cit.>, we will estimate δ^⋆ by maximizing with respect to δ=(β',γ'), with β=(β_0,β_1,…,β_p)' and γ=(γ_1,…,γ_q)' the following criterion based on the conditional log-likelihood: L(δ, α) = ∑_t=1^n ( logΓ(α + Y_t) - logΓ(Y_t+1) -logΓ(α) + αlogα + Y_t W_t - (α + Y_t) log(α + exp(W_t)) ). In (<ref>), W_t(δ, α)=β'x_t+Z_t(δ, α)=β_0+∑_i=1^pβ_i x_t,i+∑_j=1^q γ_j E_t-j(δ, α), with x_t=(x_t,0,x_t,1,…,x_t,p)', x_t,0=1 for all t and E_t(δ, α)=Y_texp(-W_t(δ, α))-1/1 + exp(W_t(δ, α))/α,t>0E_t(δ, α)=0t≤ 0. To obtain δ defined by δ= δmax L(δ, α), we consider the first derivatives of L: ∂ L/∂δ(δ, α) = ∑_t=1^n( Y_t ∂ W_t(δ, α)/∂δ - (α + Y_t) exp(W_t(δ, α))/α + exp(W_t(δ, α))∂ W_t/∂δ(δ, α)) = ∑_t=1^n( Y_t - (α + Y_t) exp(W_t(δ, α))/α + exp(W_t(δ, α))) ∂ W_t/∂δ , where ∂ W_t/∂δ(δ, α)=∂β' x_t/∂δ+∂ Z_t/∂δ (δ, α), β, x_t and Z_t being given in (<ref>). The computations of the first derivatives of W_t are detailed in Appendix <ref>. The Hessian of L can be obtained as follows: ∂^2 L/∂δ'∂δ(δ, α) = ∑_t=1^n ( Y_t - (α + Y_t) exp(W_t(δ, α))/α + exp(W_t(δ, α))) ∂^2 W_t/∂δ'∂δ(δ, α) - ∑_t=1^n (α + Y_t) exp(W_t(δ, α))/α + exp(W_t(δ, α))( 1 - exp(W_t(δ, α))/α + exp(W_t(δ, α))) ∂ W_t/∂δ'(δ, α)∂ W_t/∂δ(δ, α). The details for computing the second derivative of W_t are given in Appendix <ref>. 
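For concreteness, the recursive evaluation of this conditional log-likelihood can be sketched in a few lines of code. The following sketch is written for this exposition only (the function name and the use of NumPy/SciPy are ours, and Python is used purely for illustration rather than the R implementation mentioned in the abstract); it computes W_t and the score-type residuals E_t recursively and accumulates L(δ, α) as defined above.

```python
import numpy as np
from scipy.special import gammaln

def glarma_loglik(beta, gamma, alpha, Y, X):
    """Conditional log-likelihood of the negative binomial GLARMA model.

    Y : (n,) array of counts, X : (n, p+1) design matrix (first column = 1),
    beta : (p+1,) regression coefficients, gamma : (q,) ARMA-part coefficients,
    alpha : overdispersion parameter.
    """
    n, q = len(Y), len(gamma)
    E = np.zeros(n)          # score-type residuals, with E_t = 0 for t <= 0
    L = 0.0
    for t in range(n):
        # Z_t = sum_{j=1}^{q ^ (t-1)} gamma_j * E_{t-j}, past residuals only
        Z_t = sum(gamma[j] * E[t - 1 - j] for j in range(min(q, t)))
        W_t = X[t] @ beta + Z_t
        mu_t = np.exp(W_t)
        E[t] = (Y[t] * np.exp(-W_t) - 1.0) / (1.0 + mu_t / alpha)
        L += (gammaln(alpha + Y[t]) - gammaln(Y[t] + 1) - gammaln(alpha)
              + alpha * np.log(alpha) + Y[t] * W_t
              - (alpha + Y[t]) * np.log(alpha + mu_t))
    return L
```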
Since in the sparse framework, with many components of β^⋆ being null, this procedure provides poor estimation results, we devised a novel estimation procedure described in the next section. §.§ Parameter estimation and variable selection To select the most relevant elements of β^⋆, we propose a three-stage procedure. Firstly, we estimate γ^⋆ by using the Newton-Raphson algorithm described in Section <ref>. Next, we estimate β^⋆ by using the regularized approach outlined in Section <ref>. Finally, we estimate α^⋆ by a maximum likelihood approach as explained in Section <ref>. Additionally, in Section <ref> we explain how to guarantee the robustness of the selected variables. §.§.§ Estimation of γ^⋆ In order to obtain the estimate of γ^⋆, we propose using γ=γmax L(β^(0)',γ', α^(0)), where L is defined in (<ref>), β^(0)=(β_0^(0),…,β_p^(0))' and α^(0) are given initial estimations of β^⋆ and α^⋆, respectively, and γ=(γ_1,…,γ_q)'. In Section <ref> we explain how we choose these initial values. We use the Newton-Raphson algorithm to obtain γ. For r ≥ 1, starting from the initial value γ^(0)=(γ_1^(0),…,γ_q^(0))': γ^(r)=γ^(r-1)-∂^2 L/∂γ'∂γ(β^(0)',γ^(r-1)', α^(0))^-1∂ L/∂γ(β^(0)',γ^(r-1)', α^(0)), where the first and second derivatives of L are obtained using the same strategy as the one used for deriving Equations (<ref>) and (<ref>) in Section <ref>. §.§.§ Variable selection: Estimation of β^⋆ In order to obtain a sparse estimator of the β_i^⋆'s in Model (<ref>), we use a regularized variable selection approach proposed in <cit.> for fitting generalized linear models. To perform variable selection in the β_i^⋆'s of Model (<ref>), in other words, to obtain a sparse estimator of β^⋆, we shall use a methodology inspired by <cit.> for fitting generalized linear models. This approach penalises with ℓ_1 penalties a quadratic approximation to the log-likelihood obtained by a Taylor expansion. Using β^(0), γ, and α^(0) defined in Section <ref> the quadratic approximation is obtained as follows: L(β) :=L(β_0,…,β_p,γ, α^(0)) =L(β^(0)) +∂ L/∂β(β^(0),γ, α^(0))(β-β^(0)) +1/2 (β-β^(0))' ∂^2 L/∂β∂β'(β^(0),γ, α^(0)) (β-β^(0)), where ∂ L/∂β=(∂ L/∂β_0,…,∂ L/∂β_p) and ∂^2 L/∂β∂β'=(∂^2 L/∂β_j ∂β_k)_0≤ j,k≤ p. Hence we get, L(β)=L(β^(0))+∂ L/∂β(β^(0),γ, α^(0)) U(ν-ν^(0))-1/2 (ν-ν^(0))' Λ (ν-ν^(0)), where UΛ U' is the singular value decomposition of the positive semidefinite symmetric matrix -∂^2 L/∂β∂β'(β^(0),γ, α^(0)) and ν-ν^(0)=U'(β-β^(0)). In order to obtain a sparse estimator β of β^⋆, we use the criterion β(λ) defined by β(λ)=Argmin_β{-L_Q(β)+λβ_1}, for a positive λ, where β_1=∑_k=0^p |β_k| and L_Q(β) denotes the quadratic approximation of the log-likelihood. This quadratic approximation is defined by -L_Q(β)=1/2𝒴-𝒳β_2^2, where 𝒴=Λ^1/2U'β^(0) +Λ^-1/2U'(∂ L/∂β(β^(0),γ, α^(0)))' , 𝒳=Λ^1/2U', with ·_2 denoting the ℓ_2 norm in ℝ^p+1. The detailed computations for obtaining the expression (<ref>) of L_Q(β) are provided in Section <ref>. §.§.§ Estimation of α^⋆ To estimate α^⋆ we shall use a maximum likelihood approach in the classical GLM model, as described in <cit.>, meaning that in (<ref>) the ARMA part is ignored. In the GLM model we take the design matrix X composed of regressor variables x_t,i, for 1 ≤ t ≤ n and i such that the corresponding β̂_i was estimated to be non-null in the variable selection step. §.§.§ Stability selection In order to guarantee the robustness of the selected variables, we use the stability selection approach by <cit.> for obtaining the final estimator β of β^⋆. 
The idea of stability selection is the following. The vector 𝒴 defined in (<ref>) is randomly split into a number of subsamples of size (p+1)/2, corresponding to half of the length of 𝒴. In our numerical experiments the number of subsamples is equal to 1000. For each subsample 𝒴^(s) and the corresponding design matrix 𝒳^(s), we apply Criterion (<ref>) with a given λ and by replacing 𝒴 and 𝒳 with 𝒴^(s) and 𝒳^(s), respectively. For each subsampling, we store the indices i of the non-null β_i. In the end, we calculate the frequency of index selection, namely the number of times each i was selected divided by the number of subsamples. For a given threshold, in the final set of selected variables, we keep the ones whose indices have a frequency larger than this threshold. Concerning the choice of λ, we consider the smallest element of the grid of λ provided by the R package, called 𝗌𝗌_𝗆𝗂𝗇 in the following. It is also possible to use the λ obtained by cross-validation (Chapter 7 of <cit.>), called 𝗌𝗌_𝖼𝗏 in the following. §.§ Description of the algorithm The algorithmic implementation of the methodology can be summarised as follows: * For β^(0) we take the estimator of β^⋆ obtained by fitting a GLM to the observations Y_1,…,Y_n, thus ignoring the ARMA part of the model. For α^(0), we take the ML estimate of α^⋆ of the same GLM model. For γ^(0), we take the null vector. * We use the recursion defined in (<ref>) with the initialization (β^(0),γ^(0), α^(0)) obtained in the previous step and we stop at the iteration R such that γ^(R)-γ^(R-1)_∞<10^-6. * To obtain a sparse estimator of β^⋆, we use Criterion (<ref>), where β^(0), γ, and α^(0) appearing in (<ref>) are replaced by β^(0), γ^(R), and α^(0) obtained in the previous steps. We get the indices i by using the stability selection approach described in Section <ref>. * We fit a GLM to the observations Y_1,…,Y_n and the design matrix X, in which we leave only the columns corresponding to the indices i that we got in the previous step. We obtain β and α as the final estimates of β^⋆ and α^⋆. This procedure can be improved by iterating the , , and steps. More precisely, let us denote by β_1, γ^(R_1), and α_1 the values of β, γ^(R), α obtained in the four steps described above at the first iteration. At the second iteration, we replace (β^(0),γ^(0), α^(0)) appearing in the step with (β_1,γ^(R_1), α_1) and continue the steps. At the end of this second iteration, β_2, γ^(R_2) and α_2 denote the obtained values of β, γ^(R), and α, respectively. This approach is iterated until the stabilisation of γ^(R_k). § NUMERICAL EXPERIMENTS In this section we study the performance of our method, which is implemented in the R package available on the CRAN (Comprehensive R Archive Network), using synthetic data generated from the model defined by (<ref>), (<ref>) and (<ref>). We study its performance in terms of support recovery, which is the identification of the non null coefficients of β^⋆, and the estimation of γ^⋆ and α^⋆. We generate observations Y_1,…,Y_n satisfying the model in (<ref>), (<ref>) and (<ref>) with covariates chosen in a Fourier basis defined by x_t,i=cos(2 π i t f/n), when i=1, …, [p/2] and x_t,i = sin(2π i t f/n), when i=[p/2] + 1, …, p, with t = 1, …, n and f=0.7, where [x] denotes the integer part of x. We consider different settings, where we vary the number of observations n and q, namely the length of the γ^⋆ vector. More precisely, in our experiments n takes values in {150, 250, 500, 1000} and q in {1,2}. 
When q=1, γ^⋆=0.5 and when q=2, γ^⋆=(0.5, 0.25). The value of p is fixed to be 100 with 5% sparsity level (only 5% of the coefficients in β^⋆ is not zero). The non-null values of β^⋆ range from -0.64 to 1.73. We take α^⋆ = 2, in order to ensure that the standard deviation of the observations is much larger than the mean. In each setting we performed 10 simulations with 4 iterations of the algorithm. In the following, we shall see that the estimation results stabilise starting from the second iteration. Hence there is no need to have more than four iterations. §.§ Estimation of the support of β^⋆ In this section, we evaluate the performance of the proposed approach in terms of support recovery of β^⋆. To do so, we calculate the TPR (True Positive Rates, namely the proportion of non-null coefficients correctly estimated as non null) and FPR (False Positive Rates, namely the proportion of null coefficients estimated as non null). Figure <ref> shows the error bars of the difference of TPR and FPR with respect to different thresholds of the stability selection step presented in Section <ref>. Here, we consider both the estimation with 𝗌𝗌_𝗆𝗂𝗇 and 𝗌𝗌_𝖼𝗏. Additionally, we perform variable selection with the classical Lasso approach proposed by <cit.>. As for the λ parameter of Lasso, we either take the λ of standard cross-validation (𝗅𝖺𝗌𝗌𝗈_𝖼𝗏) or the λ that maximises the difference between TPR and FPR (𝗅𝖺𝗌𝗌𝗈_𝖻𝖾𝗌𝗍). Note that in practice it is impossible to obtain the results of 𝗅𝖺𝗌𝗌𝗈_𝖻𝖾𝗌𝗍. From Figure <ref> we can see that our approach, both with 𝗌𝗌_𝗆𝗂𝗇 and 𝗌𝗌_𝖼𝗏, outperforms 𝗅𝖺𝗌𝗌𝗈_𝖼𝗏 and 𝗅𝖺𝗌𝗌𝗈_𝖻𝖾𝗌𝗍 when the threshold is 0.6 and larger. In particular, the best result of 𝗌𝗌_𝗆𝗂𝗇 and 𝗌𝗌_𝖼𝗏 are reached with the threshold 0.7 and 0.6, respectively. This figure presents results only in one simulation setting that we considered. The averages of the differences of TPR and FPR with corresponding standard deviations in all other settings are presented in Table <ref>. Here, for each dataset we show the results obtained with the threshold for which the difference of TPR and FPR is the largest. Similar to Figure <ref>, in all datasets 𝗌𝗌_𝗆𝗂𝗇 and 𝗌𝗌_𝖼𝗏 give better results than 𝗅𝖺𝗌𝗌𝗈_𝖼𝗏 and 𝗅𝖺𝗌𝗌𝗈_𝖻𝖾𝗌𝗍. Although the results of 𝗌𝗌_𝗆𝗂𝗇 and 𝗌𝗌_𝖼𝗏 are quite similar, in the majority of cases 𝗌𝗌_𝖼𝗏 gives slightly better results than 𝗌𝗌_𝗆𝗂𝗇. Hence, in the study of estimation of other parameters we will focus on the results of 𝗌𝗌_𝖼𝗏. Depending on the application, it might be of an interest to have TPR as large as possible, or on the contrary, to minimise the FPR. Based on the objective, one can choose the optimal threshold by looking at TPR and FPR separately. In Figure <ref> we illustrate the error bars of TPR and FPR of the same dataset as in Figure <ref>. For example, if the aim is to have an estimation with the smallest possible FPR, instead of taking the threshold 0.6 in 𝗌𝗌_𝖼𝗏 one can take the threshold 0.7. The TPR with this threshold is still larger than the ones of 𝗅𝖺𝗌𝗌𝗈_𝖼𝗏 and 𝗅𝖺𝗌𝗌𝗈_𝖻𝖾𝗌𝗍, whereas the FPR is smaller. The averages of TPR and FPR with corresponding standard deviations in all other settings are presented in Table <ref> in Appendix <ref>. In Figure <ref> we illustrate how the error bars of the difference between the TPR and FPR depend on n and q. As it can be expected, the methodology has better performance when there are more observations in the dataset and it has always better results than 𝗅𝖺𝗌𝗌𝗈_𝖼𝗏 . 
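For reference, the TPR and FPR used throughout this section can be computed directly from the true and estimated supports. The small helper below is ours and simply makes the two definitions explicit, with a coefficient counted as selected whenever its estimate is non-null.

```python
import numpy as np

def tpr_fpr(beta_true, beta_hat):
    """TPR: proportion of non-null coefficients correctly estimated as non-null.
    FPR: proportion of null coefficients estimated as non-null."""
    true_support = beta_true != 0
    est_support = beta_hat != 0
    tpr = np.sum(est_support & true_support) / np.sum(true_support)
    fpr = np.sum(est_support & ~true_support) / np.sum(~true_support)
    return tpr, fpr
```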
§.§ Estimation of γ^⋆ and α^⋆ This section is dedicated to the estimation of γ^⋆ and α^⋆ with our methodology. All the results are obtained by the 𝗌𝗌_𝖼𝗏 approach and in each setting we chose the threshold from Table <ref>. Figure <ref> illustrates the impact of n on the estimation of γ^⋆ when q=2. Similar to the results in the previous section, the estimation improves when n increases and the estimations of both γ_1 and γ_2 are closer to the true values. Iterating the algorithm has positive effects: the estimation of later iterations is better than the estimation at the first iteration. Figure <ref> demonstrates the estimation of α^⋆ in the settings with two different values of q. While for smaller values of n α^⋆ is overestimated, the results are very close to the true value for n=500 and n=1000, both for q=1 and q=2. Once again, iterating the algorithm improves and stabilises the estimation. §.§ Numerical performance Figure <ref> displays the means of the computational times of our methodology in the simulation frameworks discussed previously. We present only the results of 𝗌𝗌_𝖼𝗏 since they are identical to the ones of 𝗌𝗌_𝗆𝗂𝗇. The timings were obtained on a workstation with 32GB of RAM and Intel Core i7-9700 (3.00GHz) CPU. For a given threshold and one iteration the algorithm needs less than one minute to process a dataset when n=1000, p=100 and q=2. Moreover, it is slightly faster when q is smaller. Clearly, when n is smaller, the algorithms needs less time to execute. § APPLICATION TO RNA-SEQ TIME SERIES DATA With RNA sequencing (RNA-Seq) it is possible to identify and count the numbers of RNA fragments present in a biological sample. Linking these RNA fragments to genes allows determining the expression level of genes as integer counts. Over the past decades, advances in RNA-Seq analysis have revealed that many eukaryotic genomes were transcribed outside of protein-coding genes. These new transcripts have been named non-coding RNAs (ncRNAs, <cit.>) as opposed to coding RNAs, which code for proteins. Among these ncRNAs, long non-coding RNAs (lncRNAs) are a heterogeneous group of RNA molecules regulating genome expression. The purpose of this application is to identify the lncRNAs, the expression of which affects the expression of coding genes, by using the temporal evolution of the expression of both coding genes and lncRNAs. For the application of our methodology, we consider 145 RNA-Seq time series of coding genes each having a length n=15. The purpose of the application is to find which lncRNAs among p=95 affect the expression values of coding genes. Figure <ref> shows the relation between the log of the mean and the log of the variance of each RNA-Seq time series. As it can be seen, the variances of the observations are much larger than their means. In addition, the expression of coding genes are integer-valued, therefore we are modelling the RNA-Seq time series with a negative binomial GLARMA model. Strictly speaking, for each coding gene, the time series is described by its expression (values) at 15 temporal points. In Model (<ref>), (<ref>), and (<ref>) the expression of a given coding gene at time t is denoted by Y_t with t=1, 2, …, n =15 and the expression of the jth lncRNAs at time t is denoted by x_j,t with j=1, 2, …, p =95. Our goal is to find which lncRNAs affect the values of (Y_t) for each coding gene. In other words, we aim at finding which β_k^⋆ are non null. 
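To make the selection step used in this application explicit, the stability selection described earlier can be sketched as follows. The sketch is ours: it assumes that the pseudo-response 𝒴 and the design 𝒳 of the quadratic approximation have already been formed, and it uses scikit-learn's Lasso as a stand-in for the R implementation, so the penalty `lam` is not on exactly the same scale as the λ of the original criterion.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(Y_q, X_q, lam, n_subsamples=1000, threshold=0.7, seed=0):
    """Selection frequencies over random half-subsamples of the rows of
    (Y_q, X_q); indices whose frequency exceeds `threshold` are kept."""
    rng = np.random.default_rng(seed)
    m, p1 = X_q.shape                     # p1 = p + 1 candidate coefficients
    counts = np.zeros(p1)
    for _ in range(n_subsamples):
        idx = rng.choice(m, size=m // 2, replace=False)
        fit = Lasso(alpha=lam, fit_intercept=False).fit(X_q[idx], Y_q[idx])
        counts += (fit.coef_ != 0)
    freq = counts / n_subsamples
    return np.flatnonzero(freq > threshold), freq
```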
§.§ Choice of the threshold In this section we conduct additional experiments for choosing the threshold of 𝗌𝗌 _ 𝖼𝗏 in our methodology. We consider simulated data in the specific context of this application with n=15 and p=95. We take the x_j,t corresponding to the gene expression data of the lncRNAs and generate the Y_t's by the model described in (<ref>), (<ref>), and (<ref>) with q=1, γ_1^⋆=0.5, α^⋆ = 2 and 5 non null coefficients in β^⋆. From Figure <ref> we can see that for the thresholds 0.5 and larger 𝗌𝗌_𝖼𝗏 outperforms 𝗅𝖺𝗌𝗌𝗈_𝖼𝗏 even in this high-dimensional framework with n being much smaller than p. The best results are obtained with the threshold 0.7. Hence, in the application we shall use this value. §.§ Results In Figure <ref> we present results for a sample of 10 coding genes. Our method selected 16 lncRNAs out of 95 as being relevant for explaining the expression of these 10 coding genes. In this figure a dot signifies the effect of the associated lncRNA on a given coding gene. That is, the coefficient β_k^⋆ corresponding to the lncRNA is estimated as non null. If the influence of a lncRNA on a given coding gene is negative, the dot is blue and if it is positive, the dot is red. The brighter the colour of the dot, the larger is the influence. For the 145 coding genes, there are in total 37 lncRNAs selected to be relevant. Figure <ref> displays the estimation of γ_1^⋆ obtained for the 10 series associated to the coding genes. We take q=1 (number of parameters in γ^⋆) since n is very small and it is unrealistic to expect better results for a larger q. After 4 iterations for the 10 coding genes, all the estimates of γ_1^⋆ converge to a value in the interval from -2.5 to 5. § ACKNOWLEDGEMENTS I would like to thank my PhD supervisors Céline Lévy-Leduc, Sarah Ouadah and Laure Sansonnet for their guidance and valuable comments. I am very grateful for their help, without which this work would not have been possible. § DETAILED COMPUTATIONS §.§ Computation of the first and second derivatives of W_t defined in (<ref>) §.§.§ Computation of the first derivatives of W_t By the definition of W_t given in (<ref>), we get ∂ W_t/∂δ(δ)=∂β' x_t/∂δ+∂ Z_t/∂δ (δ), where β, x_t and Z_t are defined in (<ref>). First we will calculate the derivatives of E_t defined in (<ref>). More precisely, for all k∈{0,…,p}, ℓ∈{1,…,q} and t∈{1,…,n} ∂ E_t/∂β_k = ( -Y_t ∂ W_t/∂β_kexp(-W_t) ) ·1/1 + exp(W_t)/α - ( Y_t exp(-W_t) - 1) ∂ W_t/∂β_k·exp(W_t) ·1/α(1 + exp(W_t)/α)^2 = ( -E_t - 1/1 + exp(W_t)/α - E_t exp(W_t)/α/1 + exp(W_t)/α) ∂ W_t/∂β_k = - ( E_t + 1 + E_t exp(W_t)/α/1 + exp(W_t)/α) ∂ W_t/∂β_k, ∂ E_t/∂γ_ℓ = ( -Y_t ∂ W_t/∂γ_ℓexp(-W_t) ) ·1/1 + exp(W_t)/α - ( Y_t exp(-W_t) - 1) ∂ W_t/∂γ_ℓ·exp(W_t) ·1/α(1 + exp(W_t)/α)^2 = ( -E_t - 1/1 + exp(W_t)/α - E_t exp(W_t)/α/1 + exp(W_t)/α) ∂ W_t/∂γ_ℓ = - ( E_t + 1 + E_t exp(W_t)/α/1 + exp(W_t)/α) ∂ W_t/∂γ_ℓ, and thus ∂ W_t/∂β_k =x_t,k+∂ Z_t/∂β_k=x_t,k+∑_j=1^q∧ (t-1)γ_j∂ E_t-j/∂β_k =x_t,k-∑_j=1^q∧ (t-1)γ_j(E_t-j + 1 + E_t-jexp(W_t-j)/α/1 + exp(W_t-j)/α)∂ W_t-j/∂β_k, ∂ W_t/∂γ_ℓ = E_t-ℓ +∂ Z_t/∂γ_ℓ = E_t-ℓ+∑_j=1^q∧ (t-1)γ_j∂ E_t-j/∂γ_ℓ =E_t-ℓ-∑_j=1^q∧ (t-1)γ_j(E_t-j + 1 + E_t-jexp(W_t-j)/α/1 + exp(W_t-j)/α)∂ W_t-j/∂γ_ℓ, where we used that E_t=0, ∀ t≤ 0. The first derivatives of W_t are thus obtained from the following recursive expressions. For all k∈{0,…,p} ∂ W_1/∂β_k =x_1,k, ∂ W_2/∂β_k =x_2,k-γ_1(E_1 + 1 + E_1 exp(W_1)/α/1 + exp(W_1)/α)∂ W_1/∂β_k, where W_1=β' x_1 and E_1=Y_1 - exp(W_1)/exp(W_1) + exp(W_1)^2/ α. 
Moreover, ∂ W_3/∂β_k=x_3,k-γ_1(E_2 + 1 + E_2 exp(W_2)/α/1 + exp(W_2)/α)∂ W_2/∂β_k-γ_2(E_1 + 1 + E_1 exp(W_1)/α/1 + exp(W_1)/α)∂ W_1/∂β_k, where W_2=β' x_2 +γ_1 E_1, E_2=Y_2 - exp(W_2)/exp(W_2) + exp(W_2)^2/ α, and so on. In the same way, for all ℓ∈{1,…,q} ∂ W_1/∂γ_ℓ =0, ∂ W_2/∂γ_ℓ =E_2-ℓ, ∂ W_3/∂γ_ℓ =E_3-ℓ-γ_1(E_2 + 1 + E_2 exp(W_2)/α/1 + exp(W_2)/α)∂ W_2/∂γ_ℓ and so on, where E_t=0, ∀ t≤ 0, and E_1 and E_2 are defined in (<ref>) and (<ref>), respectively. §.§.§ Computation of the second derivatives of W_t By using (<ref>) and (<ref>), we get that for all j,k∈{0,…,p}, ℓ,m∈{1,…,q} and t∈{1,…,n}, ∂^2 W_t/∂β_j∂β_k = - ∑_i=1^q∧ (t-1)γ_i ( E_t-i + 1 + E_t-iexp(W_t-i)/α/1 + exp(W_t-i)/α) ∂^2 W_t-i/∂β_k ∂β_j + ∑_i=1^q∧ (t-1)γ_i ( E_t-i + 2 E_t-iexp(2W_t-i)/α + Y_t-i/α( 1 + exp(W_t-i)/α)^2 + 1 - E_t-iexp(W_t-i)/α/1+exp(W_t-i)/α) ∂ W_t-i/∂β_j∂ W_t-i/∂β_k, ∂^2 W_t/∂γ_ℓ∂γ_m = ∂ E_t-ℓ/∂γ_m -(E_t-m + 1 + E_t-mexp(W_t-m)/α/1 + exp(W_t-m)/α)∂ W_t-m/∂γ_ℓ - ∑_i=1^q∧ (t-1)γ_i ( E_t-i + 1 + E_t-iexp(W_t-i)/α/1 + exp(W_t-i)/α) ∂^2 W_t-i/∂γ_ℓ∂γ_m + ∑_i=1^q∧ (t-1)γ_i ( E_t-i + 2 E_t-iexp(2W_t-i)/α + Y_t-i/α( 1 + exp(W_t-i)/α)^2 + 1 - E_t-iexp(W_t-i)/α/1+exp(W_t-i)/α) ∂ W_t-i/∂γ_ℓ∂ W_t-i/∂γ_m = - ( E_t-ℓ + 1 + E_t-ℓexp(W_t-ℓ)/α/1 + exp(W_t-ℓ)/α) ∂ W_t-ℓ/∂γ_m -(E_t-m + 1 + E_t-mexp(W_t-m)/α/1 + exp(W_t-m)/α)∂ W_t-m/∂γ_ℓ - ∑_i=1^q∧ (t-1)γ_i ( E_t-i + 1 + E_t-iexp(W_t-i)/α/1 + exp(W_t-i)/α) ∂ W_t-i^2/∂γ_ℓ∂γ_m + ∑_i=1^q∧ (t-1)γ_i ( E_t-i + 2 E_t-iexp(2W_t-i)/α + Y_t-i/α( 1 + exp(W_t-i)/α)^2 + 1 - E_t-iexp(W_t-i)/α/1+exp(W_t-i)/α) ∂ W_t-i/∂γ_ℓ∂ W_t-i/∂γ_m. To compute the second derivatives of W_t, we shall use the following recursive expressions for all j,k∈{0,…,p} ∂^2 W_1/∂β_j∂β_k =0, ∂^2 W_2/∂β_j∂β_k =γ_1 ( E_1 + 2 E_1 exp(2W_1)/α + Y_1/α( 1 + exp(W_1)/α)^2 + 1 - E_1 exp(W_1)/α/1+exp(W_1)/α) ∂ W_1/∂β_j∂ W_1/∂β_k, where E_1 is defined in (<ref>) and so on. Moreover, for all ℓ,m∈{1,…,q} ∂^2 W_1/∂γ_ℓ∂γ_m =0, ∂^2 W_2/∂γ_ℓ∂γ_m =0 and so on with E_t=0 for all t≤ 0 and the first derivatives of W_t computed in (<ref>). §.§ Computational details for obtaining Criterion (<ref>) By (<ref>), L(β)=L(β^(0))+∂ L/∂β(β^(0),γ, α) U(ν-ν^(0))-1/2 (ν-ν^(0))' Λ (ν-ν^(0)), where ν-ν^(0)=U'(β-β^(0)). Hence, L(β) =L(β^(0))+∑_k=0^p (∂ L/∂β(β^(0),γ, α) U)_k (ν_k-ν_k^(0)) -1/2∑_k=0^pλ_k (ν_k-ν_k^(0))^2 =L(β^(0))-1/2∑_k=0^pλ_k(ν_k-ν_k^(0)-1/λ_k(∂ L/∂β(β^(0),γ, α) U)_k)^2 +∑_k=0^p1/2λ_k(∂ L/∂β(β^(0),γ, α) U)_k^2, where the λ_k's are the diagonal terms of Λ. Since the only term depending on β is the second one in the last expression of L(β), we define L_Q(β) appearing in Criterion (<ref>) as follows: -L_Q(β) = 1/2∑_k=0^pλ_k(ν_k-ν_k^(0)-1/λ_k(∂ L/∂β(β^(0),γ, α) U)_k)^2 = 1/2Λ^1/2(ν-ν^(0)-Λ^-1(∂ L/∂β(β^(0),γ, α) U)' )_2^2 = 1/2Λ^1/2U'(β-β^(0))-Λ^-1/2 U' (∂ L/∂β(β^(0),γ, α))' _2^2 = 1/2Λ^1/2U'(β^(0)-β)+Λ^-1/2 U' (∂ L/∂β(β^(0),γ, α))'_2^2 = 1/2𝒴-𝒳β_2^2, where 𝒴=Λ^1/2U'β^(0) +Λ^-1/2U'(∂ L/∂β(β^(0),γ, α))' , 𝒳=Λ^1/2U'. § ADDITIONAL RESULTS chicago
http://arxiv.org/abs/2307.02697v1
20230706000614
Statistical Mechanics of Strahler Number via Random and Natural Language Sentences
[ "Kumiko Tanaka-Ishii", "Akira Tanaka" ]
cs.CL
[ "cs.CL", "physics.data-an" ]
Statistical Mechanics of Strahler Number via Random and Natural Language Sentences Kumiko Tanaka-Ishii Akira Tanaka ================================================================================================================================================================== The Strahler number was originally proposed to characterize the complexity of river bifurcation and has found various applications. This article proposes computation of the Strahler number's upper and lower limits for natural language sentence tree structures, which are available in a large dataset allowing for statistical mechanics analysis. Through empirical measurements across grammatically annotated data, the Strahler number of natural language sentences is shown to be almost always 3 or 4, similar to the case of river bifurcation as reported by Strahler (1957) and Horton (1945). From the theory behind the number, we show that it is the lower limit of the amount of memory required to process sentences under a particular model. A mathematical analysis of random trees provides a further conjecture on the nature of the Strahler number, revealing that it is not a constant but grows logarithmically. This finding uncovers the statistical basics behind the Strahler number as a characteristic of a general tree structure target. § INTRODUCTION The Strahler number <cit.> was originally introduced in the field of geography, as a measure of the complexity of river bifurcation. Curiously, Strahler found that almost any river in England has a constant value of 4 for this number. The Strahler number has been theorized to describe the statistical mechanics underlying a system that is characterized by bifurcation <cit.>. Apart from geography, the Strahler number has been applied to analyze the complexity of computation trees in computer program source code <cit.>. In particular, it was theorized to equal the minimum number of memory areas that are necessary for evaluation of a computation tree <cit.>. We believe that application of the Strahler number to another target here, namely, natural language sentences, can contribute to understanding both the mechanics that the number describes and the complexity of natural language. On the statistical mechanics side, curiously, previous studies on the Strahler number reported that it is almost 4 <cit.>. However, to the best of our knowledge, what this constant value signifies from a statistical mechanics perspective is not fully understood. In this work, we show that this number actually grows logarithmically with respect to a tree's size. Furthermore, the Strahler number for natural language sentences is not very different from that for random trees. In other words, it is actually the distribution of a system's scale range that produces a Strahler number of 4. As for the natural language side, natural language is another important complex system and has been subject to analyses by statistical mechanics methods. In particular, there have been many reports with respect to Zipf's law <cit.>; more recently, language text has been subject to various analyses of long memory via methods such as long-range correlation <cit.> and fluctuation analysis <cit.>. It has been theorized that grammatical structure lies behind such long memory <cit.>. Previous works on the structural characteristics of natural language sentences have focused on the cognitive load <cit.>.
<cit.> suggested a value of 3 to 5 for a “magical number” involved in cognitive short-term memory. In particular, by applying a particular sentence analysis method <cit.>, <cit.> indicated that human sentences require a maximum of four memory areas. However, these previous works did not describe the characteristics underlying the statistical mechanics of the data, which is complimentary to the understanding in cognitive science. As will be shown via the underlying theory, the Strahler number of human sentences shows a kind of lower limit on the amount of necessary memory to understand sentence structure. In this article, we provide a mathematical definition of this lower limit, and we show that the Strahler number of natural language sentences is almost 3 or 4. Although the Strahler number has such a limit, we show that it is not actually a constant; rather, it increases logarithmically with the sentence size. It has long been known, however, that sentence lengths can only take a certain range <cit.>, and this range is the main reason why the Strahler number is seemingly a constant. Furthermore, our work shows that this number is almost the same for all possible tree shapes of the same size, thus providing a signification that the potential “magical number” might not be so “magical,” by explaining its origin. § RELATED WORK This work is related to three fields as follows. The first involves the general history of the Strahler number <cit.>. It was known in the literature before Strahler <cit.>; however, we call it the “Strahler number” following convention. The Strahler number was fundamentally analyzed from a statistical mechanics viewpoint, in relation to the bifurcation ratio and area of a water field <cit.>. Meanwhile, it has found various applications besides river morphology, of which the most important is computer trees, as mentioned above. That theory is the basis of this article, as explained in the following section. The second genre of related work is measurement to characterize the complexity of natural language sentences. It has long been known that there is a bias in the branching direction, such as a right-branching preference in Indo-European (IE) languages <cit.>. This bias has been quantified in various ways, as excellently summarized in <cit.>. Another perspective is consideration of the modifier-modified distances within a sentence <cit.>. Through an analysis of 20 languages, <cit.> reported that the dependency distance is usually less than 4. More recent works have considered sentence structure as a whole. <cit.> showed how syntactic complexity in conversation converges between interlocutors within spans of topic episodes. <cit.> showed how word order can be argued to relate to linguistic complexity. The Strahler number provides another measure of the complexity of natural language sentences. The third type of related work involves the amount of short-term memory as studied in the field of cognitive science. Among early works, <cit.> showed that the number of chunks in short-term memory is 7 ± 2. <cit.> defined the complexity of dependency trees by their depth and argued that this depth is related to Miller's numbers. Beyond language, <cit.> argued that short-term memory is bounded by a “magical number” of 3 to 5. The exact nature of this short-term memory has been controversial, and there has not been an argument based on the statistical mechanics of random trees. 
Our work thus provides a novel approach by using the Strahler number and its mathematical theory for random trees. § STRAHLER NUMBER §.§ Definition Let t = (V,E) denote a rooted directed tree, where V is the set of nodes and E ∈ V × V is the set of edges. Each edge is directed from a parent to a child. Let T denote the set of finite rooted directed trees, and let n denote the number of nodes in a tree. Later, we will consider different sets as T: (1) dependency structures U (sec:ud), with U(n) denoting the subset with n nodes; (2) random binary structures R_2(n) (sec:random); and (3) random n-node trees R(n) (sec:nrandom), as defined later. Let a binary tree be one for which every inner node has two children. For a binary tree t, the Strahler number is defined in a bottom-up manner <cit.>. Every node v acquires a Strahler number S(v), and the Strahler number of the root is the Strahler number of the whole tree, S(t). The definition is given as follows: * For a leaf node v, S(v)=1. * For an inner node v, let the two child nodes be ch_1(v), ch_2(v). * If S(ch_1(v)) == S(ch_2(v)), then S(v) = S(ch_1(v)) + 1. * Otherwise, S(v) = max(ch_1(v), ch_2(v)). From this definition, the Strahler number is obviously unique for a given tree. fig:strahler shows an example tree with the values of S(v) indicated for every node v. For instance, the node with the green “3” has two children. As the child nodes' numbers are both 2, the parent node's number is 2+1=3. On the other hand, the root node with the purple number also has two children, one with a number of 3 (green) and the other with 2 (blue). Because the child nodes' numbers are different, the Strahler number of the root is max(3, 2) = 3. Through such bottom-up calculation, this tree's Strahler number is calculated as 3. §.§ Relation to Number of Memory Areas Required to Process Trees After the Strahler number's original definition to analyze river bifurcation in England, it was applied to analyze the complexity of computation trees <cit.>. A computation tree is produced from program code, which is parsed into a computation tree and then evaluated. For example, fig:ct shows a tree for a computation (i.e., program code) “1 + 2 * 3^4”. Parsing this program string generates the tree, which is then computed to yield 163. The question here is how much memory is necessary to get this result. The Strahler number is known to give the minimum number of memory areas for tree evaluation by the use of shift-reduce operations <cit.>, which constitute the simplest, most basic theory of computation tree evaluation. Here, we give a brief summary of these operations, with a more formal introduction given in sec:relations. A computation tree can be evaluated with the two operations of shift and reduce by using a memory system comprising a stack, which is a last-in, first-out (LIFO) data structure. A shift operation puts the data element of a tree leaf on the stack, and a reduce operation applies a functional operation (such as addition or multiplication) to the two elements at the top of the stack. For example, consider evaluation of the computation tree shown in fig:ct. For evaluation from the beginning of the tree, the required number of stack spaces is four, as shown in fig:ct-eval1. On the other hand, for evaluation from the end of the tree, the total number is reduced to two, as shown in fig:ct-eval2. As seen here, which leaf of the tree is evaluated first determines the necessary depth of the stack. 
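To make this relation concrete, the bottom-up definition and the minimum stack depth over all shift-reduce orders can be written down in a few lines. The following sketch is ours and is meant only as an illustration; it represents a binary tree as nested pairs, and min_stack_depth evaluates, at every node, the subtree that needs the larger stack first. One can check that it always returns the same value as strahler, in line with the result recalled below.

```python
def strahler(t):
    """Bottom-up Strahler number; a tree is a leaf (any non-tuple value)
    or a pair (left_subtree, right_subtree)."""
    if not isinstance(t, tuple):
        return 1
    s1, s2 = strahler(t[0]), strahler(t[1])
    return s1 + 1 if s1 == s2 else max(s1, s2)

def min_stack_depth(t):
    """Stack depth required by the cheapest shift-reduce order: the costlier
    subtree is evaluated first, its result occupying one slot while the other
    subtree is evaluated, before the final reduce."""
    if not isinstance(t, tuple):
        return 1                                   # a single shift
    a, b = min_stack_depth(t[0]), min_stack_depth(t[1])
    return min(max(a, b + 1), max(b, a + 1))

t = ("1", ("2", ("3", "4")))                       # shape of the tree for 1 + 2 * 3^4
print(strahler(t), min_stack_depth(t))             # -> 2 2
```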
Every shift-reduce gives a way to traverse a given computation tree, and each way requires a particular number of stack space uses. Thus, there is a particular way to traverse a tree by the shift-reduce method that requires a minimum number of stack spaces. This minimum number of stack spaces required to evaluate a computation tree equals the tree's Strahler number <cit.>, which is obvious from the definition of the shift-reduce method as given in sec:relations. If no self-referential expression is involved, then this number is also the minimum number required for analyzing program code in a sequence represented as a computation tree. This is because analysis of a program as a computation tree is yet another way to traverse the tree. To adapt this theory to natural language sentences, we can consider transformation of a sentence structure into a binary tree. The evaluation of this binary tree (into some kind of meaningful representation) uses a certain memory amount. In describing this amount with use of the shift-reduce method, the necessary number of stack spaces for evaluation is bounded by the Strahler number. Because analyzing a sentence is equivalent to traversing a binary tree, the tree's Strahler number gives the lower bound on the necessary number of stack spaces. This shift-reduce scheme is the simplest general method to deal with a sentence structure <cit.>. It has become a standard way to parse a sentence, and its use is an ongoing research topic <cit.>. Hence, knowledge of a sentence structure's Strahler number can give a lower-limit criterion for the amount of memory required to process the sentence structure. §.§ Strahler Number of Random Binary Trees with n Leaves: R_2(n) Before calculating the Strahler number of a sentence structure, we introduce the Strahler number of a random binary tree, which provides a good theoretical baseline. Let R_2(n) be the set of all binary trees with n leaves. The set's size |R_2(n)| is known to be given by a Catalan number, i.e., |R_2(n)|= 1/n_2n-2C_n-1 <cit.>. It was analytically shown by <cit.> that the mean Strahler number can be deductively described via approximately logarithmic growth with a base of four[ Precisely, <cit.> showed that the mean Strahler number is E[R_2(n)] = log_4 n + 1 - ∫_0^∞ (e^-t^2 H_4(t))( tF(logt+ 1/2logn ) + t/2logt)dt+o(1), where F is a continuous, periodic function having period 1, and H_4 is the fourth Hermite polynomial.], and the mean value obviously increases with the tree size n. Later, this theoretical fact will provide an important reference in understanding the complexity of natural language sentences. In addition to the Strahler number's mean behavior, its upper and lower limits can be considered. For a given set of trees, T, the upper/lower limits are respectively defined as the maximum/minimum Strahler numbers. Hence, we analytically consider the upper/lower limits for R_2(n). By the definition of the Strahler number, the upper limit is obviously acquired from a tree that is closest to a complete tree <cit.>, where the Strahler number equals the tree's maximum depth. Therefore, the upper limit for R_2(n) is ⌊log_2n⌋+1. On the other hand, the lower limit derives from the opposite case of a tree that is closest to a linear tree. Specifically, the lower limit is 1 for n=1, or 2 otherwise, because for n>1, there are two leaf nodes and all inner nodes thus have a Strahler number of 2. 
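These analytic statements are easy to check numerically. The dynamic program below, written for this exposition, counts the binary trees of R_2(n) by Strahler number; dividing by the Catalan number gives the exact distribution over R_2(n), whose mean indeed grows roughly like log_4 n, while the largest and smallest attainable values reproduce the upper limit ⌊log_2 n⌋+1 and the lower limit 2.

```python
from math import comb, log2

def strahler_counts(n_max):
    """N[n][s] = number of binary trees with n leaves and Strahler number s."""
    s_max = int(log2(n_max)) + 1                  # upper limit of the Strahler number
    N = [[0] * (s_max + 1) for _ in range(n_max + 1)]
    N[1][1] = 1
    for n in range(2, n_max + 1):
        for k in range(1, n):                     # k leaves in the left subtree
            for s in range(2, s_max + 1):
                left_lower = sum(N[k][t] for t in range(1, s))       # left child < s
                right_lower = sum(N[n - k][t] for t in range(1, s))  # right child < s
                N[n][s] += (N[k][s - 1] * N[n - k][s - 1]   # both children have s-1
                            + N[k][s] * right_lower          # left child s, right smaller
                            + left_lower * N[n - k][s])      # right child s, left smaller
    return N

n = 64
N = strahler_counts(n)
catalan = comb(2 * n - 2, n - 1) // n                        # |R_2(n)|
mean = sum(s * c for s, c in enumerate(N[n])) / catalan
largest = max(s for s, c in enumerate(N[n]) if c > 0)        # = floor(log2 n) + 1
print(mean, largest)                                         # mean grows roughly like log_4 n
```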
§ MEASUREMENT OF STRAHLER NUMBER OF SENTENCE STRUCTURE There have been two main paradigms in representing tree structures: phrase structure <cit.> and dependency structure <cit.>. Here, we use these terms under the most conventional definitions, but briefly, the former describes natural language sentences in a similar manner to a computation tree, as described above, where words are located at leaves and inner nodes describe the relations between words. On the other hand, the latter describes a tree structure as the modifier-modified relations among words. In other words, the inner nodes of the tree in the phrase structure paradigm are not words, whereas those in the dependency structure paradigm are words. In this article, we calculate the Strahler number with a dependency structure rather than a phrase structure, because a large amount of annotated data is available in a large number of languages, as with the data that will be described in sec:ud. Hence, the question here is how to calculate the Strahler number for every dependency tree. In sec:definition, the Strahler number was defined for a binary tree, whose inner nodes and leaf nodes are different, with only leaf nodes representing words. On the other hand, both the inner and leaf nodes of a dependency structure are words, with inner nodes having multiple child nodes for modifiers. Filling of the gap between the differences in these two settings would suggest only two directions: to transform dependency structures into binary phrase structure trees; or to extend the Strahler number by adapting it to the dependency structure. Regarding the latter direction, there have been previous attempts to extend the Strahler number to general trees with nodes having more than two children <cit.>. The method in that work extended the rule to count up the Strahler number at each bifurcation as described in sec:Strahler. However, we do not adopt this generalization, mainly because the theory around it is not established. The theory on computation trees would not apply easily; in addition, the analytical theory for random binary trees would not be easy to extend to general trees. Hence, in the following, we consider methods to transform dependency trees into binary trees to calculate the Strahler number. First, we explain two particular binarization methods. Later, in the experimental section (sec:exp), we show that these two methods yield very similar results with respect to the Strahler number. Second, we provide a method to acquire the upper and lower limits across any binarization method. The results for any particular binarization method fall within the range between the upper and lower limits, and the limits can be compared with those of random trees. §.§ Two Binarization Methods for Dependency Structure The transformation of a dependency structure to a phrase structure is not easy <cit.>, partly because the grammatical attribute of every inner node must be estimated, whereas the reverse transformation is relatively feasible <cit.>. Here, we want to effectuate this difficult transformation but without requiring any precise prediction of the attributes of inner nodes, as we want to calculate the Strahler number regardless of its specific value. We transform a given dependency structure via the following two methods: Binary1 Transformation by use of a manually crafted grammar <cit.>. Binary2 Transformation without a grammar, by use of heuristics. Binary1 derives from a grammar proposed by <cit.>. 
The grammar describes the degree of grammatical relation between the modifier and modified, and the dependency tree is binarized on the order of this degree. For an explanation of this grammar, see <cit.>. On the other hand, Binary2 binarizes a dependency structure via two simple heuristics based on a modifier's distance from the head. The two heuristics are as follows: (1) the farthest modifiers form deeper nodes in the tree; and (2) words before the head are allocated to the head's left, whereas those after are allocated to the right. Although these are heuristics, this method has a relation to the linguistic theory of center embedding of sentences. A binarization example is shown in samples, in which (a) shows the tree of an original dependency structure, and (b) and (c) show its binary-transformed phrase structure trees obtained with Binary1 and 2, respectively. As seen through these examples, the binarization methods each have pros and cons. Binary1 has an advantage in that the resulting tree structure reflects the correct sentence structure, but as mentioned above, its applicability is limited. On the other hand, Binary2 does not strictly reflect the sentence structure, but it is always applicable. After application of Binary1 and 2, each tree's Strahler number can be obtained by following the definition. §.§ Upper/Lower Limits of Strahler Number for Dependency Structures and Random Trees with n Nodes: R(n) Binary1 and 2 are examples of possible methods for transforming a dependency tree to a binary tree. Because the resulting Strahler number depends on the resulting set of trees acquired via the transformation method, we want to obtain the number's upper and lower limits for all possible binary transformation methods. In other words, a dependency tree u_x can be transformed into various binary trees by using some method under conditions that reflect the original dependency structure. Let U_x be the set of all binarized trees for a given u_x, where each element is a binary tree obtained with a particular binarization method. The upper and lower limits are the maximum and minimum sizes, respectively, in U_x. The details of obtaining the upper/lower limits are described in app:upperlower, but a summary is provided here. A binarization method constitutes a method to binarize each inner node v of a tree t. Binary1 and 2 are examples of different strategies, using a grammar or heuristics. At each inner node, there is a binarization method that maximizes or minimizes the Strahler number S(v). The maximizing method binarizes the subtree under v so that it becomes closer to a complete tree, whereas minimization makes the subtree closer to a linear tree. We showed a very similar argument in sec:maxminT2 for random trees. The maximum and minimum can be calculated inductively to acquire the Strahler number's respective upper and lower limits. Note that these limits are obtained while ignoring the word order and the constraint of non-intersection, because the maximum and minimum at each node v are difficult to compute under these constraints. Thus far, we have explained how to acquire the upper/lower limits for a particular tree u_x. We can also get the upper/lower limits across the u_x in a set of U. Specifically, for each subset U(n) of trees with n nodes, the mean upper/lower limits of U(n) can be computed. These upper and lower limits are comparable with those for the set of random binary trees, R_2(n), as mentioned in sec:random. 
Furthermore, apart from R_2(n), we can consider another set of random trees: all possible trees with n nodes, denoted as R(n). The mean upper/lower limits of R(n) for each n are also empirically computable by the same method described in this section. Because |R(n)| is also a Catalan number <cit.>, computation of the upper/lower limits of R(n) requires dynamic programming to cover the entire set. We summarize that approach in Appendix B and give the details in sec:average. § DATA For the set U, as mentioned above, we use Universal Dependencies <cit.>, version 2.8, to measure the Strahler number for natural language. Universal Dependencies is a well-known, large-scale project to construct large-scale annotated data for natural language sentences. The annotation is defined under the Universal Dependency scheme, which is a representation based on dependency structure. The version used in this article contains 202 corpora across 114 languages. The corpora are listed in Table 1 of sec:datanumsentences. Binary1 and 2 can be applied and upper and lower limits can be calculated for all these data. § RESULTS To summarize the approach thus far, we have a dependency dataset U, in which the subset of trees of size n is denoted as U(n). For random trees, we have a set of binary random trees with n leaf nodes, denoted as R_2(n), and a set of random trees with n nodes, R(n). As described above, for R_2(n), the theoretical mean and upper and lower bounds of the Strahler number are analytically known. For the other sets, these values must be acquired empirically. For a tree t belonging to one of those sets, we calculate the upper/lower limits of Strahler numbers. In terms of n, the averages of each of these four values can be acquired for U(n) and R(n). Binary1, 2 can also be calculated for U(n). In this section, we consistently use color as follows. For R_2(n), we use black for the upper/lower limits and the mean, green for Binary1, and blue for Binary2. For U(n), we use pink for the upper limit and red for the lower limit. For R(n), we use purple for the upper limit and orange for the lower limit. §.§ Strahler Number of Sentence Structure result_strahler_dist_table lists the means and standard deviations for the entire dependency dataset. We see that the Strahler number of a dependency structure is usually less than 4. The Binary1 and Binary2 values are between the upper and lower limits. For each corpus, the specific means and standard deviations for Binary1 and 2 and the upper/lower limits for U(n) are listed in Appendices E.2-E.5, Tables 3-10. result_strahler_dist shows a histogram of the Strahler numbers. It can be seen that the distribution shifts from large to small in the order of the upper limit, Binary1, Binary2, and the lower limit. Note that Binary1 and 2 show pretty similar results, regardless of the binarization method. The median Strahler number is 4 for the upper limit, and 3 for all other cases. Strahler numbers larger than 4 are clearly very scarce. The dependency dataset includes data of various language groups, genres, and modes (speech/writing). According to our analysis, the differences among datasets are not distinct across this variety of data. The largest Strahler number is 7, and the smallest is 1. Examples of both extremes are given in sec:extremes. The examples with a number of 1 are mainly one-word salutations, interjections, and names (even without periods; sec:extremes, Table 11). 
On the other hand, sentence examples with a Strahler number of 7 are very rare and contain a large number of words. As seen here, sentences with a larger Strahler number above 4 are atypical and include examples for which it might be questionable to call them sentences. The dependency dataset includes such questionable entries, and the Strahler number could provide evidence to quantify such irregularities in the corpora. §.§ Growth of Strahler Number w.r.t. Sentence Length Originally, when the Strahler number was used for analysis of rivers in England, it was found to be 4. We can also conclude from the previous section that the Strahler number for sentence structure is almost 4. This leads us to wonder how this number is significant. This number actually depends on the logarithm of the tree size n. Thus far, we have discussed the Strahler number as a constant value with a given distribution. In the following, we show that it is not a constant but merely looks like a constant, because it grows very slowly with respect to n, and the range of sentence lengths is limited. result_strahler_length shows the mean results for the tree sets U(n), R_2(n), and R(n), as summarized at the beginning of this section. The black analytical lines for R_2(n) indicate the exact values following the theory explained in sec:random. For the other sets, the plots show empirical results measured across trees of size n. All plots approximately increase logarithmically, but none of them are smooth, as they have a step at n=2, and they globally fluctuate by changing their logarithmic base. Overall, the necessary number of stack spaces is bounded by the logarithm of the tree size. For each n, the possible range of Strahler numbers for R_2(n), which extends between the upper and lower black lines, is obviously far wider than the range for U(n). On the other hand, the range for U(n) is between the pink and red points. The range for R(n) is between the purple and orange lines, which is slightly narrower than the range for U(n), despite R(n) being the average of all random trees with n nodes. These results can be understood from a small example. fig:n5 shows a set of trees of size n=5, with all such possible trees on the left, and typical structures appearing frequently in the dependency dataset on the right. The distribution of tree shapes in the dataset varies, with the set of trees on the right accounting for 80% of the total. The upper and lower limits of the Strahler number are listed below each tree. The averages are listed at the bottom of each box in the corresponding colors from the scheme used throughout this section. For R(n), the respective upper/lower limits are 2.71 and 2.07; in contrast, if the six trees on the right appeared equally, the upper/lower limits would be 2.83 and 2. Thus, the range of R(n) is narrower than that of U(n), even in this small sample with n=5. The actual plots in result_strahler_length were obtained by computing the average across the distribution of shapes, but the range of R(n) is still contained within that of U(n). This small example with n=5 explains why the range of R(n) can be almost the same or even smaller than that of U(n): the Strahler number is mostly the same for any kind of tree of the same size and does not especially characterize the tree shape. § DISCUSSION Previous reports on the maximum amount of short-term memory that can be cognitively used have consistently suggested a value of 3 to 5. 
In an excellent survey of previous works, <cit.> summarized cognitive works that tested the maximum number of events or instances that could be remembered through human psychological experiments, e.g., via instant memory <cit.> or graphics <cit.>. He summarizes that number of such memory areas as 3 to 5 and refers to the value as a “magical number.” The possible relation of such a maximum number of local memory areas to the number of memory areas required for sentence understanding is nontrivial. Memory is necessary to understand sentences, and one model for theorizing this is based on the shift-reduce approach. Under this setting, the experimental results in this article also show that this number is within the range of Cowan's magical number. Our contribution in this work is that we provided reasoning about this magical number in a rigorous setting via the lower limits of shift-reduce evaluation of a tree. Although our findings are limited to this setting, we have shown that the Strahler number grows with the logarithm of the sentence length. Furthermore, through comparison with R(n), we showed that this trend is not specific to human sentences but derives in general from a wider set of all possible random tree shapes. This understanding might suggest the nature of the magical number to lie at a point in logarithmic growth including random trees, and the human cognitive limitation of short-term memory might explain why sentence lengths do not become extremely long. § CONCLUSION In this article, we examined the use of the Strahler number to understand the nature of Strahler number. The Strahler number was originally defined to analyze the complexity of river bifurcation. Here, we applied it to sentences, which is the first use of this approach to the best of our knowledge. Because the tree structure dataset used here is much larger than the datasets used in previous applications, we could consider the nature of the Strahler number in comparison with random trees. The Strahler number entails the memory necessary to process a sentence structure via the shift-reduce method. We proposed ways to compute a sentence's Strahler number, via two binarization methods and the upper and lower limits across all possible binarization methods. The experimental results showed that the Strahler number of a dependency structure is almost 3 or 4. This number was found to grow with the sentence length, and the upper/lower limits were found to be close to those of random trees of the same length, which is the Strahler number's key statistical mechanics characteristic with respect to trees, including random trees. Furthermore, these findings provide evidence and understanding of the memory limit discussed to date in relation to the magical number conjectured for short-term memory. natbib section § UPPER/LOWER LIMITS OF N-NODE DEPENDENCY TREE Given a dependency tree t, for a word in node v, let f_max(v) and f_min(v) be functions that obtain the maximum and minimum Strahler numbers, respectively. These functions give the maximum and minimum across any binarization of t. Both f_max(v) and f_min(v) are calculated using the following function it(x, y): it(x, y) = { x+1 if x == y max(x, y) otherwise. . Note that this function is almost the same as the definition of the Strahler number given in Section 3.1. Using this function, the upper and lower limits are calculated in a bottom-up manner from the leaves to the root. Let a ⇐ b denote a computational substitution. 
The calculations of the upper/lower limits, which proceed similarly, are defined below on the left/right, respectively: 2 Upper limit f_max(v) is computed as follows. * For a leaf node v, f_max(v) ⇐ 1. * For an inner node v, f_max(v) is calculated by the following four steps, where CH(v) is the set of children of v in the dependency tree. * Sort CH(v) in ascending order of f_max(v'), v'∈CH(v). Let ch_i ∈CH(v), i=1,…,|CH(v)|, denote the ith child in this order. * Set f_max(v) ⇐ 0 (Initialization). * f_max(v) ⇐ it(f_max(v), 1) (Upper limit should always count as 1). * For ch_i ∈ CH(v), repeat in order of i, as follows: f_max(v) ⇐ it (f_max(v), f_max(ch_i)). Lower limit f_min(v) is computed as follows. * For a leaf node v, f_min(v) ⇐ 1. * For an inner node v, f_min(v) is calculated by the following four steps, where CH(v) is the set of children of v in the dependency tree. * Sort CH(v) in descending order of f_min(v'), v'∈ CH(v). Let ch_i ∈CH(v), i=1,… |CH(v)|, denote the ith child in this order. * Set f_min(v) ⇐ 0 (Initialization). * For ch_i ∈ CH(v), repeat in order of i, as follows: f_min(v) ⇐ it (f_min(v), f_min(ch_i)). * f_min(v) ⇐ it(f_min(v), 1) (Function it should be considered for node v). § CALCULATION OF AVERAGE UPPER/LOWER LIMITS OF R(N) This section briefly describes how to compute the average upper or lower limit of R(n). The method's precise details are given in sec:average. For a given set of trees, R(n), let R_n,p_max be the total number of trees of size n with an upper limit p_max, and let R_n,p_min be the same with a lower limit p_min. The average upper and lower limits, respectively denoted as R_max(n) and R_min(n), are calculated as follows: * Upper limit: R_max(n) = ∑_p_max p_maxR_n,p_max/∑_p_max R_n,p_max. * Lower limit: R_min(n) = ∑_p_min p_minR_n,p_min/∑_p_min R_n,p_min. Hence, it becomes necessary to enumerate R_n,p_max and R_n, p_min through dynamic programming. The details are explained in sec:average. § RELATIONS AMONG SHIFT-REDUCE METHOD, SETHI-ULLMAN ALGORITHM <CIT.>, AND STRAHLER NUMBER A computation tree is evaluated by use of the two operations of shift and reduce. The optimal sequence of operations to minimize the number of stack space uses is given by the Sethi-Ullman Algorithm <cit.>. As mentioned in the main text, <cit.> showed that the minimum number of stack space uses equals the Strahler number. This section clarifies the relations among the shift-reduce method, the Sethi-Ullman algorithm <cit.>, and the Strahler number. Shift-reduce method The shift-reduce algorithm is an algorithm for using a stack (a LIFO structure) to evaluate a computation tree. The tree is a binary tree in which each leaf node stores data and each inner node is a function that combines the values of its two children. The shift and reduce operations are defined as follows: Shift Put a leaf from the unanalyzed part of the tree on the top of the stack. Reduce Remove the top two elements from the stack and combine them into a single component according to the function in the corresponding inner node; then, put the result on the top of the stack. Sethi-Ullman algorithm The Sethi-Ullman algorithm is a depth-first evaluation method using registers for memory. The use of registers when applying a depth-first strategy to an evaluation tree is a LIFO process. The Sethi-Ullman algorithm preprocesses a given evaluation tree by annotating each node with a value, in an equivalent manner to the Strahler number, as follows: * Assign a value x(v) for each node v. * If node v is a leaf, x(v)⇐ 1. 
* If node v is an inner node, let its two children be v1,v2. * If x(v1)=x(v2), then x(v)⇐ x(v1)+1. * Otherwise, x(v) ⇐ max(x(v1),x(v2)). * The tree evaluation is conducted depth-first, by always first evaluating the child with the larger x value. <cit.> proved that this algorithm minimizes the number of registers used for evaluation. § CALCULATION OF AVERAGE UPPER/LOWER LIMITS OF R(N) Here, we explain the details of the method given in Appendix B to calculate the averages of the respective upper and lower limits, R_max(n) and R_min(n), of R(n). The calculation uses dynamic programming for the upper/lower limits by enumerating R_n,p_max and R_n,p_min, respectively. Because the dynamic programming proceeds in the same manner for both cases, the method is explained here for R_n,p, where p indicates either p_max or p_min. Furthermore, for a node v, we use the function f(v) as defined in Appendix A to denote the corresponding f_max(v) or f_min(v). For a given tree t, let #t denote its size and r(t) denote its root node. For node v in tree t, let CH(v) denote the set of children of v. Let Q(t) be Q(t) = {f(ch) | ch ∈CH(r(t))}. Here, Q(t) is a set of integers acquired as limits on the children of r(t). For each Q(t), a Strahler number can be computed. Calculation of the limits requires consideration for ∀ t ∈ R(n), but |R(n)| is a Catalan number, and the enumeration is thus difficult. One strategy is to consider trees with the same Strahler number as a certain type, which can be obtained from Q(t) as follows. As explained in Appendix A for the procedures to calculate the upper/lower limits, the elements of Q(t) are sorted in ascending/descending order, and instances of the same integer element will thus appear together. If the sorted Q(t) has more than two instances of the same integer, such as 1 in Q(t) = {1,1,1,1,2,3}, then half those instances can be removed, yielding {1,1,2,3}, because the resulting Strahler numbers for these sets are the same. The reason for this is explained in the caption of tenary. Hence, we define Q'(t) as a reduced version of Q(t) with appropriate elimination of integers having more than two instances. A state q refers here to each such reduced Q'(t). Trees with the same state have the same upper/lower limits of the Strahler number, denoted as st(q). For the set of trees R(n), the number of trees with state q and size n is denoted as S_n,q. For example, consider calculation of the upper limit for five trees in T = {t_1,…,t_5}, which all have size n=5. Suppose that the trees have Q(t) sets {2,1,2}, {2,1,2,2}, {1,1,1}, {1,2}, {2,1}. By sorting each Q(t) in ascending order, these sets become {1,2,2}, {1,2,2,2}, {1,1,1}, {1,2}, {1,2}. Elimination of more than two appearances of the same integer in each set yields {1,2,2}, {1,2,2}, {1,1}, {1,2}, {1,2}, each of which is a state. Among these states, {1,2,2} and {1,2} are redundant. Thus, the upper/lower limits of t1,t2 and t4,t5 are the same, and they can be handled via the same state type. Then, st(q) can be computed for each state type q, e.g., st({1,2,2}) = 3. Given S_n,q, R_n,p is obviously acquired as follows: R_n,p =∑_st(q)=pS_n, q, R_n, p = 0 if p > 1, n = 1, R_n, p = 1 if p = 1, n = 1. S_n,q is calculated inductively through dynamic programming, by first acquiring the value for a small n and then calculating the value for a larger n. subtreedp illustrates a recursive calculation of S_n,q. The left side shows t1 and t2, which are combined to form t on the right side, where #t=n=#t1+#t2.
Let q1 = Q'(t1) and q2 = Q'(t2) be the respective states of t1, t2. We define a function g as follows: g(q1 ,q2) ≡ q1 ⊕ st(q2), where ⊕ denotes the following procedure. * Append st(q2)) as an element of q1; * Sort the elements in ascending/descending order (see Appendix A); * Eliminate integers with more than two instances (see tenary here). Then, S_n,q is calculated for all possible states q1 and q2 as follows: S_n, q = ∑_#t1 + #t2 = n∑_q=g(t1 ,t2) S_#t1, q1 S_#t2, q2 for n > 1. To calculate S_n,q for a tree with #t = n, all trees of smaller size must be considered, i.e., t1 and t2, where #t1 + #t2 = n. Direct enumeration of all pairs t1, t2 for given q and n would be nontrivial because |R(n)| is a Catalan number. However, as mentioned above, the computation is always performed via a set of states, which are far smaller in number than the set of trees, through reduction via state types. § DETAILS OF DATA §.§ Datasets and numbers of sentences §.§ Averages and standard deviations for Binary1 §.§ Averages and standard deviations for Binary2 §.§ Averages and standard deviations of lower limits §.§ Averages and standard deviations of upper limits § SENTENCES WITH SMALLEST AND LARGEST STRAHLER NUMBERS Here are some examples of sentences with the smallest and largest Strahler numbers. strahler1 lists sentences with a Strahler number of 1, all of which are one word. UD_English-EWT contains emails, so it includes many titles, salutations, and signoffs. In addition, it includes strings that are difficult to separate into words, such as URLs. UD_Italian-PoSTWITA is a corpus of tweets, including one-word tweets. UD_French-Sequoia is the result of automatic conversion from another treebank, and it includes titles of paragraphs and chapters. The following is an example of a sentence with a Strahler number of 7. It is a legal text in Czech on notes on the consolidating accounting unit. Sentences with such large Strahler numbers are very scarce. (1) Konsolidující účetní jednotka uvede v příloze v konsolidované účetní závěrce a) způsob konsolidace podle _63_odst._1 a použité metody konsolidace podle _63_odst._4, b) obchodní firmu a sídlo konsolidovaných účetních jednotek zahrnutých do konsolidačního celku; podíl na vlastním kapitálu v těchto účetních jednotkách zahrnutých do konsolidačního celku držený jinými účetn ími jednotkami než konsolidující účetní jednotkou nebo osobami jednajícími vlastním jménem, ale na účet těchto účetních jednotek; dále uvede důvody, na základě kterých se stala ovládající osobou, c) obchodní firmu a sídlo konsolidovaných účetních jednotek nezahrnutých do konsolidačního celku podle _62_odst._6_a__22a_odst. 
_3_zákona, včetně důvodů jejich nezahrnutí s uvedením podílu na vlastním kapitálu v těchto účetních jednotkách drženého jinými osobami než konsolidující účetní jednotkou, d) obchodní firmu a sídlo účetních jednotek přidružených, které jsou zahrnuty do konsolidované účetní závěrky; podíl na vlastním kapitálu v těchto účetních jednotkách přidružených, který drží účetní jednotky zahrnuté do konsolidace nebo osoby jednající vlastním jménem, ale na účet těchto účetních jednotek, e) obchodní firmu a sídlo účetních jednotek přidružených, které nejsou zahrnuty do konsolidované účetní závěrky podle _62_odst._8, včetně uvedení důvodu pro nezahrnutí, f) obchodní firmu a sídlo účetních jednotek pod společným vlivem zahrnutých do konsolidované účetní závěrky; podíl na vlastním kapitálu v těchto účetních jednotkách pod společným vlivem, který drží účetní jednotky zahrnuté do konsolidace nebo osoby jednající vlastním jménem, ale na účet účetních jednotek; dále uvede důvody, na základě kterých je vykonáván společný vliv, g) obchodní těchto firmu a sídlo účetních jednotek, které nejsou uvedeny pod písmeny b)_až_f), v nichž mají účetní jednotky samy nebo prostřednictvím osoby jednající vlastním jménem na její účet podíl na vlastním kapitálu menší než 20 %; uvede se výše podílu na vlastním kapitálu, včetně celkové výše vlastního kapitálu, výše výsledku hospodaření za poslední účetní období těchto účetních jednotek; tato informace nemusí být uvedena, nejsou-li tyto účetní jednotky významné z hlediska podání věrného a poctivého obrazu předmětu účetnictví a finanční situace v konsolidované účetní závěrce, informace o vlastním kapitálu a o výsledku hospodaření se rovněž neuvádějí, nejsou-li zveřejněny a je-li podíl konsolidující účetní jednotky na vlastním kapitálu přímo nebo prostřednictvím jiných účetních jednotek menší než 50 %, h) informace o použitých účetních metodách a zásadách, o změnách způsobů oceňování, postupů účtování, uspořádání položek konsolidované účetní závěrky a obsahového vymezení položek oproti předcházejícímu účetnímu období, s uvedením důvodů těchto změn; u položek uvedených v konsolidované účetní závěrce, které jsou nebo původně byly vyjádřeny v cizí měně, se uvedou informace o způsobu jejich přepočtu na měnu, v níž byla sestavena konsolidovaná účetní závěrka, i) vysvětlení položek ,,Kladný konsolidační rozdíl“ a ,,Záporný konsolidační rozdíl“, metody jejich stanovení a jakékoli počet významné změny oproti předcházejícímu účetnímu období, j) průměrný přepočtený zaměstnanců konsolidačního celku během účetního období, za které se sestavuje konsolidovaná účetní závěrka, rozčleněných podle kategorií; samostatně se uvede průměrný přepočtený počet zaměstnanců v průběhu účetního období u účetních jednotek pod společným vlivem.
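As a concrete companion to the limit procedures of Appendix A, the following Python sketch (a hedged illustration under our own encoding of dependency trees as nested lists of children, not the authors' code) implements the function it(x, y) and the recursions for the upper and lower limits f_max and f_min. As in Appendix A, word order and the non-intersection constraint are ignored.

def it(x, y):
    # the counting rule of Appendix A (and of the Strahler number)
    return x + 1 if x == y else max(x, y)

def f_max(children):
    # upper limit over all binarizations; a leaf is an empty list of children
    vals = sorted(f_max(ch) for ch in children)                  # ascending order
    f = it(0, 1)                                                 # the head word always counts as 1
    for val in vals:
        f = it(f, val)
    return f

def f_min(children):
    # lower limit over all binarizations
    vals = sorted((f_min(ch) for ch in children), reverse=True)  # descending order
    f = 0
    for val in vals:
        f = it(f, val)
    return it(f, 1)                                              # account for the head word last

# A head with two dependents, the first of which has two single-word dependents:
tree = [[[], []], []]
print(f_max(tree), f_min(tree))     # prints 3 2 for this example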
http://arxiv.org/abs/2307.00526v1
20230702093309
TensorGPT: Efficient Compression of the Embedding Layer in LLMs based on the Tensor-Train Decomposition
[ "Mingxue Xu", "Yao Lei Xu", "Danilo P. Mandic" ]
cs.CL
[ "cs.CL", "cs.LG", "cs.NA", "cs.NE", "math.NA" ]
High-dimensional token embeddings underpin Large Language Models (LLMs), as they can capture subtle semantic information and significantly enhance the modelling of complex language patterns. However, the associated high dimensionality also introduces a considerable number of model parameters and prohibitively high model storage requirements. To address this issue, this work proposes an approach based on the Tensor-Train Decomposition (TTD), where each token embedding is treated as a Matrix Product State (MPS) that can be efficiently computed in a distributed manner. The experimental results on GPT-2 demonstrate that, through our approach, the embedding layer can be compressed by a factor of up to 38.40, and that a compression factor of 3.31 even yields better performance than the original GPT-2 model. § INTRODUCTION Storage efficiency is currently prohibitive to unlocking the full potential of lightweight applications of Large Language Models (LLMs). For example, a well-known LLM, the Generative Pre-trained Transformer 2 (GPT-2) <cit.>, has 1.5 billion parameters and requires significant disk space, making it difficult to deploy on lower-end devices. One way to improve storage efficiency is to compress the embedding layer, which often accounts for a large portion of the parameters. As shown in Figure <ref>, in GPT-2 the embedding layer takes up 31.02% of the parameters of the whole model; therefore, compressing the embedding layer would substantially reduce the space complexity of LLMs and make them deployable on edge devices. To this end, we propose to compress the embedding layer of LLMs through the Tensor-Train Decomposition (TTD) <cit.> in order to store large embeddings in a low-rank tensor format, with much fewer parameters. This low-rank tensor format is also called the TT-format or Matrix Product State (MPS) <cit.>. Given that in many applications the token vocabulary is ever-changing, we consider each individual token embedding (i.e., each row of the token embedding matrix) rather than the token embedding matrix as a whole. Benefiting from the super-compression properties of Tensor Networks (TNs), we tensorize and decompose each token embedding, and then construct a highly efficient embedding format that can be computed efficiently through distributed computing. The proposed approach is evaluated on the GPT-2 model. The experimental results show that, using our approach, the embedding layer can indeed be compressed at a rate of up to 38.40, and that a compression rate of 3.31 incurs no loss in model performance. Since the decomposition involves approximation, we assessed the effect of compression on model performance through the text reconstruction task, a basic text generation task at which the GPT series models excel. We found that, with the embedding layer reconstructed from the stored MPS, the overall performance of GPT-2 even improved under certain TT-rank settings, likely owing to the over-parameterization of the original model. Our contributions are summarized as follows: * To our best knowledge, we are the first to utilize the Tensor-Train Decomposition (TTD) and Matrix Product State (MPS) to compress GPT series models.
* A novel approach that treats each token embedding as a Matrix Product State is proposed, which is shown to be very flexible when token embeddings are inserted or deleted, and also has the potential to be computed in a distributed manner. * A set of rigorous evaluation metrics is adopted to evaluate our approach. The experimental results show that our compression approach has satisfactory performance, while reducing the number of parameters by a factor of 2.31. § RELATED WORK Among the recent research on the compression of embedding layers within LLMs with tensor decompositions, the closest to our approach is the work by <cit.>, which applied TTD to the embedding layer to reduce the space complexity of large language models. This was achieved by reshaping the embedding matrix into an order-2N tensor which was then decomposed and stored as a Matrix Product Operator. While this approach reduced the number of parameters significantly, the decomposition procedure had to be repeated on the entire embedding matrix every time a new token was added to the dictionary. To solve this issue, instead of decomposing and storing the embedding matrix directly, we propose an approach that decomposes and stores each row of the embedding matrix separately. This reduces the computation costs significantly, as the decomposition can be performed locally on every new token. Recent progress also includes the compression of the embedding table of a recommendation system in the work by <cit.>, where the tensorized neural network is trained from scratch; in contrast, our proposed approach operates on a pre-trained neural network without an extra training process. In another work <cit.>, Block-Wise Low-Rank Approximation is used to compress very large scale (∼800k vocabulary) embeddings, where the embedding matrices are split into blocks according to the tokens, and then each block is decomposed by SVD. Besides the difference in decomposition method from our approach, deciding how to reasonably bind certain tokens into blocks for a specific vocabulary requires additional effort. § PRELIMINARIES This section gives brief mathematical preliminaries of tensor algebra and basic knowledge of LLMs to facilitate the understanding of our proposed methodology in Section <ref>. §.§ Tensor and Tensor Operations Order-N Tensor. An order-N real-valued tensor is a multi-dimensional array, denoted by a calligraphic font, e.g., A∈ℝ^I_1×…× I_N, where N is the order of the tensor (i.e., number of modes), and I_n (1 ≤ n ≤ N) is the size (i.e., the dimension) of its n-th mode. Matrices (denoted by bold capital letters, e.g., 𝐀∈ℝ^I_1× I_2) can be seen as order-2 tensors (N=2), vectors (denoted by bold lower-case letters, e.g., 𝐚∈ℝ^I) can be seen as order-1 tensors (N=1), and scalars (denoted by lower-case letters, e.g., a∈ℝ) are order-0 tensors (N=0). Tensor Entries. The (i_1, …, i_N)-th entry of an order-N tensor is denoted by a_i_1, ⋯, i_N∈ℝ, where i_n = 1, …, I_n for n=1,…,N. A tensor fiber is a vector of tensor entries obtained by fixing all but one index of the original tensor (e.g., 𝐚_:, i_2, i_3, …, i_N∈ℝ^I_1). Similarly, a tensor slice is a matrix of tensor entries obtained by fixing all but two indices of the original tensor (e.g., 𝐀_:, :, i_3, i_4, …, i_N∈ℝ^I_1 × I_2). Tensorization. A vector, 𝐚∈ℝ^I_1 I_2 ⋯ I_N, can be tensorized (or folded) into an order-N tensor, A∈ℝ^I_1 × I_2 ×⋯× I_N, with the relation between their entries defined by A[i_1, i_2, …, i_N] = a_i for all 1≤ i_n ≤ I_n, where i=1+∑_n=1^N(i_n-1)∏_k=1^n-1I_k.
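As a quick illustration of the tensorization (folding) convention just defined, the following NumPy sketch (our own example; the mode sizes and indices are arbitrary) folds a vector into an order-3 tensor and checks one entry against the linear-index formula. Note that this little-endian convention, in which the first mode varies fastest, corresponds to column-major ('F') ordering in NumPy.

import numpy as np

I = (2, 3, 4)                               # mode sizes I_1, I_2, I_3 (arbitrary)
a = np.arange(1, np.prod(I) + 1)            # a vector of length I_1 * I_2 * I_3
A = a.reshape(I, order='F')                 # fold: the first mode varies fastest

i1, i2, i3 = 2, 3, 4                        # a 1-based multi-index
i = 1 + (i1 - 1) + (i2 - 1) * I[0] + (i3 - 1) * I[0] * I[1]   # the linear-index formula above
assert A[i1 - 1, i2 - 1, i3 - 1] == a[i - 1]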
Matricization (Mode-n unfolding). Mode-n matricization of a tensor, ( A, n ) = 𝐀_{n}∈ℝ^I_n × (I_1 ⋯ I_n-1 I_n+1⋯ I_N), is a procedure of mapping the elements from a multidimensional array to a two-dimensional array (matrix). Conventionally, such procedure is associated with stacking mode-n fibers (modal vectors) as column vectors of the resulting matrix. For instance, the mode-1 unfolding of A∈ℝ^I_1 × I_2 ×⋯× I_N is represented as ( A, 1 ) = 𝐀_{1}∈ℝ^I_1 × (I_2 ⋯ I_N), where the subscript, {1}, denotes the mode of matricization, and is given by 𝐀_(1)[i_1,i_2 i_3 … i_N] = A[i_1,i_2,…, i_N] Note that the overlined subscripts refer to to linear indexing (or Little-Endian) <cit.>, given by: i_1 i_2 … i_N = 1 + ∑_n=1^N [ (i_n - 1) ∏_n'=1^n-1I_n'] = 1 + i_1 + (i_2 - 1)I_1 + ⋯ + (i_n-1)I_1 … I_N-1 Vectorization. Given an order-N tensor, A∈ℝ^I_1×⋯× I_N, its vectorization reshapes the multidimensional array into a large vector, ( A) = 𝐚̅∈ℝ^I_1 ⋯ I_N. Tensor contraction. The contraction of an order-N tensor, A∈ℝ^I_1×…× I_N, and an order-M tensor B∈ℝ^J_1×…× J_M, over the nth and mth modes respectively, where I_n=J_m, results in C∈ℝ^I_1 ×⋯× I_n-1× I_n+1×⋯× I_N × J_1 ×⋯× J_m-1× J_m+1× J_M, with entries c_i_1,…,i_n-1, i_n+1, …, i_N, j_1, …, j_m-1, j_m+1, …, j_M = ∑_i_n=1^I_n a_i_1, …, i_n-1, i_n, i_n+1, …, i_N b_j_1, …, j_m-1, i_n, j_m+1, …, j_M A (2, 1)-tensor contraction between two order-2 tensors, A∈ℝ^I_1 × I_2 and B∈ℝ^J_1 × J_2, where I_2 = J_1, is equivalent to a standard matrix multiplication, C = A×_2^1 B = AB∈ℝ^I_1 × J_2. Similarly, a (2, 1)-tensor contraction between an order-2 tensor, A∈ℝ^I_1 × I_2, and an order-1 tensor, b∈ℝ^J_1, where I_2 = J_1, is equivalent to a standard matrix-by-vector multiplication, c = A×_2^1 b = Ab∈ℝ^I_1. §.§ Token, Token Embeddings and Embedding Layer in LLMs Token and Tokenization. In natural language processing (NLP), a token is a meaningful unit of text, such as a word, punctuation mark, or other element that contributes to the structure and meaning of a sentence or document. Tokens are produced through a process known as tokenization, which involves breaking down a piece of text into individual units. The GPT series models employ a widely used tokenization method named Byte Pair Encoding (BPE) <cit.>. The BPE breaks down text into varying-length subword units, and is particularly useful for languages with complex morphology or when dealing with out-of-vocabulary words. Token Embedding and Embedding Layer in LLMs. In the context of LLMs, “embedding” refers to converting discrete input tokens into continuous vector representations. These representations are commonly known as word embeddings or token embeddings. In LLMs, the embedding layer is responsible for output token embeddings according to the sequential input tokens. This layer maps each input token to a high-dimensional vector that captures the semantic and syntactic information about the meaning and context of a token. Normally, an embedding layer considers a set of tokens {v} of size V (also known as “vocabulary”), where v represents a considered token. For each token v, a token embedding x_v of dimension D is assigned by the embedding layer, that is, x_v ∈ℝ^D. The weights of the embedding layer are represented as an embedding weight matrix W, where W∈ℝ^V × D. In practice, if the token embedding dimension D is excessively large, there would be excessive parameter complexity, resulting in high storage costs for the embedding layer, and thereafter high storage costs for the whole language model. 
The embedding weight matrix W can be seen as a lookup table. A basic embedding generation finds the token embeddings corresponding to all the input tokens and stacks them according to the input order. It should be noted that in the current LLMs, such as GPT-like and BERT-like models, the position and mask information of the tokens are also encoded into the embeddings. In these cases, a token embedding x_v is not merely generated via a lookup process. § METHODOLOGY This section gives a brief introduction to the technical cornerstones that our approach relies on, and a detailed description of our proposed approach. §.§ Tensor-Train Decomposition (TTD) and Matrix Product State (MPS) Tensor-Train Decomposition <cit.> was introduced to help mitigate the computational bottlenecks that arise from the curse of dimensionality, as tensor algorithms can become intractable for high-order tensors. The most common form of TT is the Matrix Product State (MPS or TT-MPS), introduced in the quantum physics community <cit.>, which applies the Tensor-Train Singular Value Decomposition (TT-SVD) algorithm described in Algorithm <ref> <cit.> to decomposes a large order-N tensor, X∈ℝ^I_1 × I_2 ×⋯× I_N, into N smaller 2-nd or 3-rd order core tensors, G^(n)∈ℝ^ R_n-1× I_n × R_n for n=1, …, N, such that X≈G^(1)×_2^1 G^(2)×_3^1 G^(3)×_3^1 ⋯×_3^1 G^(N) The tensors G^(1), …, G^(N) are referred to as the core tensors, while the set { R_0, …, R_N}, where R_0=R_N=1, represents the TT-rank of the TT decomposition. By defining G^(n)_:, i_n, :, i_n = 1, …, I_N as the i_n-th horizontal slice of tensor G^(n), the MPS assumes the element-wise form as x_i_1, i_2, …, i_N = 𝐆^(1)_:, i_1, :⋯𝐆^(N)_:, i_N, : §.§ MPS formatted Token Embedding As mentioned in Section <ref> and Section <ref>, when the vocabulary changes, the tensor decomposition should be re-executed from scratch if the decomposition object is the whole embedding weight matrix. On the other hand, loading the whole embedding weight matrix into the memory and then decomposing also requires massive memory and computation resources. Using the notions in Section <ref> and Algorithm <ref>, decomposing a 2-order W requires roughly the computation cost of 𝒪(VD^2). Rather than decomposing the whole embedding weight matrix, we propose to decompose each token embedding. In this way, each token embedding is reshaped into an order-N tensor, ( x_v ) = X∈ℝ^I_1 ×⋯× I_N such that D = ∏_k=1^N I_k, which is then decomposed and stored in Matrix Product State (MPS) form as X≈G^(1)×_3^1 ⋯×_3^1 G^(N). In other words, instead of storing the entire token embedding x∈ℝ^D, we store the corresponding MPS cores, G^(n)∈ℝ^R_n-1× I_n × R_n, for n=1,…,N. This approach has two advantages: * Lower storage cost: The space complexity reduces from an original 𝒪( I^N ) (exponential in N) to 𝒪( N R^2 I ) (linear in N), where R_n=R and I_n=I for all n=1,…,N for simplicity; * Affordable computation cost of TTD on resource-constrained nodes. Since token embeddings are decomposed individually, the decomposition of an individual token embedding or a small number of token embeddings can be offloaded to a single resource-constrained node (thread, process or device). On a single node, the computation cost is reduced from 𝒪(VD^2) to 𝒪(D). Also considering the ever-changing vocabulary, this approach has the potential to be deployed in a dynamic distributed system. The considered tensor embedding procedure is highly effective for a small rank tensor, R, small MPS dimension, I, and large tensor order N. 
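The following Python sketch illustrates the proposed per-token procedure on a single stand-in embedding, using the tensorly library's tensor_train and tt_to_tensor routines (we assume this API; the random vector, the chosen TT-ranks, and the variable names are ours and purely illustrative, and a random vector has little low-rank structure, so its reconstruction error only indicates the mechanics). It reports the parameter counts before and after decomposition, i.e., the per-token analogue of the compression rate, together with the approximation error.

import numpy as np
import tensorly as tl
from tensorly.decomposition import tensor_train

D = 1024                                       # padded embedding length, 2**10
x = np.random.randn(D).astype(np.float32)      # stand-in for a single token embedding x_v
X = tl.tensor(x.reshape([2] * 10))             # tensorize into an order-10 (2 x ... x 2) tensor
ranks = [1, 2, 4, 4, 4, 4, 4, 4, 4, 2, 1]      # TT-ranks R_0, ..., R_N (illustrative choice)

tt = tensor_train(X, rank=ranks)               # MPS cores G^(1), ..., G^(N)
cores = [tt[i] for i in range(len(ranks) - 1)]

n_original = x.size
n_compressed = sum(int(np.prod(c.shape)) for c in cores)
x_rec = np.asarray(tl.tt_to_tensor(tt)).reshape(-1)

print("parameters per token: %d -> %d (rate %.2f)"
      % (n_original, n_compressed, n_original / n_compressed))
print("reconstruction MAE: %.4f" % np.abs(x - x_rec).mean())

Because each token is handled independently, adding a new token only requires running this routine once on its embedding, which is precisely the property exploited above.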
In practice, the original vector embedding dimension can be chosen to be a power of 2 in order to maximize the compression as D=I^N=2^N. In practice, the MPS is a low-rank approximation of the original embedding. However, the approximation error will tend to zero as the rank R increases. Therefore, the choice of the rank R will have to balance the trade-off between the compression power and the approximation ability. Without considering the position and mask information mentioned in Remark <ref>, the MPS token embedding approach can be directly used to accelerate the token embedding mapping and compress the stored embeddings. However, since the language models we discuss in this paper are typical LLMs containing position and mask encoding, we shall not consider the above two. § EXPERIMENT §.§ Dataset, Tokenization and Language Model The text dataset used for the experiments was a mixed version of General Language Understanding Evaluation (GLUE) <cit.> and Microsoft Research Paraphrase Corpus (MRPC) <cit.>, with 11,602 English sentences in total. For the tokenization, we chose the BPE mentioned in Section <ref>, since it is popular in GPT series models. The language model we used was the HuggingFace version of GPT-2[https://huggingface.co/gpt2], with an embedding weight matrix W∈ℝ^50257× 768, where the vocabulary size V is 50,257 and the token embedding dimension D is 768. §.§ Implementation The 2.12.0 version of TensorFlow was used for the GPT-2 model, while Tensorly <cit.>, a Python package for tensor learning, was used to decompose the token embeddings and the embedding layer. According to Remark <ref> and also to take advantage of the hierarchical structure and multiway interactions expressiveness of Tensor-Train Decomposition, we reshaped each token embedding x_v into a power of 2 format for tensorization, that is, ( x_v ) = X∈ℝ^2×2×⋯×2. The current token embedding dimension D is 768, which cannot be factored into a power of 2. Therefore, we padded each token embedding with zeros to reach a length of 1024, which is the nearest power of 2 to 768 and is a feasible dimension for (·). Note that when Tensor-Train Decomposition is applied to decompose x_v, and then to reconstruct x_v back, the values of each token embedding x_v from index 768 to 1,024 are almost zeros. §.§ Evaluation Metrics Compression Rate. We used the embedding layer compression ratio to describe the degree of size reduction and efficiency in embedding layer storage. More mathematically, the compression rate η is the ratio of the original embedding layer size to the sum of the size of the compressed token. η = V× D/∑_j=1^V∑_k=1^N|G^(k)_j| where G^(k)_j denotes the kth tensor core of the jth token after decomposing each token embedding x_j in the embedding layer weight matrix W. Distortion Error. The distortion metrics were used to describe the compression fidelity, that is, the similarity between the original embedding layer weights and reconstructed embedding layer weights, or the original reconstructed token embeddings and reconstructed token embeddings. Considering the inference process of the embedding layer, the distortion metrics were first calculated sentence by sentence and then derived from the average. 
There are two kinds of distortion measurements of the embedding weight matrix and token embeddings: * Mean absolute error (MAE) for the embedding weight matrix reconstruction: We used MAE to measure the difference between the original embedding layer weight matrix and the reconstructed embedding layer weight matrix. A lower MAE suggests that the compressed embedding layer weights closely resemble the original embedding layer weights, indicating a higher level of fidelity in the compression process. * Normalized mean absolute error (norm-MAE) for the token embeddings: The token embedding values vary from minus several hundred to several hundred, and to align them for easier comparison like embedding weight matrix, we used the normalized mean absolute error. The norm-MAE is the ratio of mean absolute error and the difference between the maximum and minimum values of original embeddings. Similar to MAE for the embedding weight matrix, a lower norm-MAE indicates a higher compression fidelity. Compatibility with the subsequent GPT-2 network blocks. The primary function of GPT-like models is natural language generation. We here conducted a text reconstruction task to verify if the reconstructed embedding layer can collaborate effectively with the subsequent GPT-2 network blocks for natural language generation. The purpose of this task was to reconstruct the original input data from the encoded representation in the embedding layer and the subsequent network blocks, similar to an autoencoder. This part serves as a sanity check for the stored information in the reconstructed embedding layer, and causes evaluation via text generation loss of the GPT-2 model. §.§ Experiment Results All the evaluation metrics described in Section <ref> on the dataset GLUE/MRPC are shown in Table <ref>. There are two approaches tested for a comprehensive analysis - our proposed approach introduced in Section <ref>, and the approach of directly decomposing the embedding weight matrix W into two TT cores. The latter is actually equivalent to performing tensor SVD. As the ranks of TT cores increase, both approaches exhibit a decrease in compression rate, fidelity of embedding layer reconstruction (MAE), and fidelity of reproduced token embeddings (norm-MAE). There is no apparent decline or increase in the text generation loss, possibly because this metric is highly dependent on the dataset itself. In all settings, the lowest text generation loss was 9.01, which was achieved by our approach when the TT ranks were 1,2,4,4,4,4,4,4,1,1. §.§ Discussion We visualized the MAE for the reconstruction of embedding weight matrix W in Figure <ref>. For better visualization, we folded the dimension that represents the vocabulary index into a reasonable matrix shape. In Figure <ref>, the lighter colour indicates a lower MAE, and the darker colour indicates a higher MAE. From the visualization, since the change of colour shading is not stable, it can be inferred that the decrease in compression fidelity does not decrease smoothly as the TT ranks increase, even for SVD. As for the decomposition of each token embedding, we can identify specific areas where the light (lower MAE) lines consistently appear, suggesting that some dimensions of the token embeddings are more precise in reconstruction. These dimensions may have the potential for further compression. § CONCLUSION AND FUTURE WORK In the context of Large Language Models (LLMs), this study has suggested a compression approach for the embedding layer. 
The approach tensorizes each token embedding in the embedding weight matrix into a power-of-2 shape and applies the Tensor-Train Decomposition (TTD) to obtain a Matrix Product State (MPS) format. This approach has demonstrated adaptability to an ever-changing vocabulary and suitability for distributed computation, and, applied to GPT-2, has achieved a 3.31× compression rate with improved model performance in the text reconstruction task. The potential of the Matrix Product State has not been fully explored in our current implementation. An open problem is the integration of MPS into the computation of token embeddings together with other encoded information (e.g., position or mask encodings) in LLMs, so that LLMs can run faster and be deployed on lower-end devices. Furthermore, if the generated token embeddings are also kept in MPS format, the embedding generation process might become lighter and easier to store as well.
§ REFERENCES
wang2018glue: Wang, A., Singh, A., Michael, J., Hill, F., Levy, O. & Bowman, S. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Proceedings Of The 2018 EMNLP Workshop BlackboxNLP: Analyzing And Interpreting Neural Networks For NLP. pp. 353-355 (2018)
radford2019language: Radford, A., Wu, J., Child, R., Luan, D., Amodei, D. & Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog. 1, 9 (2019)
dolan2005automatically: Dolan, B. & Brockett, C. Automatically Constructing a Corpus of Sentential Paraphrases. Proceedings Of The Third International Workshop On Paraphrasing. (2005)
chen2018groupreduce: Chen, P., Si, S., Li, Y., Chelba, C. & Hsieh, C. Groupreduce: Block-wise low-rank approximation for neural language model shrinking. Advances In Neural Information Processing Systems. 31 (2018)
yin2021tt: Yin, C., Acun, B., Wu, C. & Liu, X. Tt-rec: Tensor train compression for deep learning recommendation models. Proceedings Of Machine Learning And Systems. 3 pp. 448-462 (2021)
tensorly: Kossaifi, J., Panagakis, Y., Anandkumar, A. & Pantic, M. TensorLy: Tensor Learning in Python. Journal Of Machine Learning Research. 20, 1-6 (2019), http://jmlr.org/papers/v20/18-277.html
gage1994new: Gage, P. A new algorithm for data compression. C Users Journal. 12, 23-38 (1994)
oseledets2011tensor: Oseledets, I. Tensor-train decomposition. SIAM Journal On Scientific Computing. 33, 2295-2317 (2011)
perez2006matrix: Perez-Garcia, D., Verstraete, F., Wolf, M. & Cirac, J. Matrix product state representations. ArXiv Preprint Quant-ph/0608197. (2006)
Dolgov2014: Dolgov, S. & Savostyanov, D. Alternating minimal energy methods for linear systems in higher dimensions. SIAM Journal On Scientific Computing. 36, A2248-A2271 (2014)
Cichocki2014: Cichocki, A. Era of Big Data Processing: A New Approach via Tensor Networks and Tensor Decompositions. Proceedings Of The International Workshop On Smart Info-Media Systems In Asia. (2014, 3)
cichocki2016tensor: Cichocki, A., Lee, N., Oseledets, I., Phan, A., Zhao, Q. & Mandic, D. Tensor networks for dimensionality reduction and large-scale optimization: Part 1: low-rank tensor decompositions. Foundations And Trends In Machine Learning. 9, 249-429 (2016)
khrulkov2019tensorized: Khrulkov, V., Hrinchuk, O., Mirvakhabova, L. & Oseledets, I. Tensorized Embedding Layers. Proceedings Of The 2020 Conference On Empirical Methods In Natural Language Processing: Findings. pp. 4847-4860 (2020)
Oseledets2009: Oseledets, I. & Tyrtyshnikov, E. Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM Journal On Scientific Computing. 31, 3744-3759 (2009)
http://arxiv.org/abs/2307.00350v1
20230701142758
Relative étale slices and cohomology of moduli spaces
[ "Mark Andrea de Cataldo", "Andres Fernandez Herrero", "Andrés Ibáñez Núñez" ]
math.AG
[ "math.AG", "14D23 (Primary) 14B25, 14J60, 14D07, 14F45 (Secondary)" ]
http://arxiv.org/abs/2307.02064v1
20230705070031
Facing off World Model Backbones: RNNs, Transformers, and S4
[ "Fei Deng", "Junyeong Park", "Sungjin Ahn" ]
cs.LG
[ "cs.LG" ]
World models are a fundamental component in model-based reinforcement learning (MBRL) agents. To perform temporally extended and consistent simulations of the future in partially observable environments, world models need to possess long-term memory. However, state-of-the-art MBRL agents, such as Dreamer, predominantly employ recurrent neural networks (RNNs) as their world model backbone, which have limited memory capacity. In this paper, we seek to explore alternative world model backbones for improving long-term memory. In particular, we investigate the effectiveness of Transformers and Structured State Space Sequence (S4) models, motivated by their remarkable ability to capture long-range dependencies in low-dimensional sequences and their complementary strengths. We propose S4WM, the first S4-based world model that can generate high-dimensional image sequences through latent imagination. Furthermore, we extensively compare RNN-, Transformer-, and S4-based world models across four sets of environments, which we have specifically tailored to assess crucial memory capabilities of world models, including long-term imagination, context-dependent recall, reward prediction, and memory-based reasoning. Our findings demonstrate that S4WM outperforms Transformer-based world models in terms of long-term memory, while exhibiting greater efficiency during training and imagination. These results pave the way for the development of stronger MBRL agents. § INTRODUCTION The human brain is frequently compared to a machine whose primary function is to construct models of the world, enabling us to predict, plan, and react to our environment effectively <cit.>. These mental representations, referred to as world models, are integral to essential cognitive functions like decision-making and problem-solving. Similarly, one of the pivotal tasks in artificial intelligence (AI) systems that aim for human-like cognition is the development of analogous world models. Model-Based Reinforcement Learning (MBRL) <cit.> has emerged as a promising approach that builds world models through interaction with the environment. As a fundamental component of MBRL, these world models empower artificial agents to anticipate the consequences of their actions and plan accordingly, leading to various advantages. Notably, MBRL offers superior sample efficiency, mitigating the high data requirements commonly associated with model-free methods. Moreover, MBRL exhibits enhanced exploration, transferability, safety, and explainability <cit.>, making it well-suited for complex and dynamic environments where model-free methods tend to struggle. The effectiveness and characteristics of world models crucially depend on their backbone neural network architecture. In particular, the backbone architecture dictates the model's capabilities of capturing long-term dependencies and handling stochasticity in the environment. Additionally, it affects the compactness of memory footprint and the speed of future prediction rollouts. Furthermore, in visual MBRL that finds extensive practical applications, the backbone architecture holds even greater significance than it does in state-based MBRL. This is due to the need to deal with high-dimensional, unstructured, and temporal observations.
Nevertheless, choosing the appropriate backbone architecture for visual MBRL has become a considerable challenge due to the rapidly evolving landscape of deep architectures for temporal sequence modeling. This includes the recent advancements in major backbone architecture classes, notably Transformers <cit.> and the Structured State Space Sequence (S4) model <cit.>. Traditionally, Recurrent Neural Networks (RNNs) <cit.> have been the go-to backbone architecture, thanks to their efficient use of computational resources in processing sequential data. However, RNNs tend to suffer from vanishing gradient issues <cit.>, limiting their long-term memory capacity. Recently, Transformers <cit.> have demonstrated superior sequence modeling capabilities in multiple domains, including natural language processing and computer vision <cit.>. Their self-attention mechanism grants direct access to all previous time steps, thereby enhancing long-term memory. Moreover, Transformers offer parallel trainability and exhibit faster training speeds than RNNs. However, their quadratic complexity and slow generation speed pose challenges when dealing with very long sequences. To address these limitations, the S4 model has been proposed, offering both parallel training and recurrent generation with sub-quadratic complexity. In the Long Range Arena <cit.> benchmark consisting of low-dimensional sequence modeling tasks, the S4 model outperforms many Transformer variants in both task performance and computational efficiency. In this paper, we present two primary contributions. Firstly, we introduce S4WM, the first S4-based world model. This is a significant development since it was unclear whether the S4 model would be effective as a high-dimensional visual world model, and if so, how this could be achieved. To this end, we expand the S4 architecture to manage high-dimensional image sequences and propose its probabilistic latent variable modeling framework based on variational inference. Secondly, we conduct the first empirical comparative study on the three major backbone architectures for visual world modeling—RNNs, Transformers, and S4. Our results show that S4WM outperforms RNNs and Transformers across multiple memory-demanding tasks, including long-term imagination, context-dependent recall, reward prediction, and memory-based reasoning. In terms of speed, S4WM trains the fastest, while RNNs exhibit significantly higher imagination throughput. We believe that by shedding light on the strengths and weaknesses of these backbones, our study contributes to a deeper understanding that can guide researchers and practitioners in selecting suitable architectures, and potentially inspire the development of novel approaches in this field. § RELATED WORK Structured State Space Sequence (S4) Model. Originally introduced in <cit.>, S4 is a sequence modeling framework that solves all tasks in the Long Range Arena <cit.> for the first time. At its core is a structured parameterization of State Space Models (SSMs) that allows efficient computation and exhibits superior performance in capturing long-range dependencies both theoretically and empirically. However, the mathematical background of S4 is quite involved. To address this, a few recent works seek to simplify, understand, and improve S4 <cit.>. It has been discovered that S4 and Transformers have complementary strengths <cit.>. For example, Transformers can be better at capturing local (short-range) information and performing context-dependent operations. 
Therefore, hybrid architectures have been proposed to achieve the best of both worlds. Furthermore, S4 and its variants have found applications in various domains, such as image and video classification <cit.>, audio generation <cit.>, time-series generation <cit.>, language modeling <cit.>, and model-free reinforcement learning <cit.>. Our study introduces the first world model based on S4 for improving long-term memory in MBRL. We also investigate the strengths and weaknesses of S4 and Transformers in the context of world model learning. World Models. World models <cit.> are typically implemented as dynamics models of the environment that enable the agent to plan into the future and learn policies from imagined trajectories. RNNs have been the predominant backbone architecture of world models. A notable example is RSSM <cit.>, which has been widely used in both reconstruction-based <cit.> and reconstruction-free <cit.> MBRL agents. With the advent of Transformers <cit.>, recent works have also explored using Transformers as the world model backbone <cit.>. While Transformers are less prone to vanishing gradients <cit.> than RNNs, their quadratic complexity limits their applicability to long sequences. For example, recent works <cit.> use a short imagination horizon of ∼ 20 steps. In contrast, S4WM can successfully imagine hundreds of steps into the future with sub-quadratic complexity. We also develop an improved Transformer-based world model that can deal with long sequences by employing Transformer-XL <cit.>. Agent Memory Benchmarks. While many RL benchmarks feature partially observable environments, they tend to evaluate multiple agent capabilities simultaneously <cit.> (e.g., exploration and modular skill learning), and may be solvable with a moderate memory capacity <cit.>. Additionally, some benchmarks are designed for model-free agents <cit.>, and may contain stochastic dynamics that are not controlled by the agents, making it hard to separately assess the memory capacity of world models. The recently proposed Memory Maze <cit.> focuses on measuring long-term memory and provides benchmark results for model-based agents. We build upon Memory Maze and introduce additional environments and tasks to probe a wider range of memory capabilities. § BACKGROUND The S4 model <cit.> is a particular parameterization of the state space models. We first present relevant background on the state space models, and then introduce the S4 model. State Space Models (SSMs) are a widely used scientific model that defines a mapping from a 1-D input signal u(t) to a 1-D output signal y(t). They can be discretized into a sequence-to-sequence mapping by a step size Δ. The continuous-time and discrete-time SSMs can be described as: (continuous-time) s'(t) = As(t) + B u(t) y(t) = Cs(t) + D u(t) , (discrete-time) s_k = As_k-1 + B u_k y_k = Cs_k + D u_k . Here, the vectors s(t) and s_k are the internal hidden state of the SSM, and the discrete-time matrices A, B, C, D can be computed from their continuous-time counterparts A, B, C, D and the step size Δ. We will primarily deal with the discrete-time SSMs, which allow efficient autoregressive generation like RNNs due to the recurrence in s_k. Unlike RNNs, however, SSMs also offer parallel computation given the full input sequence u = u_1:T. 
Specifically, the output sequence y = y_1:T can be computed by a discrete convolution:

y = K ∗ u + Du ,  K = (CB, CAB, …, CA^T-1B) ,

where ∗ is the (non-circular) convolution operator, and we have assumed for simplicity that the initial hidden state s_0 = 0. The S4 model aims to use SSMs for deep sequence modeling, where the matrices A, B, C, D and the step size Δ are learnable parameters to be optimized by gradient descent. Note that SSMs involve computing powers of A, which is in general expensive and can lead to the exploding/vanishing gradients problem <cit.>. In practice, SSMs with a randomly initialized A perform very poorly <cit.>. To address these problems, S4 parameterizes A as a Diagonal Plus Low-Rank (DPLR) matrix <cit.>: A = Λ - PP^*, where Λ is a diagonal matrix, P is typically a column vector (with rank 1), and P^* is the conjugate transpose of P. This parameterization allows the convolution kernel K to be computed in O(N+T) time and O(N+T) space, where N is the size of the hidden state s_k and T is the sequence length, enabling fast parallel training. Moreover, this parameterization includes the HiPPO matrices <cit.>, which are theoretically derived based on continuous-time memorization and empirically shown to better capture long-range dependencies. In practice, S4 initializes A to the HiPPO matrix. To cope with vector-valued inputs and outputs, i.e., u_k, y_k ∈ R^H, S4 makes H copies of the SSM, each operating on one dimension, and mixes the outputs by a position-wise linear layer.

§ S4WM: AN S4-BASED WORLD MODEL

We consider an agent interacting with a partially observable environment. At each time step t, the agent receives an image observation x_t. It then chooses an action a_t+1 based on its policy, and receives the next observation x_t+1. For simplicity, we omit the reward here. We aim to model p(x_1:T | x_0, a_1:T), the distribution of future observations given the action sequence. We note that it is not required to model p(x_0), as world model imagination is typically conditioned on at least one observation. While S4 <cit.> has shown a remarkable ability to model long-range dependencies, it operates directly in the observation space. For example, S4 models images as sequences of pixels, and directly learns the dependencies among individual pixels. This is hard to scale to high-dimensional sequences, such as the sequences of images that we aim to model. Inspired by RSSM <cit.>, we learn the environment dynamics in a compact latent space. This not only allows fast imagination and planning, but also enables S4 to efficiently model long-range dependencies in the latent space. We name the resulting model S4WM. It models the observations and state transitions through a probabilistic generative process:

p(x_1:T | x_0, a_1:T) = ∫ p(z_0 | x_0) ∏_t=1^T p(x_t | z_≤ t, a_≤ t) p(z_t | z_<t, a_≤ t) dz_0:T ,

where z_0:T are the stochastic latent states. We note that computing the likelihood p(x_t | z_≤ t, a_≤ t) and the prior p(z_t | z_<t, a_≤ t) requires extracting relevant information from the history (z_<t, a_≤ t). Therefore, it is crucial to maintain a long-term memory of the history. To this end, we use a stack of S4 blocks (shown in <ref> Right) to encode the history (z_<t, a_≤ t) into an embedding vector h_t for each t. This can be done in parallel during training and sequentially during imagination:

(parallel) h_1:T, s_T = S4Blocks(g_1:T, s_0) ,  (single step) h_t, s_t = S4Blocks(g_t, s_t-1) .

Here, g_t = MLP([z_t-1, a_t]) is the input to the S4 blocks, and s_t is the internal hidden state of the S4 blocks, whose initial value is s_0 = 0.
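As an aside on the state-space background above, the following NumPy sketch (ours, not the authors' code) rolls out a small discrete-time SSM both recurrently and as a convolution with the kernel K = (CB, CAB, …, CA^T-1B), and checks that the two computation modes agree; the matrices are random stand-ins rather than a trained or HiPPO-initialized A.

import numpy as np

N, T = 4, 16                                   # state size, sequence length
rng = np.random.default_rng(0)
A = 0.9 * np.eye(N) + 0.01 * rng.standard_normal((N, N))   # stable-ish dynamics matrix
B = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))
D = rng.standard_normal((1, 1))
u = rng.standard_normal(T)                     # 1-D input signal u_1..u_T

# Recurrent (autoregressive) mode: s_k = A s_{k-1} + B u_k,  y_k = C s_k + D u_k
s = np.zeros((N, 1))
y_rec = np.zeros(T)
for k in range(T):
    s = A @ s + B * u[k]
    y_rec[k] = (C @ s + D * u[k]).item()

# Parallel (convolutional) mode with kernel K = (CB, CAB, ..., C A^{T-1} B)
K = np.zeros(T)
M = B.copy()
for k in range(T):
    K[k] = (C @ M).item()
    M = A @ M
y_conv = np.convolve(u, K)[:T] + D.item() * u

assert np.allclose(y_rec, y_conv)              # both modes give the same outputs

S4 does not form the powers of A explicitly as this naive sketch does; it exploits the DPLR structure to compute K efficiently. The sketch only illustrates the recurrence–convolution equivalence.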
We find that adding the final MLP in each S4 block can improve generation quality. After obtaining h_t, we use it to compute the sufficient statistics of the prior and likelihood:

p(z_t | z_<t, a_≤ t) = MLP(h_t) ,  p(x_t | z_≤ t, a_≤ t) = 𝒩(x̂_t, 1) ,  x̂_t = Decoder([h_t, z_t]) .

For training, we use variational inference. The approximate posterior is defined as:

q(z_0:T | x_0:T, a_1:T) = ∏_t=0^T q(z_t | x_t) ,  where q(z_0 | x_0) = p(z_0 | x_0) .

We use a CNN encoder to compute the sufficient statistics of the posterior from image observations. This allows all posterior samples z_0:T to be obtained in parallel, thereby fully leveraging the parallel computation ability offered by S4 during training. The training objective is to maximize the evidence lower bound (ELBO):

log p(x_1:T | x_0, a_1:T) ≥ 𝔼_q[ ∑_t=1^T log p(x_t | z_≤ t, a_≤ t) - ℒ_KL( q(z_t | x_t), p(z_t | z_<t, a_≤ t) ) ] .

We provide an illustration of the training and imagination procedures in <ref>, and detailed descriptions in <ref>.

Implementation Details. Our CNN encoder and decoder architecture follows DreamerV3 <cit.>, with layer normalization <cit.> and SiLU <cit.> nonlinearity. The latent states z_0:T are vectors of categorical variables <cit.> optimized by straight-through gradients <cit.>. To facilitate stable training, we parameterize the categorical distributions as mixtures of 1% uniform and 99% neural network output <cit.>. We use KL balancing <cit.> to scale the gradient of the KL loss ℒ_KL(q, p) with respect to the posterior q and prior p:

ℒ_KL(q, p) = α · KL[ sg(q) ∥ p ] + (1 - α) · KL[ q ∥ sg(p) ] .

Here, sg(·) denotes the stop-gradient operator, and we set α = 0.8 to put more emphasis on learning the prior toward the posterior than the other way around.

§ EXPERIMENTS

§.§ Environments

Unlike previous works <cit.> that primarily evaluate the final performance of model-free agents on memory-demanding tasks, we seek to understand the memory capabilities of world models in model-based agents in terms of long-term imagination, context-dependent recall, reward prediction, and memory-based reasoning. We believe that our investigation provides more insights than the final performance alone, and paves the way for model-based agents with improved memory. To this end, we develop a diverse set of environments shown in <ref>, each targeting a specific memory capability. The environments are based on the 3D Memory Maze <cit.> and the 2D MiniGrid <cit.>, both with partial observations. The world models are learned from an offline dataset collected by a scripted policy for each environment. This allows the world models to be evaluated independently of the policy learning algorithms. Specifically, for each episode, the environment is regenerated. To simplify the evaluation of world model imagination, we design the data-collecting policy to consist of a context phase and a query phase. In the context phase, the policy fully traverses the environment, while in the query phase, the policy revisits some parts of the environment. For evaluation, we use unseen episodes collected by the same policy as training. The world model observes the context phase, and is then evaluated on its imagination given the action sequence in the query phase.
Because the environments are deterministic and have moderate visual complexity, and the context phase fully reveals the information of the environment, it suffices to use the mean squared error (MSE) as our main evaluation metric. In the following, we first motivate our choice of the baseline world model backbones through a comparison of speed and memory consumption, and then introduce and present the results for each environment in detail. §.§ Baselines RSSM-TBTT. RSSM <cit.> is an RNN-based world model backbone used in state-of-the-art MBRL agents <cit.>. Recently, <cit.> show that training RSSM with truncated backpropagation through time (TBTT) can lead to better long-term memory ability. We follow their implementation and denote the model as RSSM-TBTT. TSSM-XL. TSSM <cit.> is the first Transformer-based world model for improving long-term memory. It was originally evaluated on sequences of length ∼ 100. In this paper, we seek to evaluate on much longer sequences (up to 2000 steps), and it is impractical to feed the entire sequence to the vanilla Transformer <cit.>. Therefore, we use Transformer-XL <cit.> as the backbone and denote the model as TSSM-XL. It divides the full sequence into chunks, and maintains a cache of the intermediate hidden states from processed chunks. This cache serves as an extended context that allows modeling longer-term dependencies. Speed and Memory Usage. We note that the cache length m is a crucial hyperparameter of TSSM-XL. A larger m can potentially improve the memory capacity, at the cost of slower training and higher memory consumption. To ensure a fair comparison in our experiments in the next few sections, we first investigate the speed and memory usage of S4WM, RSSM-TBTT, and TSSM-XL with several m values. The results are shown in <ref>, all obtained on a single NVIDIA RTX A6000 GPU. We make sure that the models have comparable number of parameters. For training, we use a batch size of 8, sequence length of 1000, and image size of 64 × 64. We report the number of episodes per second processed by each model, averaged over 100 batches, and also the peak memory usage. S4WM and TSSM-XL trains much faster than RSSM-TBTT due to their parallel computation during training, while RSSM-TBTT is much more memory-efficient. For imagination, we use a batch size of 64, context length of 500, generation length of 500, and image size of 64 × 64. We report the number of frames per second, averaged over 8 batches. RSSM-TBTT shines here, achieving ∼ 10 × throughput compared to S4WM and TSSM-XL. While S4WM also uses recurrence during imagination, its multi-layered recurrence structure with MLPs in between slows down its performance. As for memory usage, the decoder takes up most of the memory decoding all steps in parallel, leading to similar memory consumption of all models. Based on our investigation, TSSM-XL with a cache length m = 128 is the closest to S4WM in terms of speed and memory usage. Therefore, we use TSSM-XL with m = 128 for all subsequent experiments. §.§ Long-Term Imagination The ability of world models to perform long-term imagination is crucial to long-horizon planning. While many RL benchmarks can be tackled with short-term imagination of ∼ 15 steps <cit.>, here we seek to understand the long-term imagination capability of world models and explore their limits by letting the world models imagine hundreds of steps into the future. 
To this end, we develop three environments with increasing difficulty, namely Two Rooms, Four Rooms, and Ten Rooms, based on the 3D Memory Maze <cit.>. The top-down views are shown in <ref>. In the context phase, the data collecting policy starts from a random room, sequentially traverses all rooms, and returns to the starting room. In the query phase, the policy revisits each room in the same order as the context phase. During evaluation, the world model first observes the context phase, and is then asked to imagine future observations given the action sequence of the query phase. As shown in <ref>, all models obtain good reconstruction, while S4WM is much better in the Two Rooms and Four Rooms environment for long-term generation up to 500 steps. We demonstrate the superior generation quality of S4WM in <ref>. All models are able to capture the high-level maze layout. However, RSSM-TBTT and TSSM-XL make many mistakes in details such as wall colors, object colors, and object positions, while S4WM is able to generate much more accurately, with only minor errors in object positions. We further show the per step generation MSE in <ref>. S4WM is able to maintain a relatively good generation quality for up to 500 steps, while RSSM-TBTT and TSSM-XL make large generation errors even within 50 steps. We notice a periodic drop in the generation MSE. This is when the agent moves from one room to another through a narrow corridor where the action sequence is less diverse. We also find that all models struggle in the Ten Rooms environment where the context length is 1101 and the query length is 900. This likely reaches the sequence modeling limits of the S4 model, and we leave the investigation of more sophisticated model architectures to future work. §.§ Context-Dependent Recall Humans are able to recall past events in great detail. This has been compared to “mental time travel” <cit.>. Motivated by this, we develop a “teleport” version of the Two Rooms, Four Rooms, and Ten Rooms environments. After the initial context phase, the agent is teleported to a random point in history, and is asked to recall what happened from that point onwards, given the exact same action sequence that the agent took. To succeed in this task, the agent needs to figure out where it is teleported by comparing the new observations received after the teleport to its own memory of the past. In other words, the content to recall depends on the new observations. Transformers have been shown to be better than S4 at performing such context-dependent operations in low-dimensional sequence manipulation tasks <cit.> and synthetic language modeling tasks <cit.>. We investigate this in the context of world model learning, with high-dimensional image inputs. To help the model retrieve the correct events in history, we provide up to 20 observations after the teleport as additional contexts. The generation MSE of the recall is reported in <ref>. TSSM-XL performs the best in the Two Rooms environment where the context phase is short, and is able to recall successfully without additional observations. When the context phase is longer as in Four Rooms and Ten Rooms, S4WM performs the best. We visually show the recall quality with 20 observations after the teleport in <ref>. In the Two Rooms environment, both TSSM-XL and S4WM are able to recall accurately. However, only S4WM is able to maintain such recall quality in the more challenging Four Rooms environment. 
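For concreteness, the per-step generation MSE used in these comparisons can be sketched as follows; world_model.observe and world_model.imagine are hypothetical interface names (not the actual implementation) standing in for conditioning on the context phase and decoding imagined frames for the query-phase actions.

import numpy as np

def per_step_generation_mse(world_model, episode, context_len):
    # Split the episode into the context phase and the query phase
    ctx_obs = episode["obs"][:context_len]
    ctx_act = episode["actions"][:context_len]
    qry_act = episode["actions"][context_len:]
    qry_obs = episode["obs"][context_len:]          # ground-truth future frames

    state = world_model.observe(ctx_obs, ctx_act)   # hypothetical: encode the context into memory
    pred = world_model.imagine(state, qry_act)      # hypothetical: imagined frames, same shape as qry_obs

    # Mean squared error per imagination step, averaged over pixels and channels
    return np.mean((pred - qry_obs) ** 2, axis=(1, 2, 3))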
§.§ Reward Prediction To facilitate policy learning within imagination, world models need to accurately predict the rewards. In this section, we evaluate the reward prediction accuracy over long time horizons. To decouple the challenges posed by 3D environments from long-term reward prediction, we use the visually simpler 2D MiniGrid <cit.> environment. Specifically, we develop the Distracting Memory environment, which is more challenging than the original MiniGird Memory environment, due to distractors of random colors being placed in the hallway. A top-down view is shown in <ref>. Each episode terminates when the agent reaches one of the squares on the right. A reward of 1 is given if the square reached is of the same color as the square in the room on the left. Otherwise, no reward is given. In the context phase, the data collecting policy starts in the middle of the hallway, then traverses the hallway and returns to the starting position. In the query phase, the policy goes to one of the two squares on the right uniformly at random. To accurately predict the reward, the world model must learn to ignore the distractors while keeping track of the agent's position. We report two types of reward prediction accuracy in <ref>. The inference accuracy is measured when the model takes the full sequence of observations as input (including both the context and the query phases). This evaluates the model's ability to capture long-range dependencies independently of the imagination quality. In contrast, the imagination accuracy is evaluated within the model's imagination, conditioned on the observations in the context phase and additionally the action sequence in the query phase. Our results show that only S4WM is able to accurately predict rewards within imagination. TSSM-XL has limited success when observing the full sequence, but fails to imagine future rewards accurately. RSSM-TBTT completely fails, and its reward prediction is close to random guessing. Our visualization of model imagination in <ref> reveals that the failure of TSSM-XL and RSSM-TBTT is mainly due to their inability to keep track of the agent's position. §.§ Memory-Based Reasoning In the previous experiments, the model's memory of the environment can largely be kept fixed after the context phase. In this section, we explore the setting where the memory needs to be frequently updated in order to reason about the future. We develop the Multi Doors Keys environment, where the agent collects keys to unlock doors. A top-down view is shown in <ref>. Each time a door is unlocked, the corresponding key will be consumed, so it cannot be used to unlock other doors of the same color. The agent is allowed to possess multiple keys. In the context phase, the agent visits all keys and doors without picking up any keys. In the query phase, the agent attempts to unlock two random doors after picking up each key. After all keys are picked up, the agent will try to unlock each door once again. To successfully predict the outcome when the agent attempts to unlock a door, the world model must constantly update its memory when a key is picked up or consumed. Since the environment is visually simple, we find the generation MSE to be a good indicator of how well the model predicts the future door states. As reported in <ref> and visually shown in <ref>, S4WM performs well on all environments, demonstrating its ability to keep updating the memory, while both RSSM-TBTT and TSSM-XL struggle. 
§ CONCLUSION

In this paper, we introduced S4WM, the first S4-based visual world model that effectively expands the long-range sequence modeling ability of S4 from low-dimensional inputs to high-dimensional images. Furthermore, we presented the first comparative investigation of major world model backbones in a diverse set of environments specifically designed to evaluate critical memory capabilities. Our findings demonstrate the superior performance of S4WM over RNNs and Transformers across multiple tasks, including long-term imagination, context-dependent recall, reward prediction, and memory-based reasoning. One limitation of our study is that we primarily focused on visually simple and deterministic environments to simplify the evaluation process. Future work could explore proper evaluation metrics for more complex and stochastic environments.

§ ACKNOWLEDGEMENTS

This work is supported by Brain Pool Plus Program (No. 2021H1D3A2A03103645) and Young Researcher Program (No. 2022R1C1C1009443) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT. We thank Jurgis Pašukonis, Danijar Hafner, Chang Chen, and Jaesik Yoon for insightful discussions.

§ ENVIRONMENT AND DATASET DETAILS

For each 3D environment, i.e., Two Rooms, Four Rooms, and Ten Rooms, we generate 30K trajectories using a scripted policy, of which 28K are used for training, 1K for validation, and 1K for testing. For each 2D environment, i.e., Distracting Memory and Multi Doors Keys, we generate 10K trajectories using a scripted policy, of which 8K are used for training, 1K for validation, and 1K for testing. All reported results are obtained from the test trajectories, using the model checkpoints that achieve the best validation loss. The image observations are of size 64 × 64 × 3 for 3D environments, and 40 × 40 × 3 for 2D environments.

§ ABLATION STUDY

In this section, we investigate alternative architectural choices for S4WM and different cache lengths m for TSSM-XL. We conduct these ablation studies on the Four Rooms and Ten Rooms environments.

§.§ Alternative Architectures of S4WM

S4WM-Full-Posterior. In our main experiments, we have chosen to use the factorized posterior q(z_0:T | x_0:T, a_1:T) = ∏_t=0^T q(z_t | x_t) for simplicity and parallel training ability. However, we note that it is possible to condition on the full history while maintaining the parallel training ability:

q(z_0:T | x_0:T, a_1:T) = ∏_t=0^T q(z_t | x_≤ t, a_≤ t) .

We illustrate this architecture in <ref>. Here, we first use a CNN encoder to obtain a deterministic embedding e_t for each image observation x_t, and then use a stack of S4 blocks to encode the history and compute the sufficient statistics of the posterior for all t in parallel:

q(z_t | x_≤ t, a_≤ t) = MLP(h_t) ,  h_0:T, s_T = S4Blocks(g_0:T, s_-1) ,  g_t = MLP([e_t, a_t]) .

We have defined the dummy action a_0 = ∅ and the initial S4 hidden state s_-1, which are both implemented as vectors of all zeros. We note that the S4 blocks in the posterior are not shared with those in the prior.

S4WM-No-MLP. In our implementation, each S4 block consists of two S4 layers and one MLP. We note that the MLP is not used in the original S4 <cit.> for the Long Range Arena tasks <cit.>, but is commonly used in language modeling and audio generation <cit.>. Hence, we consider a model variant without the MLP in the S4 blocks to investigate the importance of this MLP in world model learning.

Results.
We report results on the non-teleport Four Rooms and Ten Rooms environments in <ref>. Results on teleport environments are reported in <ref>. We also show a comparison of speed and memory usage in <ref>. Our results suggest that S4WM-Full-Posterior performs similarly to S4WM on the Four Rooms environment, and becomes better in the more challenging Ten Rooms environment where the episode length is longer. However, it is more computationally demanding than S4WM during training. In contrast, while S4WM-No-MLP is the most computationally efficient, its performance is much worse than S4WM, indicating the importance of the MLP in S4 blocks to both long-term imagination and context-dependent recall.

§.§ Cache Length of TSSM-XL

In our main experiments, we have used the cache length m = 128 for TSSM-XL, because it is close to S4WM in terms of computational cost. Here we provide a more thorough investigation with larger cache lengths. We report results on the non-teleport Four Rooms and Ten Rooms environments in <ref>. Results on teleport environments are reported in <ref>. We find that increasing the cache length generally improves generation quality, at the cost of slower training and imagination speed. Notably, TSSM-XL with m = 512 shows better context-dependent recall than S4WM on the Ten Rooms teleport environment, consistent with the findings in previous work <cit.> that Transformers are better than S4 at performing context-dependent operations.

§ OFFLINE PROBING ON MEMORY MAZE

Pašukonis et al. <cit.> recently proposed the Memory Maze offline probing benchmark for evaluating the representation learning ability of world models. For completeness, we report the benchmark results in <ref>. Our implementation is based on the newer DreamerV3 <cit.>, and the results are slightly better than reported in the original Memory Maze paper. A key difference between Memory Maze and the environments in our main experiments is that in Memory Maze, each episode has a randomized maze layout, while in our setting, the maze layout is fixed and only the wall colors and objects are randomized. Our results show that RSSM-TBTT excels at recognizing the maze layout when the maze is relatively small. The larger 15 × 15 maze remains challenging for all models.

§ EXTENDED BACKGROUND

In this section, we briefly introduce RSSM and TSSM for completeness. We denote the sequence of observations and actions as (x_0, a_1, x_1, a_2, x_2, …, a_T, x_T). Namely, the agent takes action a_t+1 after observing x_t, and receives the next observation x_t+1. We omit the reward for simplicity.

§.§ RSSM

RSSM <cit.> models the observations and state transitions through the following generative process:

p(x_0:T | a_1:T) = ∫ ∏_t=0^T p(x_t | z_≤ t, a_≤ t) p(z_t | z_<t, a_≤ t) dz_0:T ,

where z_0:T are the stochastic latent states. The approximate posterior is defined as:

q(z_0:T | x_0:T, a_1:T) = ∏_t=0^T q(z_t | z_<t, a_≤ t, x_t) .

The conditioning on previous states z_<t and actions a_≤ t appears multiple times. RSSM uses a shared GRU <cit.> to compress z_<t and a_≤ t into a deterministic encoding h_t:

h_t = GRU(h_t-1, MLP([z_t-1, a_t])) .

This is then used to compute the sufficient statistics of the prior, likelihood, and posterior:

p(z_t | z_<t, a_≤ t) = MLP(h_t) ,  p(x_t | z_≤ t, a_≤ t) = 𝒩(x̂_t, 1) ,  x̂_t = Decoder([h_t, z_t]) ,  q(z_t | z_<t, a_≤ t, x_t) = MLP([h_t, e_t]) ,  e_t = Encoder(x_t) .

The training objective is to maximize the evidence lower bound (ELBO):

log p(x_0:T | a_1:T) ≥ 𝔼_q [ ∑_t=0^T log p(x_t | z_≤ t, a_≤ t) - ℒ_KL( q(z_t | z_<t, a_≤ t, x_t), p(z_t | z_<t, a_≤ t) ) ] .
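A schematic single step of this recurrence, written with placeholder callables (gru_cell, mlp_in, mlp_prior, mlp_post, encoder) rather than the authors' implementation, mirrors the equations above:

import numpy as np

def rssm_step(h_prev, z_prev, a_t, x_t, gru_cell, mlp_in, mlp_prior, mlp_post, encoder):
    # h_t = GRU(h_{t-1}, MLP([z_{t-1}, a_t]))
    h_t = gru_cell(h_prev, mlp_in(np.concatenate([z_prev, a_t])))
    # Prior p(z_t | z_<t, a_<=t) from the deterministic state alone
    prior_stats = mlp_prior(h_t)
    # Posterior q(z_t | z_<t, a_<=t, x_t) additionally sees the encoded observation
    e_t = encoder(x_t)
    post_stats = mlp_post(np.concatenate([h_t, e_t]))
    return h_t, prior_stats, post_stats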
§.§ TSSM

Our implementation of TSSM <cit.> uses the same generative process, approximate posterior, and training objective as S4WM. For convenience, we repeat them below. The generative process is:

p(x_1:T | x_0, a_1:T) = ∫ p(z_0 | x_0) ∏_t=1^T p(x_t | z_≤ t, a_≤ t) p(z_t | z_<t, a_≤ t) dz_0:T ,

where z_0:T are the stochastic latent states. The approximate posterior is defined as:

q(z_0:T | x_0:T, a_1:T) = ∏_t=0^T q(z_t | x_t) ,  where q(z_0 | x_0) = p(z_0 | x_0) .

The training objective is to maximize the evidence lower bound (ELBO):

log p(x_1:T | x_0, a_1:T) ≥ 𝔼_q[ ∑_t=1^T log p(x_t | z_≤ t, a_≤ t) - ℒ_KL( q(z_t | x_t), p(z_t | z_<t, a_≤ t) ) ] .

The main difference from S4WM is that in TSSM the prior p(z_t | z_<t, a_≤ t) is computed by a stack of Transformer <cit.> blocks. Specifically, the Transformer blocks output an embedding vector h_t through self-attention over the history:

h_t = TransformerBlocks(g_1:t) ,  g_t = MLP([z_t-1, a_t]) .

The h_t is then used for predicting the next latent state z_t and decoding the latent state into the image x̂_t:

p(z_t | z_<t, a_≤ t) = MLP(h_t) ,  x̂_t = Decoder([h_t, z_t]) .

§ HYPERPARAMETERS

We base our implementation on the publicly available code of S4 <cit.> and DreamerV3 <cit.>. We provide the hyperparameters used for 3D and 2D environments in <ref>, respectively. For 3D environments, we largely follow the hyperparameters used in Memory Maze <cit.>. For 2D environments, we follow the architecture of DreamerV3-S. We use a linear warmup (1000 gradient steps) and cosine-annealing learning rate schedule for S4WM. For TSSM-XL and RSSM-TBTT, we use constant learning rates following the original papers.

§ BROADER IMPACT

The proposed S4WM and other world models investigated in this paper are fundamentally deep generative models. Therefore, they inherit the potential negative social impacts that deep generative models may have, such as generating fake images and videos that can contribute to digital misinformation and deception. Caution must be exercised in the application of these models, adhering to ethical guidelines and regulations to mitigate the risks.
http://arxiv.org/abs/2307.03061v1
20230706152834
Learning Constrained Corner Node Trajectories of a Tether Net System for Space Debris Capture
[ "Feng Liu", "Achira Boonrath", "Prajit KrisshnaKumar", "Elenora M. Botta", "Souma Chowdhury" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Learning Constrained Corner Node Trajectories of a Tether Net System for Space Debris Capture

Feng Liu^* (Ph.D. Student, Department of Mechanical and Aerospace Engineering, AIAA Student Member), University at Buffalo, Buffalo, New York, 14260, [email protected]
Achira Boonrath^*, University at Buffalo, Buffalo, New York, 14260, [email protected]
Prajit KrisshnaKumar^*, University at Buffalo, Buffalo, New York, 14260, [email protected]
Eleonora M. Botta^† (Assistant Professor, Mechanical and Aerospace Engineering, AIAA Member), University at Buffalo, Buffalo, New York, 14260, [email protected]
Souma Chowdhury^‡ (Associate Professor, Mechanical and Aerospace Engineering, AIAA Senior Member, Corresponding Author), University at Buffalo, Buffalo, New York, 14260, [email protected]

The Earth's orbit is becoming increasingly crowded with debris that poses significant safety risks to the operation of existing and new spacecraft and satellites. The active tether-net system, which consists of a flexible net with maneuverable corner nodes launched from a small autonomous spacecraft, is a promising solution for capturing and disposing of such space debris. The requirement of autonomous operation and the need to generalize over scenarios with debris in different rotational rates make the capture process significantly challenging. The space debris could rotate about multiple axes, which, along with sensing/estimation and actuation uncertainties, calls for a robust, generalizable approach to guiding the net launch and flight – one that can guarantee robust capture. This paper proposes a decentralized actuation system combined with reinforcement learning for planning and controlling this tether-net system. In this new system, four microsatellites with cold-gas-type thrusters act as the corner nodes of the net and can thus help control or correct the flight of the net after launch. The microsatellites pull the net to complete the task of approaching and capturing the space debris. The proposed method uses an RL framework that integrates proximal policy optimization to find the optimal solution based on the dynamics simulation of the net and the microsatellites performed in Vortex Studio. The RL framework finds the optimal trajectory that is both fuel-efficient and ensures a desired level of capture quality.

ADR: Active Debris Removal. RL: Reinforcement Learning. Keywords: active debris removal, reinforcement learning, optimization.

§ INTRODUCTION

Earth's orbit is becoming increasingly dangerous for current and future space missions since the growing amount of space debris threatens operational safety <cit.>. ADR is one of the solutions to mitigate the problem. Among the multiple methods studied, tether-net systems have been proposed for their high flexibility and good capturing range <cit.>.
Previous studies <cit.> have shown that tether-net systems are effective for capturing uncooperative debris. Among others, the research of Botta et al. <cit.> examined the dynamics of the deployment and capture phase of the debris removal tasks using net-based systems. Chen et al. analyzed the system's robustness to errors in a sample mission scenario in which the second stage of the Zenit-2 launch vehicle is the target debris of interest <cit.>. Additionally, Zeng et al. <cit.> conducted research on the closing mechanism with uncertainties, which applies RL <cit.> to ensure debris capture. Studies <cit.> have shown that using space robots is effective in increasing the efficiency and reliability of capturing uncooperative space debris. Meng et al. <cit.> proposed the Autonomous Maneuverable Space Net (AMSN) system, which consists of a flexible net and several Maneuverable Units (MUs). The AMSN has a greater effective net deployment range than the traditional tether-net systems, and the MUs allow the AMSN to perform more flexible operations. A chaser satellite brings the AMSN to rendezvous with the target in orbit around the Earth. The chaser then releases the AMSN with an initial velocity, and the MUs on the AMSN control the shape and movement of the net. The net closes, and locks after the target is in the net mouth. The AMSN then drags the captured target into the atmosphere to be incinerated or to a graveyard orbit. In this process, the trajectory and shape of the net are essential for a successful mission <cit.>. In intelligent autonomous systems, Artificial Neural Networks (ANN) have become a promising analysis tool for decision-support models <cit.>. An ANN can map states to actions in a policy model, and various ANN learning methods have already been applied to robotics and control applications. Besides RL <cit.>, learning methods such as Neuroevolution <cit.> and Supervised Learning <cit.> are also popular in similar scenarios. For the tether-net systems, the launching and wrapping control is compatible with the advanced RL <cit.> and neuroevolution <cit.> methods. Due to various debris characteristics, the system uncertainties and selecting optimal actions could be challenging without these learning methods. Most previous studies are based on the assumption that the launching phase is under ideal conditions. However, in reality, the perfect launching conditions are challenging due to uncertainties and hardware limitations for both the launch equipment and the determination of the target pose. The deployment trajectory of the net is one of the most critical components for a successful capture, especially when considering that a relaunch of the system is not possible, so a method to overcome the effect of potential error and discover more reliable trajectories for the MUs to follow needs to be developed. Meanwhile, most of the designs of tether-net systems are centralized <cit.>, which works well in a short deployment distance and in an ideal environment where the uncertainties in the mission are kept to the minimum, and the target is rotating slowly. A semi-decentralized system offers more flexibility and robustness in this complex environment for a long-range deployment, with a target having more complex movement, such as spinning about multiple axes, and environmental uncertainty. 
Therefore, instead of focusing on the control of the system in a centralized method, focusing on learning the corner nodes' trajectory to design a semi-decentralized system can be a different approach to capturing the target. The succeeding sections of this paper are arranged as the following: Section <ref> describes the machine learning framework utilized in the studies conducted within this paper. Section <ref> details the dynamics modeling of the maneuverable tether-net system. Section <ref> examines the design optimization approach taken for the maneuverable tether-net system configuration. § OVERALL FRAMEWORK FOR LEARNING CORNER NODE CONTROL The trajectory is one of the key elements for controlling the capture process, and finding the optimal trajectory for the corner nodes can lead to a successful capture. This paper proposes a semi-decentralized reinforcement-learning-based maneuverable space net (RMSN) inspired by the design of Meng et al. <cit.>, and the extensive dynamics research of Botta et al. <cit.>. The machine learning framework is inspired by the study of Zeng et al. <cit.>. Compared to non-autonomous tether-net systems, RMSN has a further capture distance and more flexible maneuverability like the AMSN. Meanwhile, RMSN is even more flexible than AMSN due to its semi-decentralized property and is more robust for capturing a target with more complex movement. The machine-learning-based policy optimization of this robotics system makes RMSN more adaptable and robust under uncertainties. The process is split into two phases for the case study: approaching and capturing. The approaching phase starts with the net launching and ends when the net is just about to contact the target debris. The capturing phase follows the approaching phase and ends when the net is closed and the debris is captured. The reinforcement learning technique used in this paper is Proximal Policy Optimization (PPO) from stable baselines <cit.>. As a state-of-the-art RL method, PPO has proved to be efficient, adaptable, and reliable. By interacting with the environment, PPO updates the gradient based on the experience. Once the update completes, the collected experience is no longer used, so that the next update will start with the new experience. The policy learning framework is showing in Fig. <ref>, which is inspired by the work of Zeng et al. <cit.>. The neural network takes the target's Z-axis offset as input and generates a set of thrust angles as actions to maneuver the net. The final capture quality and fuel consumption are evaluated to calculate rewards for updating the policy. The machine learning framework considers environmental parameters (including geometry and states of MUs, target, and the chaser) and uncertainties of the initial distance between the chaser and the target. The framework finds the optimal policy for each phase to maximize the probability of successful capture, evaluated by the Capture Quality Index (CQI), and minimize fuel consumption. The thrust angles control the corner nodes' trajectories, and by tuning the thrust angles, the optimal trajectory with the highest success rate and minimum fuel cost can be found. Figure <ref> shows the workflow of RL of this paper. To make the simulation reflect some realistic problems, such as errors in the sensor readings, the angular velocity of the chaser, and inaccurate launching velocity, noises need to be added to the simulation in the presence of uncertainty. 
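A rough sketch of how this PPO setup could be wired with stable-baselines3 is given below; TetherNetCaptureEnv is a hypothetical wrapper around the Vortex Studio co-simulation (a skeleton of such an environment is sketched later in the formulation section), and the vectorization, network size, and learning rate are assumptions based on the training configuration reported in the results.

from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv

def make_env():
    from tether_net_env import TetherNetCaptureEnv   # hypothetical module wrapping Vortex Studio
    return TetherNetCaptureEnv()

if __name__ == "__main__":
    env = SubprocVecEnv([make_env for _ in range(32)])   # 32 parallel episodes (assumed)
    model = PPO(
        "MlpPolicy", env,
        policy_kwargs=dict(net_arch=[64, 64]),           # 2 layers x 64 neurons (assumed)
        n_steps=64,                                      # short rollouts, since each episode is a single step
        batch_size=64,
        learning_rate=1e-3,                              # tuned over the course of training
        verbose=1,
    )
    model.learn(total_timesteps=12_000)                  # on the order of the reported episode count
    model.save("rmsn_ppo_policy")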
In this paper, the target is set to have a 9 m offset on X-axis, so it is not aligned with the centerline of the net. A noise ranging from -5 m to +5 m was also applied to the Z-axis position of the target. After initialization, the RL policy model receives observations from the Vortex Studio-based tether-net simulator and generates actions to be the new input to the simulator. The outputs from the simulator are used to calculate the reward, which is then used to update the policy model. § MODEL OF SPACE TETHER-NET The system consists of a square-shaped net, a tether that connects the net to a chaser vehicle, and four MUs. Fig. <ref> shows the structure of the net, chaser, MUs, main-tether, winch, and closing mechanism. The MUs in this proposal can be understood as miniature satellites <cit.> with thrusters. The simulator used in this paper is based in Vortex Studio, a multi-body dynamics simulation software. Inherited from the work of Botta et al. <cit.>, the mass of the net is lumped into multiple small spherical rigid bodies at the knots of the net and its corner MU elements, both of which are called nodes. The axial stiffness and damping properties of the threads in the net are modeled as springs and dampers in parallel between the nodes that cannot withstand compression. The mass lumped in the j-th node, m_j, is defined in the following equation <cit.>: m_j = ∑_γϵΓ_jm_γ/2+m_knot j = 1:N_s^2 ∑ _γϵΓ_jm_γ/2+m_MU j=N^2_s+1 : N^2_s+4 where m_γ is the mass of the threads adjacent to the j-th node belonging to set Γ_j, N_s^2 is the total number of nodes in the net, m_knot is the mass of the knots of the net where the threads intersect, and m_MU is the total mass of each MU. The equations of motion of the nodes are obtained by writing Newton's second law: m_j𝐚_j=∑_γ ϵΓ±𝐓_γ + ∑_s=1^S_j𝐅_ext,s,j where 𝐚_j is the absolute acceleration of j-th node; 𝐓_γ is the tension forces in the thread adjacent to the j-th node; 𝐅_ext,s,j is each of the external forces on the j-th node. The external forces include forces generated by thrusters, contact forces, and gravitational forces. For the scenarios within this paper, the gravitational acceleration is neglected. The tension force is obtained by writing: 𝐓_γ = { T_γ𝐞_γ if (l_γ > l_γ,0) 0 if (l_γ≤ l_γ,0) . The magnitude of the tension T_γ can be calculated with T_γ = k_a,γ(l_γ-l_γ,0) + c_a,γv_r,γ. The vector 𝐞_γ is axial unit vector of the γ-th thread; k_a,γ and c_a,γ are stiffness and damping coefficients of the γ-th thread; l_γ is the current length of the thread; l_γ,0 is the unstretched length of the thread. v_r,γ is the projection of the relative velocity of the thread end nodes in the axial direction. Each rigid body is assigned a material and a collision geometry to model contact dynamics. At each timestep, the simulator checks for the contact between rigid bodies, and contact forces are computed when it is detected. The contact forces rely on the constraint of no penetration between the rigid bodies and the relative velocities of the bodies in contact. The frictional contact forces are calculated using the scaled-box friction model – an approximation of Coulomb's friction modeling – while the normal contact forces and contact forces normal to the plane of contact are computed following a modified Kelvin-Voigt model and the Hertzian theory, respectively. Interested readers should reference <cit.> for more information regarding contact modeling. The direction of deployment is the negative Z direction on the coordinates chosen. 
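For illustration (not the authors' code), the spring–damper tension law above can be written as a small function that returns the force exerted on node i by the thread connecting nodes i and j, with no force transmitted when the thread is slack:

import numpy as np

def thread_tension_force(r_i, r_j, v_i, v_j, l_0, k_a, c_a):
    d = r_j - r_i                       # vector from node i to node j
    l = np.linalg.norm(d)               # current thread length
    if l <= l_0:                        # slack thread: cannot withstand compression
        return np.zeros(3)
    e = d / l                           # axial unit vector (force on node i points toward node j)
    v_r = np.dot(v_j - v_i, e)          # relative end-node velocity projected on the thread axis
    T = k_a * (l - l_0) + c_a * v_r     # tension magnitude from stiffness and damping terms
    return T * e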
The net's MUs are given an initial velocity with a magnitude of v_e, their components in the X and Y directions have the same magnitude and are defined by the following expressions: v_x, 0=v_y, 0=v_e sin(θ_e )/√(2) The shooting angle, denoted by θ_e, is defined as the angle between the initial velocity vector of each MU and the direction of deployment. The magnitude of the initial velocity vector in the direction of deployment can be expressed as: v_z,0=v_e cos(θ_e ) A cubic chaser spacecraft with a side length of L_ch and mass m_ch is utilized to bring the tether-net system close to the target and to move debris into a disposal orbit. The chaser spacecraft is allowed to float freely without any control in the scenarios considered. The main tether, modeled using multiple slender rigid bodies attached via relaxed prismatic joints that accommodates the simulation of axial, bending, and torsional stiffness, connects the center node of the net to a winch with mass m_w and radius r_w. The winch is set to be free to spool during deployment and locked when the closing mechanism is activated and located on one side of the chaser. For the scenarios considered in this paper, as mentioned in <cit.>, torsional stiffness is deemed negligible and is therefore not included. The main tether has a density, Young's modulus, cross-sectional radius, and length of ρ_t, E_t, r_t, and L_t, respectively. The axial stiffness is computed with the following expression per unit length: EA =E_tπ r^2_t Meanwhile, the bending stiffness per unit length EI is written as: EI=E_tπ r^4_t/4 A set of threads is used for the closing mechanism, which passes through the four MUs and eight nodes on the net's perimeter. In the current design, the closing mechanism is activated by four winches placed in each of the MUs to allow for independence from the main tether. The activation of the closing mechanism is represented by applying a constant force between the attachment points of the closing mechanism until distances between adjacent points become less than a desired distance <cit.>, chosen to be 2.0 m for the scenarios of interest. Once the desired adjacent length is achieved, a constraint is applied to lock adjacent pairs of attachment points. As such, there can be a maximum of N_L=12 locked pairs for the net geometry used in this paper. In future designs, the MU's themselves may be used to close the mouth of the net. However, this will require the development of a complex movement coordination algorithm between the MUs. Each MU is modeled as a spherical rigid body with radius r_MU, which is attached to the net proper by corner threads with radius r_CT and length l_CT. To control the MUs, open-loop thrust forces 𝐅_Thrust,i are applied. The thrusters are activated at t=15.0 s after ejection to allow the net to be sufficiently open and are switched off when the center of mass of the net and the target are a set distance from each other. Each i-th thruster is assigned a constant magnitude of F_Thrust=8.9 N and a constant propellant consumption rate of 0.0121 kg/s based on the cold gas thruster datasheet of VACCO <cit.>. The components of the thrust in the X, Y, and Z directions are defined as: 𝐅_Thrust_i =F_Thrust_i,xî+F_Thrust_i,yĵ+F_Thrust_i,z𝐤̂ F_Thrust_i,x=F_Thrustsin(θ_Thrust)cos(ψ_Thrust_i) F_Thrust_i,y=F_Thrustsin(θ_Thrust)sin(ψ_Thrust_i) F_Thrust_i,z=F_Thrustcos(θ_Thrust) where the angles ψ_Thrust,i in the X-Y plane and θ_Thrust Z-Y are visualized in the diagram in Fig <ref>. 
Each thrust has a unique angle in the X-Y plane but a common angle in the Z-Y plane. This work assumes that the MUs have the attitude control capability to direct the thrusters in the desired directions throughout their activation. The physical parameters of the system, particularly those for the chaser, main tether, and winch, as well as properties of the net except for the thread radius and initial conditions, are inherited from previous work <cit.> and are summarized in Table <ref>. § FORMULATION OF THE OPTIMIZATION BASELINES AND RL PROBLEMS §.§ Simulation Setup and the CQI The interactions with and modifications of the Vortex Studio-based simulator are done through a C++ Application Programming Interface (API). The user defines net and target parameters, such as net thread radius, shooting angle, and the rotation speed of the target debris, in multiple .txt files as the inputs into the simulator. This work uses the Python programming language to implement the RL component, while MATLAB implements the optimization component. To determine the effectiveness of the system in a scenario in which a great number of simulations is necessary for the optimization and RL task, a quantitative metric referred to as the CQI is utilized <cit.>. The CQI value considers the similarity between the convex hull shape of the net and the target and net-target center of mass distance (COM) and is mathematically defined as: J_n = 0.1|V_n-V_t|/V_t+0.1|S_n-S_t|/S_t +0.8|q_n|/L_c where the CQI at the n-th time-step, the convex hull (CH) volume of the net at the n-th time-step, the volume of the target, the CH surface area of the net at the n-th time-step, the surface area of the target, the distance from the center of mass of the target to the net’s COM at the n-th time-step, and the characteristic length of the target, defined as the shortest distance from the target’s COM to its surface and represented as J_n, V_n, V_t, S_n, S_t, q_n, and L_c respectively. Barnes and Botta's version of the CQI has been shown to effectively classify successful and unsuccessful captures <cit.>. The target chosen for this paper is the second stage of the Zenit 2 launch vehicle (see Fig. <ref>), which was also the subject of previous works utilizing Vortex Studio <cit.>. The target has a mass of 9000 kg and dimensions of 3.9 m in diameter and 11.0 m long. As such, the values V_t, S_t, and L_c associated with the target are 125.3 m^3, 159.9 m^2, and 1.95 m respectively. The simulation scenario includes two phases: deployment and capture. In the deployment phase, the net leaves the chaser with an initial velocity and takes approximately 15.0 s to reach an almost fully-expanded state. When the near fully-expanded configuration is reached, the thrusters are activated. Thrusters remain activated until the net's center of mass reaches the closing distance from the target's center of mass, set to be 2.5 m. Once the distance is reached, the system enters the capture phase. The closing mechanism is triggered at the beginning of this phase, which applies constant forces on each pair of adjacent nodes of the closing threads, thus closing the net mouth. This phase is set to last 20.0 s after closing mechanism activation, and in the end, a settled CQI I^*_CQI, and the number of locked node pairs of the closing mechanism N_L are returned. After the simulation, a .txt file containing the two output values and fuel consumed during the mission is generated. 
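As an illustration of how the CQI could be evaluated from the simulated node positions (our sketch, not the paper's implementation), SciPy's convex hull provides the net's hull volume and surface area directly; the target constants are the Zenit-2 values quoted above, and the net's center of mass is approximated here by the unweighted node mean.

import numpy as np
from scipy.spatial import ConvexHull

V_T, S_T, L_C = 125.3, 159.9, 1.95      # target volume [m^3], surface area [m^2], characteristic length [m]

def capture_quality_index(net_nodes, target_com):
    # net_nodes: (num_nodes, 3) node positions; target_com: (3,) target center of mass
    hull = ConvexHull(net_nodes)
    v_n, s_n = hull.volume, hull.area   # convex-hull volume and surface area of the net
    q_n = np.linalg.norm(net_nodes.mean(axis=0) - target_com)   # net-target COM distance (equal node masses assumed)
    return 0.1 * abs(v_n - V_T) / V_T + 0.1 * abs(s_n - S_T) / S_T + 0.8 * q_n / L_C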
The trajectory of the net can be established and observed by creating an animation based on saved screenshots from the simulation. §.§ Defining the Optimization Task As previously mentioned, problems regarding realism need to be considered. For example, because sensing and observation cameras cannot be placed at the exact center on the face of the chaser where the net is launched due to the net ejection mechanisms, the axis of the launch and the target debris may not be perfectly aligned. Therefore, an offset of the target's position should be considered. In the RMSN model, the thrusters on the MUs can correct the trajectory of the net to address the offset mid-deployment. During this mission, the thruster angles must be considered. An optimization case study is designed with the angles as action variables. The objective is to successfully capture the target with minimal fuel consumption with varying action variables. To test the capability of the design to capture a target with a nonzero offset, the target's initial position is set to have a 9.0 m offset on the X-axis. The value of 9.0 m is much greater than expected in reality. However, it is chosen to demonstrate the system's adaptability with thrusters on each MU. During this mission, three parameters must be considered: thruster angles, thrust magnitude, and the initial mass of the MUs. Three optimization case studies are designed with these three parameters as action variables. All three case studies aim to successfully capture the target with minimal fuel consumption with varying action variables. The target's initial position is set to have a 9.0 m offset on the X-axis to test the capability of the design to capture a target with a nonzero offset. The optimization method used in this paper is Bayesian Optimization <cit.>, which has been successfully applied in various fields, including hyperparameter tuning for machine learning models, robotics, and experimental design. The acquisition function used for Bayesian Optimization in this research was Expected Improvement Plus. Compared to the vanilla version of the Expected Improvement <cit.> acquisition function, it can modify behaviors when an area is over-exploiting. Case Study 1: Minimizing Fuel Consumption with Thrust Angles. In this study, the objective is to find the minimum fuel consumption of the thrusters by only controlling the thruster angles ψ_Thrusts_i and θ_Thrusts. The initial mass is set to be 2.5 kg. The objective function is shown in Eq. (<ref>). min_𝐗 f_1(𝐗)=m_p(𝐗) s. t. 𝐗∈[𝐗_L,𝐗_U] g_1 = I^*_CQI≤ 2.5 g_2 = N_L ≥ 8 g_3 = m_f≥ 2.0 where: 𝐗 = [ψ_Thrust_1, ψ_Thrust_2, ψ_Thrust_3, ψ_Thrust_4, θ_Thrust] Where m_p is the mass of the fuel consumed in each MU for each simulation, 𝐗 represents the action variables picked from Table <ref>, in which only the thrust angles ψ_Thrust_i and θ_Thrust were chosen in this case study, m_f represents the final mass of each MU when the thrusters shut down, which also equals to the mass of each MU at the end of the simulation. In this research, a successful capture threshold is set to be I^*_CQI as 2.5, N_L to be 8, and the mass of each MU at the end of the simulation, m_f needs to be greater than 2.0 kg. Case Study 2: Minimizing Fuel Consumption with Thrust Angles and Initial MU Mass. This study follows the previous case study's framework, but the initial mass of each MU is also used as one of the action variables. This case study aims to explore the optimal design of the initial mass and the thrust angles to minimize fuel consumption. 
The objective function is shown in Eq. (<ref>): min_𝐗 f_1(𝐗)=m_p(𝐗) s. t. 𝐗∈[𝐗_L,𝐗_U] g_1 = I^*_CQI≤ 2.5 g_2 = N_L ≥ 8 g_3 = m_f≥ 2.0 where: 𝐗 = [ψ_Thrust_1, ψ_Thrust_2, ψ_Thrust_3, ψ_Thrust_4, θ_Thrust, m_0] where m_0 is the initial mass of each MU. Case Study 3: Minimizing Fuel Consumption with Thrust Angles, Magnitude, and Initial MU Mass. This case study chooses the magnitude of the thrust force as an additional action variable. The fuel consumption rate is set to be proportional to the magnitude of the thrust force. This case study explores the optimal design of the initial mass, thrust magnitude, and thrust angles to minimize fuel consumption. The objective function is shown in Eq. (<ref>): min_𝐗 f_1(𝐗)=m_p(𝐗) s. t. 𝐗∈[𝐗_L,𝐗_U] g_1 = I^*_CQI≤ 2.5 g_2 = N_L ≥ 8 g_3 = m_f≥ 2.0 where: 𝐗 = [ψ_Thrust_1, ψ_Thrust_2, ψ_Thrust_3, ψ_Thrust_4, θ_Thrust, m_0, F_Thrust] where F_Thrust is the magnitude of the thrust force. Table <ref> summarizes the action variables and their bounds for the optimization tasks. The values for ψ_Thrust_i for i = 1, 2, 3, 4 and θ_Thrust are assigned a bound after initial manual tuning. This allows the optimization algorithm to search for optimal values close to what is already known to yield a feasible solution. The range of possible m_0 values for each MU is chosen to be approximately the same as the mass of a 2U CubeSat <cit.>, which has a similar size to what each MU is envisioned to possess. Meanwhile, the range of F_Thrust is chosen to be ±3 N from the nominal thrust value. §.§ Defining the Learning Task RL models the actions as Markov Decision Processes (MDP) <cit.>. The objective is to capture the target debris and minimize fuel consumption successfully. The actions in this model are the thrust angles, which activate at the time step of 15.0 s. The simulation shuts down the thrusters when the closing condition is met. To test the generalization of the design, a uniformly distributed noise with the range of (-5.0, 5.0) m is added to the target's Z-direction initial position. Therefore, the MDP of this problem can be simplified, where the target parameters define the state space, and the thrusters' angles define the action space. The details are shown in Table <ref> and Table <ref>. The state space in the current framework has five parameters, but only Z-axis Offset is the changing parameter. The rest four parameters are fixed and kept in the framework for future study of RL by adding more complexity to the target's position, orientation, and angular velocity magnitude. 
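Putting the state and action definitions together, the one-step-episode MDP could be wrapped in a Gymnasium-style environment along the following lines; the class, the placeholder run_vortex_capture call, and the bound vectors are illustrative (the thrust-angle bounds would come from the action-variable table above), and compute_reward follows the reward function defined below.

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class TetherNetCaptureEnv(gym.Env):
    """One-step-episode MDP sketch (illustrative, not the authors' code)."""

    def __init__(self, angle_lo, angle_hi):
        # angle_lo / angle_hi: bounds on [psi_1..psi_4, theta], taken from the action-variable table
        self.action_space = spaces.Box(low=np.asarray(angle_lo, dtype=np.float32),
                                       high=np.asarray(angle_hi, dtype=np.float32))
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(5,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.z_offset = self.np_random.uniform(-5.0, 5.0)     # uniform noise on the target Z position
        obs = np.array([self.z_offset, 0.0, 0.0, 0.0, 0.0], dtype=np.float32)   # only the offset varies here
        return obs, {}

    def step(self, action):
        # run_vortex_capture is a placeholder for the Vortex Studio co-simulation of one capture attempt
        cqi, n_locked, m_final, t_sim = run_vortex_capture(self.z_offset, action)
        reward = compute_reward(cqi, n_locked, m_final, t_sim, m_0=2.5, burn_rate=0.0121)
        return np.zeros(5, dtype=np.float32), reward, True, False, {}   # episode ends after one step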
The actions and states are sent to the simulator, and the results of the simulation are used for the calculation of the reward function: max_𝐐 R = r_fuel + r_CQI+ r_NL + r_mass + r_end where: r_fuel = m_0 - λ· 1.2· (t_sim-15-20) r_CQI = -ln((I^*_CQI-2.5)^2+1), if I^*_CQI > 2.5 0, otherwise r_NL = -ln((N_L-8)^2+1), if N_L < 8 0, otherwise r_mass = -ln((m_f-2)^2+1), if m_f<2 0, otherwise r_end = 10, if I^*_CQI≤2.5 ∧ N_L≥8 ∧ m_f≥2 0, otherwise where 𝐐 represents the policy model; R represents the reward in every episode; r_fuel represents the reward based on fuel consumption on each thruster; m_0 and m_f are the initial and end mass of each MU; λ is the fuel burning rate; the constant 1.2 is to add twenty-percent more consumption of the fuel as a safety factor; the constant 15 and 20 are the fixed time cost to wait for the net to expand fully and for the CQI to settle; t_sim is the total simulation time; r_I^*_CQI, r_NL, r_mass are the rewards based on the CQI, number of locked pairs, and MUs at the end of the simulation, which act as the logarithmic penalty function to penalize the reward if the constraints are violated; r_end is the terminal reward. In this paper, the objective of RL is to find the optimal trajectory of the corner MUs and the energy cost to capture the target. The thrust angles determine the trajectory and the energy cost, defined as fuel consumption during the approaching phase. The reward received is defined to be the mass of the remaining fuel after the approaching phase ends. The conditional formulations in the reward function, r_CQI, r_NL and r_mass, ensure essential penalties for missing the target (too large settled CQI), insecure capture (too few locked pairs) and consuming too much fuel (remaining mass is less than the dry mass). The logarithmic penalty functions also ensure the penalty is not too large, which could jeopardize the learning because the settled CQI can reach a value of several hundreds for a failed capture. The terminal bonus state reward, r_end, is for the capture that does not reach any of the penalty states, which can prevent the policy model from exploiting only one of the penalty states, such as minimizing the settled CQI or only maximizing the number of locked pairs. The learning technique used in this framework is Proximal Policy Optimization (PPO) <cit.> from stable baselines3 <cit.>. It is a RL algorithm that combines the benefits of trust region policy optimization (TRPO) <cit.> and traditional policy gradient methods. It is designed to balance exploration and exploitation while training deep neural networks for optimal policy learning. PPO builds upon gradient methods by introducing a surrogate objective function with a clipped probability ratio, ensuring the policy updates are limited to a trust region around the old policy. This prevents excessively large policy updates that can destabilize the training process. It also divides the data received into smaller batches and updates the policy parameters incrementally. This approach provides a more efficient and computationally tractable method for optimizing the policy, as it reduces the variance of gradient estimates and allows for more frequent updates. In PPO, the policy network outputs a probability distribution over actions, which is used to sample actions during training and evaluation. The value network estimates the expected cumulative reward from a given state, which is used for temporal difference learning to update the value function and policy. 
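The piecewise reward above translates directly into code; the sketch below follows the stated equations, with the initial MU mass m_0 and the propellant burn rate λ passed in (2.5 kg and 0.0121 kg/s in the nominal configuration):

import math

def compute_reward(cqi, n_locked, m_final, t_sim, m_0, burn_rate):
    # Fuel term: remaining mass after a 20% safety factor on the thruster burn time,
    # excluding the fixed 15 s net-expansion wait and the 20 s CQI settling window
    r_fuel = m_0 - burn_rate * 1.2 * (t_sim - 15.0 - 20.0)

    # Logarithmic penalties, active only when the corresponding constraint is violated
    r_cqi = -math.log((cqi - 2.5) ** 2 + 1.0) if cqi > 2.5 else 0.0
    r_nl = -math.log((n_locked - 8) ** 2 + 1.0) if n_locked < 8 else 0.0
    r_mass = -math.log((m_final - 2.0) ** 2 + 1.0) if m_final < 2.0 else 0.0

    # Terminal bonus when all capture constraints are met simultaneously
    r_end = 10.0 if (cqi <= 2.5 and n_locked >= 8 and m_final >= 2.0) else 0.0

    return r_fuel + r_cqi + r_nl + r_mass + r_end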
The neural network used in this research is a multi-layer perceptron (MLP) <cit.> with 2 layers and 64 neurons. § SIMULATION RESULTS §.§ Optimization Case Study 1 and Reinforcement Learning Results For optimization Case Study 1, the hardware used is a Windows workstation with an AMD Ryzen 9 5950X 16-Core Processor and 64 GB RAM. The time cost was 17 hours, with five action variables, 500 iterations, 200 points in the active set, and 80 initial sampling points. The minimum fuel consumption value and the action variables at the minimum fuel consumption are shown in the first row of Table <ref>. Figure <ref> showcases selected instances from the optimal Case Study 1 simulation. Figure <ref>(a) shows the configuration of the system in the instance the thrusters are activated. Meanwhile, Fig. <ref>(b) and (c) highlight the motion of the tether-net under propulsion as it proceeds towards the target. Lastly, Fig. <ref>(d) shows the net as it wraps around the target. Meanwhile, RL was performed on a Windows workstation with an AMD Ryzen 9 5950X 16-Core Processor and 64 GB RAM. The environment was vectorized for parallel training of 32 episodes simultaneously. The mini-batch size was 64. The learning rate was tuned during the process of training. For the first 1600 episodes, the learning rate was 0.0001. However, the plot of average reward showed no sign of learning and still had significant fluctuations. Therefore, the learning rate was increased to 0.001 for the next 8000 episodes, and the plot shows the average reward starts to increase. For the last 2112 episodes, the learning rate was reduced to 0.0005. Each episode has only one step in this RL framework. The learning process has 11712 episodes and took 511 hours to finish. As the learning progresses, the episode takes longer because the successful capture requires more time to simulate in Vortex Studio. Figure <ref> shows that the model was learning over time but still shows strong fluctuations and has not converged at the end. For a successful capture, the reward is over 12, and if the capture misses the target, the reward is below -12. Figure <ref> shows the average reward for every 32 and 192 episodes, and the average reward was initially around 1, and in the end, it reached above 10. The trials with rewards in between usually violate one of the constraints, either the settled CQI or the number of locked pairs. The trend of the rewards with the fluctuations is a sign of insufficient training. The saved policy model was then tested with the same initial Z-axis position (-50 m) as the optimization result. The predicted thrust angle is shown in the second row of Table <ref>, with total fuel consumption of 0.083 kg, and it did not violate the constraints of settled CQI and the number of locked pairs, which made a secure capture. Figure <ref> shows instances within the capture simulation – similar to Fig. <ref> – with the thrust angles defined by the policy model. This demonstrates that the policy model can capture a target with the same fixed position as the optimization one, and the fuel consumption of RL is only 0.004 higher. The performance of the RL policy model and the optimization Case Study 1 – both of which only affect the thrust angles of the MUs – is tested with the noise in the range of -5.0 m to +5.0 m added to the target's initial Z-axis position. With the policy model, the predicted thrust angles change when the target's initial Z-axis position differs. 
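For reference, here is a compressed sketch of the training setup described above, using the stable-baselines3 PPO implementation with the 2-layer, 64-neuron MLP policy and 32 parallel environments. TetherNetEnv is a hypothetical Gym wrapper around the Vortex Studio simulation (one control step per episode) and is not part of any released code; the learning rate and timestep budget shown are illustrative, whereas the study manually adjusted the learning rate across training stages.

```python
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv

def make_env():
    # Hypothetical gym.Env: observation = target state (Table <ref>),
    # action = thrust angles, reward = episode_reward(...) defined above.
    return TetherNetEnv()

if __name__ == "__main__":
    env = SubprocVecEnv([make_env for _ in range(32)])   # 32 episodes in parallel

    model = PPO(
        "MlpPolicy",
        env,
        learning_rate=1e-3,                      # the study staged this: 1e-4 -> 1e-3 -> 5e-4
        n_steps=2,                               # episodes are single-step, so short rollouts
        batch_size=64,                           # mini-batch size used in the study
        policy_kwargs=dict(net_arch=[64, 64]),   # 2-layer, 64-neuron MLP
        verbose=1,
    )
    model.learn(total_timesteps=11_712)          # one timestep per episode
    model.save("tether_net_ppo")
```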
At the same time, for the optimization Case Study 1 evaluation, they are fixed to be the values listed in the first row of Table <ref>. The optimization results for this scenario were tested with 50 samples with noise added to the initial Z-axis position, and the success rate was 46%. The RL model with the highest average reward occurs in episode 11360. The model was tested with the same noise added as the optimization one, and the success rate of the 50 samples was 88%. The distribution of the successful capture of the two cases is shown in Fig. <ref>(a) and <ref>(b). The plots show that the RL model has a higher successful capture rate, and the successful captures are more evenly distributed across -45.0 m to -55.0 m for the Z-axis position of the target. Figure <ref>(c) shows that the optimization result's median successful capture Z-axis position was -51.35 m, and the reinforcement result's median successful capture position was -50.28 m. Though the total learning time of the RL model is over five hundred hours, once the policy model of RL is trained, the execution time to predict the ideal thrust angles is only 41 milliseconds. The optimization method for one scenario takes 17 hours. Still, to improve the capture success rate, optimization needs to run in all 50 scenarios, which will take over eight hundred hours by estimation. Therefore, the RL method has a more efficient generalized performance. §.§ Optimization Case Study 2 and 3 For Case Study 2, using the same computer as Case Study 1, the optimization time cost was 24.1 hours, with 6 action variables, 500 iterations, 200 points in the active set, and 80 initial sampling points. This case study is intended to be compared with Case Study 1 optimization. The minimum fuel consumption value and the action variables at the minimum fuel consumption are shown in Fig. <ref>(b) and the first row of Table <ref>. The minimum objective value is smaller than that of Case Study 1, which shows that the fuel consumption can be lower with a smaller thrust magnitude for each thruster while the capture is still successful. Meanwhile, in Case Study 3, using the same computer as the previous two optimization cases, the time cost was 38.9 hours, with 7 action variables, 500 iterations, 200 points in the active set, and 80 initial sampling points. This case study aims to find the optimal actions of each MU's thrust angle, magnitude, and initial mass. For all three optimization case studies, the minimum fuel consumption values over function evaluations are shown in Fig. <ref>, and Table <ref> displays the optimal fuel consumption for the 3 cases as well as the fuel consumption of the RL policy model with the target possessing -50 m Z-axis position. Case Study 3 obtained the lowest fuel consumption of all 3 cases, demonstrating that tuning the initial mass of each MU – in addition to the thrust angles and magnitude – leads to additional fuel savings. Table <ref> compares minimum fuel cost at the -50 m scenario of the three optimization cases and RL and the training time and execution time of optimization and RL. Though the RL method has higher fuel consumption and takes longer to train, once the RL model is trained, the execution time is much shorter than the optimization cases. For the optimization method to get the result, it has to run the optimizing process; however, RL only needs to input the state to the policy model, and the result can be generated in less than a second. 
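The generalization test described above can be reproduced schematically as follows; TetherNetEnv, its target_z argument, and the capture_success flag are hypothetical placeholders for the simulator interface, and Gymnasium-style reset/step signatures are assumed.

```python
import numpy as np
from stable_baselines3 import PPO

rng = np.random.default_rng(0)
model = PPO.load("tether_net_ppo")

successes, n_trials = 0, 50
for _ in range(n_trials):
    z0 = -50.0 + rng.uniform(-5.0, 5.0)          # noisy initial Z position of the target
    env = TetherNetEnv(target_z=z0)              # hypothetical constructor argument
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    successes += int(info.get("capture_success", False))

print(f"success rate over {n_trials} noisy scenarios: {successes / n_trials:.0%}")
```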
§ CONCLUSION A semi-decentralized tether-net system was introduced with four maneuverable corner nodes (individually propelled) to control the trajectory of the net for increased robustness in capturing space debris, here represented by the second stage of the Zenit-2 launch vehicle. A reinforcement learning (RL) based approach is proposed to shift the control system design cost to a heavy but offline computation process, leading to fast-to-execute trained controllers that can be used online to control the net trajectory across various scenarios. Here scenarios are defined in terms of lateral offset between the chaser spacecraft launching the net and the target debris. The performance of the RL-based trajectory controllers is compared with optimal trajectory plans resulting from Bayesian Optimization applied to specific scenarios. The optimization-based solutions are developed for three different case study settings with increasing complexity (and increasing control authority allowed by the setting), going from simply controlling the thrust angle to also controlling the thrust force magnitude and initial fuel mass. In Case Study 3, the optimization's thrust angles, initial mass, and thrust magnitude were action variables. As expected, the third case study with the highest complexity achieves the lowest fuel consumption in the selected capture scenario, which also demonstrates the effectiveness of the implemented optimization process. The policy model generated by RL is observed to perform successful capture in the same scenario as used by the optimization scenario while providing a higher success rate when the lateral shift is introduced. This demonstrates the potential generalizability of the RL-based control policy, which is intractable to achieve with optimization since a separate expansive optimization has to be run for every offset scenario with the latter. To put this into perspective, the trained RL policy executes in 50 milliseconds versus 17 hours required by optimization for a given scenario. An immediate next step in this research is to extend the RL approach to produce the control policies for Case Study 2 and 3, involving increased control authority. Further future work would entail adding more features to the scenarios over which RL is tasked to generalize, including the debris's rotational rates and sensing uncertainties during flight. § ACKNOWLEDGEMENTS The authors would like to thank CM Labs Simulations for providing licenses for the Vortex Studio simulation framework. This work is supported under CMMI Award numbered 2128578 from the National Science Foundation (NSF). The author's opinions, findings, and conclusions or recommendations expressed in this material do not necessarily reflect the views of the National Science Foundation. IEEEtran
http://arxiv.org/abs/2307.01809v1
20230704162347
Finite-size scaling of the random-field Ising model above the upper critical dimension
[ "Nikolaos G. Fytas", "Victor Martin-Mayor", "Giorgio Parisi", "Marco Picco", "Nicolas Sourlas" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.dis-nn" ]
Centre for Fluid and Complex Systems, Coventry University, Coventry CV1 5FB, United Kingdom Departamento de Física Téorica I, Universidad Complutense, 28040 Madrid, Spain Instituto de Biocomputacíon y Física de Sistemas Complejos (BIFI), 50009 Zaragoza, Spain Dipartimento di Fisica, Sapienza Università di Roma, P.le Aldo Moro 2, 00185 Rome, Italy and INFN, Sezione di Roma I, IPCF – CNR, P.le A. Moro 2, 00185 Rome, Italy Laboratoire de Physique Théorique et Hautes Energies, UMR7589, Sorbonne Université et CNRS, 4 Place Jussieu, 75252 Paris Cedex 05, France Laboratoire de Physique Théorique de l'Ecole Normale Supérieure (Unité Mixte de Recherche du CNRS et de l'Ecole Normale Supérieure, associée à l'Université Pierre et Marie Curie, PARIS VI) 24 rue Lhomond, 75231 Paris Cedex 05, France Finite-size scaling above the upper critical dimension is a long-standing puzzle in the field of Statistical Physics. Even for pure systems various scaling theories have been suggested, partially corroborated by numerical simulations. In the present manuscript we address this problem in the even more complicated case of disordered systems. In particular, we investigate the scaling behavior of the random-field Ising model at dimension D = 7, i.e., above its upper critical dimension D_ u = 6, by employing extensive ground-state numerical simulations. Our results confirm the hypothesis that at dimensions D > D_ u, linear length scale L should be replaced in finite-size scaling expressions by the effective scale L_ eff = L^D / D_ u. Via a fitted version of the quotients method that takes this modification, but also subleading scaling corrections into account, we compute the critical point of the transition for Gaussian random fields and provide estimates for the full set of critical exponents. Thus, our analysis indicates that this modified version of finite-size scaling is successful also in the context of the random-field problem. Finite-size scaling of the random-field Ising model above the upper critical dimension Nicolas Sourlas August 1, 2023 ====================================================================================== § INTRODUCTION The random-field Ising model (RFIM) represents one of the simplest models of cooperative behavior with quenched disorder <cit.>. Despite being seemingly simple in terms of definition, the combined presence of random fields and the standard Ising behavior accounts for a vast range of new physical phenomena, many of them remain unresolved even after 50 years of extensive research. Additionally, its direct relevance to experimental analogues in condensed-matter physics, such as diluted antiferromagnets in a field, colloid-polymer mixtures, and others <cit.> establishes the RFIM as one of the most prominent platform models for the designing and/or deciphering of experiments. For a review but also a summary of most recent results we refer to Ref. <cit.>. It is well established that the physically relevant dimensions of the RFIM lay between 2 < D < 6, where D_ l = 2 and D_ u = 6 are the lower and upper critical dimensions of the model, respectively <cit.>. Although the critical behavior of the RFIM at these dimensions has been scrutinized by a variety of methods, a consensus has not been reached for decades. Fortunately, over the last few years several ambiguities have been put at ease due to the development of a powerful panoply of simulation and statistical analysis methods, that have set the basis for a fresh revision of the problem <cit.>. 
In fact, some of the main controversies have been resolved, the most notable being the illustration of critical universality in terms of different random-field distributions <cit.> – see also Ref. <cit.> where it was shown that the diluted Ising model in a field belongs also to the same universality class with the RFIM as predicted by the perturbative renormalization group – and the restoration of supersymmetry and dimensional reduction at D = 5 <cit.>. We refer the reader to Refs. <cit.> for additional evidence supporting this latter respect. Furthermore the large-scale numerical simulations of Refs. <cit.> have provided high-accuracy estimates for the full spectrum of critical exponents, putting at rest previous fears of possible violations of fundamental scaling relations. On the other hand for D ≥ D_ u the RFIM is expected to show dimension-independent mean-field behavior <cit.>, with the critical exponents holding the mean-field values of the pure Ising ferromagnet (exactly at D = D_ u the well-known logarithmic corrections appear <cit.>). At this point we should emphasize that although the method of finite-size scaling has been successfully applied to the analysis of results by numerous numerical simulations for spin models at D < D_ u, the situation becomes more complicated when one considers the system above its D_ u, as discussed extensively for the 5D Ising model (note that D_ u=4 for the pure Ising ferromagnet) <cit.>. In fact the problem is highly non-trivial as the selection of boundary conditions qualitatively changes the scaling (see Ref. <cit.> and references therein). For periodic boundary conditions a possible solution has been proposed. The key point in these studies <cit.> is that at dimensions D > D_ u the linear length scale L of the system should be replaced in finite-size scaling expressions by a new effective length scale of the form L_ eff = L^D/D_ u. For disordered systems, and in particular the RFIM, not much has been achieved in this direction, with the exception of Ref. <cit.> where a qualitative picture of the transition has been provided at high dimensions [In the context of spin glasses, see Ref. <cit.>]. To this end, we present in the current work an extensive numerical study of the RFIM at D = 7 using exact ground-state simulations and a suitable finite-size scaling method based on phenomenological renormalization that takes into account the new effective length scale L_ eff. We locate the critical point of the transition for Gaussian fields and monitor the size evolution of effective critical exponents. Our final results are compatible up to a very good numerical accuracy with their mean-field expectations. Instrumental in our analysis is the use of a proper value for the corrections-to-scaling exponent ω. In this respect, we provide in Appendix <ref> a detailed derivation of ω for the large-N limit of the O(N) model, starting from Brézin’s analysis <cit.>. We find that the exponent ω corresponding to the O(N) model plays a crucial role for a safe determination of the critical properties in the 7D RFIM. The remainder of this manuscript is as follows: In Sec. <ref> the model and methods employed are described shortly and in Sec. <ref> our main results on the scaling aspects of the 7D RFIM are presented. We conclude in Sec. <ref> by providing a summary and an outlook for future work in this direction. 
§ MODEL AND METHODS The Hamiltonian of the RFIM is ℋ = - J ∑_⟨ xy⟩ S_x S_y - ∑_x h_x S_x, with the spins S_x = ± 1 on a D=7 hypercubic lattice with periodic boundary conditions and energy units J=1, and h_x independent random magnetic fields with zero mean and variance σ^2. Given our previous universality confirmations <cit.>, we have restricted ourselves to Gaussian normal-distributed {h_x}. We work directly at zero temperature <cit.> because the relevant fixed point of the model lies there <cit.>. The system has a ferromagnetic phase at small σ, that, upon increasing the disorder, becomes paramagnetic at the critical point σ_ c. Obviously, the only relevant spin configurations are ground states, which are non-degenerate for continuous random- field distributions. An instance of random fields {h_x} is named a sample and thermal mean values are denoted as ⟨⋯⟩. The subsequent average over samples is indicated by an overline, (e.g., for the magnetization density m=∑_xS_x/L^D, we consider both ⟨ m ⟩ and ⟨ m ⟩). The scaling theory of the RFIM entails an analysis of two correlation functions, namely the connected and disconnected propagators C^ (con)_xy and C^ (dis)_xy <cit.>: C^ (con)_xy≡∂⟨ S_x⟩/∂ h_y , C^ (dis)_xy≡⟨ S_x⟩⟨ S_y⟩ . For each of these two propagators we scrutinize the second-moment correlation lengths <cit.>, denoted as ξ^ (con) and ξ^ (dis), respectively. Hereafter, we shall indicate with the superscript “(con)”, e.g., ξ^ (con), quantities computed from the connected propagator. Similarly, the superscript “(dis)”, e.g., ξ^ (dis), will refer to the propagator C^(dis). We also compute the corresponding connected susceptibility χ^ (con) to obtain the anomalous dimension η, as well the dimensionless Binder ratio U_4 = ⟨ m ^4 ⟩/⟨ m^2⟩^2. From simulations at a given σ we compute σ-derivatives and extrapolate to neighboring σ values by means of a reweighting method – see Ref. <cit.> for full mathematical derivations of fluctuation-dissipation and reweighting formulas. In the present work we consider lattice sizes within the range L _ min=2 to L_ max=10. For each pair of {L, σ} values we compute ground states for 10^6 samples (initial exploratory runs were performed using 10^5 samples), outperforming previous studies – For comparison, 5000 samples with L_ max = 8 were used in Ref. <cit.>. We follow the quotients method for finite-size scaling <cit.>, taking into account the modification L → L_ eff = L^7/6, as we work above the upper critical dimension with periodic boundary conditions. In practice, we focus on three dimensionless quantities g(σ,L_ eff) that, barring correction to scaling, are independent of the system size at the critical point, namely ξ^(con)/L_ eff, ξ^(dis)/L_ eff, and U_4. Given a dimensionless quantity g, we consider a pair of lattices sizes (L_ eff, 2L_ eff) and determine the crossing σ_c,L_ eff, where g(σ_c,L_ eff,L_ eff)= g(σ_c,L_ eff,2L_ eff), see Fig. <ref>. This allows us to compute three such σ_c,L_ eff, a first for ξ^(con)/L_ eff, another for ξ^(dis)/L_ eff, and a third for U_4. Dimensionful quantities O scale with ξ in the thermodynamic limit as ξ^x_O/ν, where x_O is the scaling dimension of O and ν the critical exponent of the correlation length. At finite system sizes we consider the quotient Q_O,L_ eff = O_2L_ eff/O_L_ eff at the crossing Q_O,L_ eff^cross = 2^7/6x_O/ν + O(L_ eff^-ω). Q_O,L_ eff^cross can be evaluated at the crossings of ξ^(con)/L_ eff, ξ^(dis)/L_ eff, and U_4. 
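As a minimal numerical illustration of the quotients method with the effective scale L_eff = L^7/6, the snippet below inverts Eq. (<ref>) to turn a quotient measured at a crossing into an estimate of x_O/ν; the quotient used here is constructed from the mean-field exponents purely for illustration, not from simulation data.

```python
import numpy as np

D, D_u = 7, 6
scale = D / D_u                      # L_eff = L**(7/6)

def exponent_from_quotient(q_cross):
    """Invert Q_cross = 2**(scale * x_O / nu) for the ratio x_O / nu."""
    return np.log2(q_cross) / scale

# Example: for the sigma-derivative of a correlation length, x_O = 1 + nu, so at
# the mean-field value nu = 1/2 one expects x_O/nu = 3 and hence Q = 2**(7/2).
q_mean_field = 2.0 ** (scale * 3.0)
print(q_mean_field, exponent_from_quotient(q_mean_field))   # -> 11.31..., 3.0
```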
Renormalization group tells us that x_O, ν, and the leading corrections-to-scaling exponent ω are universal. Instances of dimensionful quantities used in this work are the derivatives of correlation lengths ξ^ (con) and ξ^ (dis) [x_D_σξ^(con)=x_D_σξ^(dis)=1+ν] and the connected susceptibility [x_χ^(con)= ν(2-η)]. Scaling corrections for the critical point are of order L_ eff^-(ω+1/ν), L_ eff^-(2ω+1/ν), etc. Note that as we applied the quotients method at the crossings of ξ^ (con) / L_ eff, ξ^ (dis) / L_ eff, and U_4, the data sets of our simulations were tripled for each pair of system sizes used and thus our practice was to use joint fits imposing a common extrapolation to the thermodynamic limit. Finally, the exponent ω is fixed to the value ω = 1/2 throughout the analysis below, see Appendix <ref>. Finally, some comments on the fitting procedure: We restrict ourselves to data with L≥ L_ min and to determine an acceptable L_ min we employ the standard χ^2/ DOF-test for goodness of fit, where χ^2 is computed using the complete covariance matrix and DOF denotes the number of degrees of freedom. Specifically, we consider a fit as being fair only if 10% < Q < 90%, where Q denotes the probability of finding a χ^2 value which is even larger than the one actually found from our data <cit.>. § RESULTS We start the presentation of our results in Fig. <ref> where a joint fit of the form σ_c,L_ eff=σ_c+ b_1 L_ eff^-(ω+1/ν)+ b_2 L_ eff^-(2ω+1/ν) + b_3 L_ eff^-(3ω+1/ν) , provides the estimate σ_ c = 9.48391(50) for the critical field, in excellent agreement (but higher numerical accuracy) with the earlier result 9.48(3) of Ref. <cit.>. The coefficients b_k with k=1,2,3 are just scaling amplitudes and the quality is quite good (Q ∼ 45%). Figures <ref> and <ref> document the infinite-limit size extrapolations of the main critical exponents ν and η using also joint fits of the form (<ref>) in linear and quadratic L_ eff^-ω order and with cutoff sizes L_ min = 2 and 3, respectively. In both cases a fair fit quality is obtained, namely Q∼ 25% and 18%, respectively. Evidently, the obtained estimates ν = 0.516(18) and η = 0.014(23) are compatible to the mean-field (MF) values ν^ (MF)=1/2 and η^ (MF)=0. Obtaining the critical exponent α of the specific heat is much more trickier in most cases, and the random-field problem is no exception <cit.>. The specific heat of the RFIM can be computed via ground-state calculations and the bond-energy density E_J <cit.>. This is the first derivative ∂ E/∂ J of the ground-state energy with respect to the random-field strength σ <cit.>. The σ-derivative of the sample averaged quantity E_J then gives the second derivative with respect to σ of the total energy and thus the sample-averaged specific heat C. The singularities in C can also be studied by computing the singular part of E_J, as E_J is just the integral of C over σ. Thus, one may estimate α from E_J at σ = σ_ c <cit.> via the scaling form E_J(σ_ c,L_ eff) = E_J,∞ + b L_ eff^(α-1)/ν(1+b'L_ eff^-ω), where E_J,∞, b, and b' are non-universal constants. Since α^ (MF) = 0 and ν^ (MF)=1/2 above the upper critical dimension as already noted above, it is expected that (α -1)/ν = -2. Obviously, the use of Eq. (<ref>) for the application of standard finite-size scaling methods requires an a priori knowledge of the exact value of the critical random-field strength σ_ c [An alternative approach based on a three lattice-size variant of the quotients method has been presented in Refs. 
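The extrapolation of the crossings amounts to a small nonlinear least-squares problem; a sketch using scipy is given below. The crossing values are synthetic placeholders, and ω = 1/2 and 1/ν = 2 are fixed to their expected values, as in the analysis; in practice the fit is performed jointly over the three dimensionless quantities using the full covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

omega, inv_nu = 0.5, 2.0                      # fixed correction and mean-field exponents
L = np.array([2.0, 3.0, 4.0, 5.0])            # smaller size of each (L, 2L) pair
L_eff = L ** (7.0 / 6.0)

def crossing(L_eff, sigma_c, b1, b2):
    return (sigma_c
            + b1 * L_eff ** (-(omega + inv_nu))
            + b2 * L_eff ** (-(2.0 * omega + inv_nu)))

# Synthetic stand-ins for the measured crossings sigma_{c, L_eff}.
rng = np.random.default_rng(1)
sigma_cross = crossing(L_eff, 9.48391, 3.0, -5.0) + 1e-4 * rng.normal(size=L.size)

popt, pcov = curve_fit(crossing, L_eff, sigma_cross, p0=[9.5, 1.0, 0.0])
print(f"sigma_c = {popt[0]:.5f} +/- {np.sqrt(pcov[0, 0]):.5f}")
```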
<cit.> but is not applicable here due to the limited number of available system sizes.]. Fortunately, we currently have at hand such a high-accuracy estimate of the critical field, see Fig. <ref>. Thus, we have performed additional simulations exactly at the critical point σ_ c= 9.48391 for all range of the accessible system sizes using the standard averaging of 10^6 samples. Data for the bond-energy density are shown in the main panel of Fig. <ref> as a function of 1/L_ eff. The solid line is a fair fit (Q ∼ 23%) of the form (<ref>) excluding the smaller system sizes (L_ min=5) while fixing the exponents α, ν, and ω to their expected values. As an additional consistency check we present in the inset of Fig. <ref> the scaling behavior of a “specific-heat-like” quantity C obtained from the bond-energy density derivative with respect to the random-field strength σ at the critical point σ_ c = 9.48391 and using again 10^6 samples. For C the following scaling ansatz is expected C ∼ c_1L_ eff^α/ν(1+c_2L_ eff^-ω) ∼ c_1+c_2'L_ eff^-1/2, since α/ν=0 at the mean-field level. As it is evident from the plot, the data become rather noisy with increasing system size (see also Ref. <cit.>). Therefore we exclude from our fitting attempt the largest system size L = 10 where statistical errors are larger than 30%. The solid line shows a simple linear fit of the form (<ref>) excluding the smaller sizes (L_ min = 4) with an acceptable fitting quality (Q ∼ 89%). § SUMMARY We have presented a finite-size scaling analysis of the 7D random-field Ising model with a Gaussian field distribution and periodic boundary conditions. Indeed, above the upper critical dimension the choice of boundary conditions remains crucial <cit.>. Ground-state simulations in combination with recent advancements in finite-size scaling and reweighting methods for disordered systems <cit.> allowed us to provide a high-accuracy confirmation of the mean-field behavior of the model. A major point has been the numerical verification for the use of an effective length-scale L_ eff = L^D / D_ u in all finite-size scaling relations as has been proposed for the pure Ising ferromagnet <cit.> and also the clarification with respect to the corrections-to-scaling exponent ω in Ising systems above the upper critical dimension. Currently, we are working exactly at D_ u, where characteristic logarithmic scaling violations have been reported <cit.> but still await for a detailed confirmation. We would like to thank Jesús Salas for helping us to carry out numerical checks of the results in Appendix <ref>. N. G. Fytas is grateful to the colleagues in the Department of Theoretical Physics I at Complutense University of Madrid for their warm hospitality, during which part of this work was completed. We acknowledge the provision of computing time on the parallel computer clusters Zeus and Pluto of Coventry University. This work was supported in part by Grants No. PID2022-136374NB-C21, PGC2018-094684-B-C21, funded by MCIN/AEI/10.13039/501100011033 by “ERDF A way of making Europe” and by the European Union. The research has received financial support from the Simons Foundation (grant No. 454949, G. Parisi). § SCALING CORRECTIONS IN THE LARGE-N LIMIT OF THE O(N) MODEL FOR D>4 Benefiting from Brézin’s analysis in Ref. <cit.>, we deduce the corrections-to-scaling exponent ω for the large-N limit of the O(N) model. §.§ General framework Let us start by recalling the basic definitions from the original work by Brézin <cit.>. 
We consider a ferromagnetic system with an O(N)-symmetric, nearest-neighbor Hamiltonian on a hypercybic lattice of linear size L ℋ = - J N ∑_⟨𝐱,𝐲⟩S⃗_𝐱·S⃗_𝐲 , S⃗_𝐱·S⃗_𝐱=1 , with periodic boundary conditions. From this point and on we shall be using the dimensionless inverse temperature β = J/T. The model greatly simplifies in the limit N→∞. In the paramagnetic phase, β≤β_c, the propagator [G(𝐫)=⟨S⃗_𝐱·S⃗_𝐱+𝐫⟩] is G(𝐫)=1/β1/L^D∑_𝐪e^i𝐪𝐫/m_L^2+λ(𝐪) , where λ(𝐪)=∑_i=1^D 2(1-cos q_i), 𝐪=2π/L(n_1,n_2,…,n_D), 0≤ n_i≤ L-1, and the mass term m_L^2 is the inverse-squared correlation length m_L^2=1/ξ_L^2. One relates m_L^2 and β through the gap equation which simply codes the constraint G(𝐫=0)=1 β=1/L^D∑_𝐪1/m_L^2+λ(𝐪) . Note that the dispersion relation λ(𝐪) depends crucially on our choice of the nearest-neighbor lattice interaction. In fact, the only feature shared by all local-interaction Hamiltonians is λ(𝐪→0)=𝐪^2+ O(q_i^4). As it is well-known, the problem becomes much simpler in the thermodynamic limit (where anyway the choice of boundary conditions becomes inconsequential) G(𝐫)=1/β∫_B.Z.d^D𝐪/(2π)^D e^i𝐪𝐫/m_∞^2+λ(𝐪) , β= ∫_B.Z.d^D𝐪/(2π)^D 1/m_∞^2+λ(𝐪) , where B.Z. stands for the first Brillouin zone and -π < q_i < π for i=1,2,…,D. Note that the integral in Eq. (<ref>) is convergent for D>2 even if we plug m^2_∞=0. The problem we shall be dealing here is the precise connection between Eqs. (<ref>) and (<ref>) as L grows, for D>4. The alert reader will note that this connection cannot be smooth because of the singular behavior at m_L^2=0 and 𝐪=0 (the strong singularity is characteristic of the periodic boundary conditions) 1/L^D∑_𝐪1/m_L^2+λ(𝐪) = 1/L^Dm_L^2 + L^2-D×(regular term in the limit m^2_L→ 0) . The analysis by Brézin <cit.> puts the above observation in a sound mathematical footing. §.§ The (finite) Poisson summation formula Let H(q) be a smooth, periodic function H(q)=H(q+2π). One starts by recalling the (finite) Poisson summation formula 1/L∑_k=0^L-1 H(q= 2π k/L)=∑_n=-∞^∞∫_-π^πd q/2π H(q)e^i q n L . If the function H depends on a D-dimensional argument, H(𝐪), and if it is periodic (with period 2π) along every one of the D axes in the 𝐪 space, then one can use Eq. (<ref>) in a nested way 1/L^D∑_k_1=0^L-1…∑_k_D=0^L-1 H[𝐪= 2π/L(k_1,k_2,…,k_D) ])=∑_n_1=-∞^∞…∑_n_D=-∞^∞∫_B.Z.d𝐪/(2π)^D H(𝐪)e^i L 𝐪·(n_1,n_2,…,n_D) . Let us now use the notation 𝐧=(n_1,n_2,...,n_D) and the short hand ∑_𝐧 to refer to the multi-dimensional series in the r.h.s. of Eq. (<ref>) [∑_𝐧' will be the series in which the term 𝐧=(0,…,0) has been excluded]. Hence, the gap equation (<ref>) can be rewritten as β=1/L^D∑_𝐪1/m_L^2+λ(𝐪)= ∫_B.Z.d^D𝐪/(2π)^D 1/m_L^2+λ(𝐪) + ∑_𝐧' ∫_B.Z.d^D𝐪/(2π)^D e^i L 𝐪·𝐧/m_L^2+λ(𝐪). Let us now introduce the notation y^2=L^2 m_L^2=(L/ξ_L)^2 , and analyze the remainder term R(y,L)≡∑_𝐧' ∫_B.Z.d^D𝐪/(2π)^D e^i L 𝐪·𝐧/m_L^2+λ(𝐪) , m_L=y/L . On the view of Eq. (<ref>), one may expect for small m_L that R(y,L)∼1/L^D m_L^2=L^2-D/y^2 . Our analysis is based on the above asymptotic estimate (that we shall now derive). However, because we are interested in corrections to scaling, we shall need to extend this analysis by obtaining as well the next-to-leading term in Eq. (<ref>). Brézin did the following simplification that is only valid at small 𝐪, and which is, fortunately, the regime of interest ∫_B.Z.d^D𝐪/(2π)^D e^i L 𝐪·𝐧/m_L^2+λ(𝐪)≈∫_ℝ^Dd^D𝐪/(2π)^D e^i L 𝐪·𝐧/m_L^2+ 𝐪^2=∫_0^∞dt ∫_ℝ^Dd^D𝐪/(2π)^D e^-t(m_L^2+𝐪^2)+ i L 𝐪·𝐧 . 
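The finite Poisson summation formula used above is easy to verify numerically for the one-dimensional lattice propagator; the short check below compares the two sides for arbitrary illustrative values of L and the mass term, truncating the image sum at n_max. The scipy oscillatory-weight quadrature handles the e^{iqnL} factors.

```python
import numpy as np
from scipy.integrate import quad

def poisson_check(L=8, m2=0.3, n_max=10):
    """Compare (1/L) sum_k H(2*pi*k/L) with sum_n int dq/(2pi) H(q) exp(i q n L)."""
    H = lambda q: 1.0 / (m2 + 2.0 * (1.0 - np.cos(q)))     # 1D lattice propagator

    lhs = np.mean(H(2.0 * np.pi * np.arange(L) / L))

    rhs = quad(lambda q: H(q) / (2.0 * np.pi), -np.pi, np.pi)[0]   # n = 0 term
    for n in range(1, n_max + 1):                                   # n and -n pairs
        rhs += 2.0 * quad(lambda q: H(q) / (2.0 * np.pi), -np.pi, np.pi,
                          weight="cos", wvar=n * L)[0]
    return lhs, rhs

print(poisson_check())   # the two values should agree to high accuracy
```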
In the above expression we used the identity 1/A=∫_0^∞dt e^-t A , which allows us to make explicit the integral over 𝐪 (which is now a Gaussian integral) ∫_B.Z.d^D𝐪/(2π)^D e^i L 𝐪·𝐧/m_L^2+λ(𝐪)≈L^2-D/(4π)^D/2∫_0^∞dt/t^D/2 e^-t y^2 - 𝐧^2/4t , where y was defined in Eq. (<ref>). Plugging now Brézin's approximation (<ref>) into Eq. (<ref>), we obtain R(y,L)≈L^2-D/(4π)^D/2∫_0^∞dt/t^D/2 e^-t y^2 g(t) , g(t)= ∑_𝐧'e^-𝐧^2/(4t)=[∑_n=-∞^∞e^-n^2/4t]^D-1 . Note that g(t) behaves for small t as g(t→ 0) ∼ 2D e^-1/4t , hence g(t) regulates the divergence at small t in the integration measure of Eq. (<ref>) (namely t^-D/2). We also need a strong command on the behavior of g(t→∞). Let f(x) be an (aperiodic) smooth function and F(q) its Fourier transform F(k)=∫_-∞^∞d x f(x) e^-i2π k x , then, the Poisson summation formula tells us that ∑_n=-∞^∞ f(n) = ∑_k=-∞^∞ F(k) . Using the above identity for f(x)=exp(-x^2/4t), one obtains ∑_n=-∞^∞e^-n^2/4t=√(4π t)[1+2∑_k=1^∞e^-4π^2 k^2 t] , so that one finds for large t g(t)/(4π t)^D/2∼ 1 - 1/(4π t)^D/2 + 2D e^-4π^2 t/√(4π t)….. Plugging this expansion into Eq. (<ref>), we see that disregarding the leading term, namely 1, one would find a convergent integral even for y=0. Hence, we conclude that β=1/L^D∑_𝐪1/m_L^2+λ(𝐪)= ∫_B.Z.d^D𝐪/(2π)^D 1/m_L^2+λ(𝐪) + R(y = L m_L,L) , with an asymptotic behavior for the remainder term (as y→ 0) R(y,L)=L^2-D[1/y^2 + A + ...] , where A is some constant. The interested reader is invited to compare Eqs. (<ref>) and (<ref>) with Eq. (<ref>). §.§ Scaling at the critical point Let us consider the gap equation at β=β_c for an infinite and a finite system β_c = ∫_B.Z.d^D𝐪/(2π)^D 1/λ(𝐪) , β_c = ∫_B.Z.d^D𝐪/(2π)^D 1/m_L^2+λ(𝐪) + R(y = L m_L,L) . Taking the difference of the above two equations (and multiplying both sides of the resulting equation by L^2) one obtains y^2 ∫_B.Z.d^D𝐪/(2π)^D 1/[m_L^2+λ(𝐪)]λ(𝐪)= L^2R(y,L) . Now, for D<6 one gets (B is some constant) ∫_B.Z.d^D𝐪/(2π)^D 1/[m_L^2+λ(𝐪)]λ(𝐪)= ∫_B.Z.d^D𝐪/(2π)^D 1/λ^2(𝐪) + B m_L^D-4 + O(m_L^2) , (for D>6 the leading correction is of the order of m_L^2 and at D=6 one expects something like m_L^2log(1/m_L^2)). Reference <cit.> introduces the notation σ(D)=∫_B.Z.d^D𝐪/(2π)^D 1/λ^2(𝐪) . So, collecting everything and recalling Eq. (<ref>), we get at the critical point and D<6 y^2[σ(D)+ By^D-4/L^D-4 +… ]= L^2R(y,L)=L^4-D[1/y^2 + A + ...] . Note here that Brézin considered only the case without any corrections to scaling (i.e. A= B=0). In such a case, one gets y[σ(D)]^1/4=L^4-D/4 or ξ_L(β_c)=L^D/4 [σ(D)]^1/4 . For the needs of the present work we need to also consider the corrections-to-scaling terms. Equation (<ref>) can be rewritten as y[σ(D)]^1/4=L^4-D/4[1+ A y^2+…/1+ B/σ(D)y^D-4/L^D-4+…]^1/4 . It is maybe even better to write this in terms of ξ_L, ξ_L(β_c)/L^D/4= [σ(D)]^1/4[1+ B/σ(D)y^D-4/L^D-4+…/1+ A y^2+…]^1/4 . (Note that for D > 6, corrections of the order of y^D-4/L^D-4 become corrections of order y^2/L^2). Now, recalling Eq. (<ref>), we see that y^2∼ L^(D-4)/2. On the other hand, (y/L)^D-4∼ 1/L^[(D-4)D]/4 (that becomes 1/L^(D/2) for D > 6). Therefore, in the regime 4 < D < 6 we identify a dominant exponent ω_1 and a subleading one ω_2, as follows ω_1=D-4/2 , ω_2=(D-4)D/4 . And, of course, one should expect all kind of sub-leading corrections terms, such as L^-2ω_1, L^-(ω_1+ω_2), etc. Relating the result of Eq. (<ref>) to the random-field problem (where D_ u = 6 rather than 4) leads to our main result ω_1=D-6/2. Hence, for the present case of D = 7 we obtain ω = 1/2.
http://arxiv.org/abs/2307.00195v1
20230701020811
Partial Linear Cox Model with Deep ReLU Networks for Interval-Censored Failure Time Data
[ "Jie Zhou", "Yue Zhang", "Zhangsheng Yu" ]
stat.ME
[ "stat.ME" ]
Partial Linear Cox Model with Deep ReLU Networks for Interval-Censored Failure Time Data ======================================================================================== The partial linear Cox model for interval-censoring is well studied under the additive assumption but remains under-investigated without it. In this paper, we propose to use a deep ReLU neural network to estimate the nonparametric components of a partial linear Cox model for interval-censored data. This model not only retains the interpretability of the parametric component but also improves the predictive power compared to the partial linear additive Cox model. We derive the convergence rate of the proposed estimator and show that it can break the curse of dimensionality under certain smoothness assumptions. Based on this rate, the asymptotic normality and the semiparametric efficiency are also established. Intensive simulation studies are carried out to demonstrate the finite-sample performance on both estimation and prediction. The proposed estimation procedure is illustrated on a real dataset. Keywords: deep neural network; partial linear Cox model; semiparametric inference; interval-censoring § INTRODUCTION In survival analysis, reliability studies and epidemiological studies, the exact failure time is often not available; rather, it is observed only as an interval determined by regular follow-up. For example, in the Chinese Longitudinal Healthy Longevity Survey (CLHLS, <cit.>), a subject's cognitive ability is measured by the Mini-Mental State Examination (MMSE, <cit.>) at each visit, and cognitive impairment (CI) is defined by scores lower than a pre-specified threshold. Therefore, the onset of CI is only known to lie in the interval formed by two consecutive visits. In statistics, this type of data is called interval-censored data. Another important application of interval-censored data is in studies of transfusion-related acquired immune deficiency syndrome (AIDS), where the time at which a subject becomes positive for the human immunodeficiency virus (HIV) is never known exactly; the change in status is only known to occur between two monitoring times. Regression analysis of interval-censored failure time data aims to estimate the effects of covariates on the failure time. Semiparametric regression models with linear assumptions on the covariates have been widely developed, including the Cox model (<cit.>,<cit.>), the proportional odds model (<cit.>), the accelerated failure time model (<cit.>), the additive hazard model (<cit.>) and the transformation model (<cit.>). Although the linearity assumption on covariate effects in the aforementioned models provides simplicity and interpretability, it is often violated in applications. For example, <cit.> depicted the U-shaped relationship between blood pressure and the risk of CI in the CLHLS dataset. Partial linear models have been developed to model non-linear and linear effects simultaneously: <cit.> first investigated a partial linear transformation model for current status data, <cit.> extended the model to additive multivariate cases, and <cit.> and <cit.> relaxed the assumption of a stepwise-constant baseline hazard function in these two models. Nevertheless, owing to the curse of dimensionality, none of the existing work has relaxed the additive assumption to allow multivariate nonparametric function estimation in the partial linear Cox model for interval-censoring. 
Intuitively, the overall convergence rate of these models decreases exponentially with respect to the dimension of the nonparametric component and will not be fast enough to establish the asymptotic normality of the parametric component (<cit.>). Recently, deep neural networks (DNN, referred to as deep learning in <cit.>) have emerged as a promising tool to alleviate the curse of dimensionality and have demonstrated superior performance on high-dimensional problems such as image classification (<cit.>), natural language processing (<cit.>), and speech recognition (<cit.>). Theoretical analyses attribute their success to their powerful ability to approximate functions from specific spaces such as the nested function space (<cit.>) or the mixed-smoothness Besov space (<cit.>). For this reason, they have been extensively used in various types of survival models, such as the Cox model (<cit.>), the accelerated failure time model (<cit.>), the illness-death model (<cit.>), the competing risk model (<cit.>), the cure rate model (<cit.>), and the nonparametric Cox model for interval-censoring (<cit.>). In this paper, we propose the deep partial linear Cox model (DPLC) for interval-censored data, in which the covariates requiring interpretability are kept linear and all the remaining covariates are modelled nonparametrically by a deep ReLU network. The proposed procedure enjoys several attractive properties. First, the convergence rate is shown to be free of the nonparametric dimension under suitable smoothness assumptions, suggesting that the model can break the curse of dimensionality. Second, the parametric estimator is shown to be asymptotically normal and semiparametrically efficient based on this rate. Third, a covariance estimator based on a least-squares regression, rather than the computationally intensive bootstrap, is provided to facilitate statistical inference. The rest of the paper is organised as follows. In Section <ref>, we introduce the notation, model, assumptions and the full likelihood of the observations for interval-censored data. Section <ref> presents the estimation procedure based on the deep ReLU network; the asymptotic properties of the proposed estimators are also discussed. In Section <ref>, the finite-sample performance as well as comparisons with other models are evaluated by a simulation study. In Section <ref>, we apply the proposed model to two real datasets. Section <ref> concludes with remarks and discussion. Technical details of all the proofs are given in Supplementary Material. § MODEL AND LIKELIHOOD Suppose X and Z are ℝ^d- and ℝ^r-valued covariates affecting the failure time T. We assume that, given X and Z, T follows the partial linear Cox model, i.e., its conditional cumulative hazard function has the form Λ(t|X,Z)=Λ_0(t)exp{X^⊤β+g(Z)}, where Λ_0(t) is the unspecified baseline cumulative hazard function and β and g correspond to the parametric coefficients and the nonparametric function, respectively. For interval-censored data, T is known to lie within a random interval (U, V), where U and V are the two examination times that satisfy U<T≤ V with probability 1. We assume that T is independent of (U, V) given (X, Z) and that the joint distribution of (U, V) given (X, Z) is free of the parameters θ=(β,γ). 
Let Δ_1 := 1_T≤ U, Δ_2 := 1_U<T≤ V, Δ_3:= 1_T>V, the observed information for a single object in the interval-censoring denoted by O:=(X, Z, U, V,Δ_1,Δ_2,Δ_3) is distributed as p(O)= (1-S(U|,))^Δ_1S(U|,)-S(V|,)^Δ_2S(V|,)^Δ_3p_,(U,V)h(,), where S(·|,)=exp(-Λ(·|,)) is the conditional survival function and p_,(·, ·) and h(,) are the density functions of (U, V) and (, ), respectively. Under the assumption that the distribution of (U, V) is non-informative for T, then the log likelihood function of (X_i, Z_i, U_i, V_i,Δ_1i,Δ_2iΔ_3i):i=1,..., n is l_n(β,Λ,g;·):=∑_i=1^n l(β,Λ, g; O_i) where l(β,Λ, g; O)= Δ_1log(1- S(U|X, Z))+Δ_2log(S(U|X, Z)-S(V|X, Z))+Δ_3log(S(V|X, Z)). § ESTIMATION AND ASYMPTOTIC PROPERTIES In this section, we consider the estimation of unknown parameters (β,Λ_0, g). A natural approach is to consider the sieve method(<cit.>), e.g., maximizing l_n(β,Λ_0, g) over the product space of ^d , sieve function spaces of Λ_0 and g that grow in capacity with respect to n. We choose these two spaces as a monotone B-Spline space and a deep ReLU network space. The monotone B-Spline space is a non-negative linear span of the integrated spline basis functions M_k(<cit.>), i.e. = {γ_kM_k:γ_k≥ 0}. Since each M_k is non-decreasing function ranging from 0 to 1, constraining the coefficients to be non-negative guarantees the non-negativity and monotonicity of the estimator of the baseline cumulative hazard function Λ_0. As with B-splines, the q_n = p_n + l basis functions are fully determined once the degree and the interior knot set are specified, where p_n is the cardinality of the interior knot set. A deep ReLU network space with input dimension p_0, depth K, hidden unit vector p=(p_0,⋯,p_K,p_K+1), sparsity constraint s and norm constraint D is defined as 𝒢(K, p, s, D)= {g(z)=(W^(K)σ(·)+b^(K))∘⋯∘ (W^(1)z+b^(1)):^p_0↦^p_K+1,. .g_∞≤ D, W^(ℓ)∈^p_ℓ+1× p_ℓ, b^(ℓ)∈^p_ℓ+1,ℓ=0,⋯, K,. . ∑_ℓ=0^KW^(ℓ)_0 + b^(ℓ)_0≤ s,max_ℓ=0,⋯, KW^(ℓ)_∞∨b^(ℓ)_∞≤ 1}, where ·_0 and ·_∞ are the ℓ_0-norm and ℓ_∞-norm of a matrix, respectively and W^(ℓ) and b^(ℓ) are the weights and biases of the network, respectively. The estimation of (,Λ, g) is set to be the maximizer of l_n over ^d×ℳ×𝒢(K, p, s, ∞) with p_0=r and p_K+1=1 after empirical centralization, that is (_n,Λ̂_n, ĝ_n) =_^d×ℳ×𝒢(K, p, s, ∞)l_n(β,Λ, g;·) Such optimization poses several challenges. First, stochastic gradient descent(SGD, <cit.>), the most widely used algorithm in deep learning and its variants such as Adam(<cit.>) can not operate with the non-negativity constraint which is required on the coefficients of . We remove the non-negativity constraint by reparametrization, that is, ={exp(γ̃_k)M_k:γ̃_k∈}. The second challenge is regarding the specification of the hyperparameters of the network, mainly the depth K and the knots p. Unfortunately, there are no suitable model selection criteria for deep neural networks, such as Akaike information criterion(AIC) or Bayesian information criterion(BIC)(<cit.>) for linear sieve models. Meanwhile, cross-validation(CV), another popular method in model selection, is not applicable either due to the computational complexity of deep neural networks. Therefore, we select these hyperparameters in a lightweight and data-adaptive manner. A part of the dataset is randomly hold out as a validation set to select the hyperparameters among the candidates over K and p, and then the selected hyperparameters are used to rerun the estimation on the whole dataset. 
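To make the estimation step concrete, here is a compressed PyTorch sketch of the negative log-likelihood being maximised: g is a small ReLU network, and the monotone-spline coefficients enter through exp(γ̃_k) as in the reparametrised space ℳ. The I-spline basis matrices are assumed to be precomputed at the observed (U, V) and are replaced by random placeholders below; everything here is an illustrative reconstruction rather than the authors' released code.

```python
import torch
import torch.nn as nn

class DPLC(nn.Module):
    """Sketch of the deep partial linear Cox model for interval-censored data."""
    def __init__(self, d, r, q_n, width=64, depth=2):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(d))       # linear coefficients
        self.gamma = nn.Parameter(torch.zeros(q_n))    # log spline coefficients (exp-reparametrised)
        layers, p = [], r
        for _ in range(depth):
            layers += [nn.Linear(p, width), nn.ReLU()]
            p = width
        layers.append(nn.Linear(p, 1))
        self.g = nn.Sequential(*layers)                # deep ReLU nonparametric part

    def cum_hazard(self, M, X, Z):
        """Lambda(t|X,Z), with M the I-spline basis evaluated at t (n x q_n)."""
        baseline = M @ torch.exp(self.gamma)           # monotone baseline cumulative hazard
        g_Z = self.g(Z).squeeze(-1)
        g_Z = g_Z - g_Z.mean()                         # empirical centring of g
        return baseline * torch.exp(X @ self.beta + g_Z)

    def neg_loglik(self, M_U, M_V, X, Z, d1, d2, d3, eps=1e-8):
        S_U = torch.exp(-self.cum_hazard(M_U, X, Z))
        S_V = torch.exp(-self.cum_hazard(M_V, X, Z))
        ll = (d1 * torch.log(1.0 - S_U + eps)
              + d2 * torch.log(S_U - S_V + eps)
              + d3 * torch.log(S_V + eps))
        return -ll.mean()

# Minimal usage with random placeholders for the data and spline basis matrices.
n, d, r, q_n = 200, 4, 10, 8
X, Z = torch.randn(n, d), torch.rand(n, r)
M_U = torch.rand(n, q_n).cumsum(dim=1) / q_n           # placeholder I-spline values at U
M_V = M_U + torch.rand(n, q_n) / q_n                   # guarantees Lambda(V) >= Lambda(U)
d1, d2, d3 = torch.zeros(n), torch.ones(n), torch.zeros(n)

model = DPLC(d, r, q_n)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss = model.neg_loglik(M_U, M_V, X, Z, d1, d2, d3)
loss.backward()
optimiser.step()
print(float(loss))
```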
The third challenge is to deal with the non-concavity with respect to the weights and biases of the network. We solve this problem to some extent by repeating this optimization for many times, each time with different initial values. Although the global maximizer is not reachable, this increases the probability of finding a better estimator. This claim is verified by numeric experiments later in Section <ref>. We now describe the asymptotic properties of the estimator (_n,Λ̂_n, ĝ_n). Denote δ_n=max_i=0,⋯,Kn^-α_i/(2α_i+d̃_i)∨ n^-α_Λ/(2α_Λ+1) where α,d̃ and α_Λ is defined in the follow conditions. [C1] β belongs to a compact subset of ^d [C2] (1) ((X, Z)^⊗ 2) is positive definite. (2) both X and Z are bounded with probability 1. [C3] The support of U denoted as [U_m, U_M] and that of V denoted as [V_m, V_M] satisify that 0=U_m<U_M=V_m<V_M. The densities of U and V are both bounded away from zero and infinity on their support. [C4] There exists some η∈ (0, 1) such that uVar(X|U)u≥ηu(XX|U)u and uVar(X|V)u≥ηu(XX|V)u for all u∈^d or uVar(Z|U)u≥ηu(ZZ|U)u and uVar(Z|V)u≥ηu(ZZ|V)u for all u∈^r. [C5] Λ_0 ∈ℋ_1^α_Λ([L, R], M), the Hölder space defined by ℋ_r^α(𝒟, M)=g:𝒟↦: ∑_κ:|κ|<α∂^κ g_∞+∑_κ:|κ|=αsup_x,y∈𝒟,x y|∂^κg(x)-∂^κg(y)|/x-y_∞^α-α≤ M, and is monotonically increasing from 0 with α_Λ≥ 2. [C6] g∈ℋ(q,α,d,d̃, M), the composite smoothness function space(<cit.>) defined by ℋ(q,α,d,d̃, M):={ g=g_q∘⋯∘ g_0: g_i=(g_i1,⋯,g_id_i+1), g_i: ^d_i↦^d_i+1, g_ij∈ℋ^α_i_d̃_i([a_i,b_i]^d̃_i, M)} and g(Z)=0. [C7] The hypyer-parameters of the deep ReLU network satisfies that K=O(log n), s=O(nδ_n^2log n), nδ_n^2≲min_k=1,⋯,Kp_k≤max_k=1,⋯,Kp_k≲ n and p_n=O(n^1/(2α_Λ+1)). Conditions (C1)-(C4) are similar to those in <cit.>. Condition (C5) and Condition (C6) restrict the function space of Λ and g, respectively. Condition (C7) describes the growing of hyperparameters of DNN with respect to n. We have the following theorems as n→∞. Under Conditions (C1)–(C7), the estimator (_n,Λ̂_n, ĝ_n) is consistent for (β_0, Λ_0, g_0) and the convergence rate is δ_nlog^2n, e.g., ||β̂_n-β_0||_2+||Λ̂_n-Λ_0||_L_2(U, V)+ĝ_n-g_0_L_2()=O_P(δ_nlog^2 n). Suppose Conditions (C1)-(C7) hold, I_β is nonsingular and nδ_n^4→ 0, then √(n) (β̂_n-β_0)⇝ N(0, I^-1_β), where I_β is the semiparametric information bound for β. It is interesting to notice that the polynomial term max_i=0,⋯,Kn^-α_i/(2α_i+d̃_i)∨ n^-α_Λ/(2α_Λ+1) is free of the nonparametric dimension r which means curse of dimensionality is much eased in this model. Furthermore, the estimator of β attains the asymptotic normality and the semiparametric efficiency bound even though the overall convergence rate is slower than n^-1/2. To perform statistical inference on β_0 with finite sample size according to Theorem <ref>, one has to estimate the information matrix I_β. Consider a parametric smooth submodel with parameter (β, Λ_(s),g_(s)), where Λ_(0)=Λ, g_(0)=g and ∂/∂ sΛ_(s)=h_1,∂/∂ s g_(s)=h_2, the score operators are defined by l̇_β(β,Λ,g; o) =∂/∂β l(β,Λ_0,g; o), l̇_1(β,Λ,g; o)[h_1] =∂/∂ s l(β,Λ_(s),g; o)|_s=0, l̇_2(β,Λ,g; o)[h_2] =∂/∂ s l(β,Λ_0,g_(s); o)|_s=0. The least favourable direction (h_1^*, h_2^*) is obtained by projection l̇_β into the product space of l̇_1 and l̇_2, or equivalently, minimize ρ(h_1, h_2)=||l̇_β(β,Λ,g; O)-l̇_1(β,Λ,g; O)[h_1] - l̇_2(β,Λ,g; O)[h_2]||^2. Then I_β can be calculated by I_β=[l̇^*_β(β,Λ,g; O)^⊗ 2]=[(l̇_β(β,Λ,g; O)-l̇_1(β,Λ,g; O)[h^*_1] - l̇_2(β,Λ,g; O)[h^*_2])^⊗ 2], where l̇^*_β(β,Λ,g; O) is the efficient score function. 
Because there are no closed form expressions for (h_1^*, h_2^*), following <cit.>, we obtain ĥ_1^* and ĥ_2^*, the estimation of h_1^* and h_2^*, by minimizing the empirical version of projection (2) over the product space of another two deep ReLU network spaces with properly chosen hyperparameters. The estimator of I_β is then defined by pluging ĥ_1^* and ĥ_2^* into the empirical version of (3). § SIMULATION In this section, we demonstrate the numerical performance of DPLC and compare it in terms of both estimation and prediction with linear Cox regression and partial linear additive Cox regression for interval-censoring. These two models are implemented using the R packages icenReg and are abbreviated as CPH and PLAC, respectively. DPLC is implemented with the Python package pycox based on PyTorch(<cit.>). We first generate covariates and as follows ∼Binomal(1, 1/2) ∼Clayton(8, 0.5, [-2, 2]) T is generated with Λ_0(t)=μ t^κ, κ∈ (0.5, 1, 2) and _0=1.2. After T is generated, (U, V) is obtained by dividing [0, 5] into 10 equal-distance intervals with visit probability p∈ (0.4, 0.7). g is chosen from the following candidates: * (linear): g()=2.4∑_k=1^10(_k-1/2); * (additive): g()=1.2∑_k=1^10cos(2π/k_k); * (deep-1): g()=4.0∑_k=1^10(_k-1/2); * (deep-2): g()=4.5(max_k=1,2,3_k-min_i=1,2,3_k); Case 1 and Case 2 correspond to CPH and PLAC, respectively while Case 3 and Case 4 are designed for DPLC. The factors 2.4, 1.2, 4.0 and 4.5 in each case were used to scale Var(ϕ())/Var() within the range 4-6, i.e. to control the ratio of signals from the nonparametric and parametric components. Each dataset is randomly splitted with a 80:20 ratio as a training and validation set to select the best hyperparameters from all combinations of K∈{2, 3, 4} and p_k ∈{⌈ u/4r⌉:u=1,⋯,8}. We repeat this 5 times with different initial values for the optimization in (<ref>), with the chosen parameters corresponding to the maximal full likelihood in the validation set. The bias for the pamametric estimator β̂_n and its empirical standard error from 500 replications with n∈{500, 1000} are summarized in Table <ref>. As expected, the bias of DPLC is comparable to, if not slightly worse than CPH and PLAC in Case 1 and Case 2, respectively, since these two cases are specifically designed for them. However, in Case 3 and Case 4, CPH and PLAC are more seriously biased than DPLC and do not improve with increasing n, whereas DPLC does. For example, in Case 3 with n=500, p=0.5 and κ=1, the bias of β_1 for CPH and PLAC are -0.782 and -0.412, respectively, while the bias for DPLC is -0.052, a much smaller one and it decreases to -0.037 when n increases to 1000, but the bias for CPH and PLAC increases to -0.812 and -0.438, respectively. This phenomenon can be explained by the fact that the highly complicated nonparametric function g in Case 3 and Case 4 can be easily fitted by a deep ReLU network, whereas it can not be approximated well by any linear or additive function, and this inapproximability is further comfirmed with increasing n. As might be expceted, empirical standard error decreases with increasing n for all models and all censoring rates. Table <ref> presents the converage proportion of the 95% confidence intervals, suggesting that the empirical coverage probabilities for DPLC were generally around 95% and close to the nominal level in four cases while those for Cox and PLAC are far away from 95% in Case 3 and Case 4 due to the significant bias. 
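A schematic version of the simulation's data-generating mechanism is given below. For brevity the Clayton-copula dependence of Z is replaced by independent Uniform(-2, 2) marginals, the nonparametric effect is taken from Case 4, and a standard construction of (U, V) from the attended visits is assumed; these simplifications are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

def g_case4(Z):
    """Case 4 (deep-2): 4.5 * (max(Z_1..Z_3) - min(Z_1..Z_3))."""
    return 4.5 * (Z[:, :3].max(axis=1) - Z[:, :3].min(axis=1))

def simulate(n=500, beta0=1.2, mu=1.0, kappa=1.0, p_visit=0.4):
    X = rng.binomial(1, 0.5, size=n)
    Z = rng.uniform(-2.0, 2.0, size=(n, 10))        # simplification: independent marginals
    eta = beta0 * X + g_case4(Z)
    # Lambda_0(t) = mu * t**kappa  =>  T = (E / (mu * exp(eta)))**(1/kappa), E ~ Exp(1)
    T = (rng.exponential(size=n) / (mu * np.exp(eta))) ** (1.0 / kappa)

    grid = np.linspace(0.5, 5.0, 10)                # 10 equal-length intervals on [0, 5]
    U, V = np.zeros(n), np.full(n, np.inf)
    d1, d2, d3 = np.zeros(n, int), np.zeros(n, int), np.zeros(n, int)
    for i in range(n):
        visits = grid[rng.random(10) < p_visit]     # each visit attended with probability p
        if visits.size == 0 or T[i] > visits.max():
            d3[i] = 1                               # right-censored: T beyond last attended visit
            V[i] = visits.max() if visits.size else grid[-1]
        elif T[i] <= visits.min():
            d1[i] = 1                               # left-censored: T before first attended visit
            U[i] = visits.min()
        else:
            d2[i] = 1                               # genuinely interval-censored
            U[i] = visits[visits < T[i]].max()
            V[i] = visits[visits >= T[i]].min()
    return X, Z, U, V, d1, d2, d3

X, Z, U, V, d1, d2, d3 = simulate()
print("left / interval / right censored:", d1.mean(), d2.mean(), d3.mean())
```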
The performance in estimating of g measured on a test data formed of 4000 independent samples with the relative mean of squared error(RMSE) defined as RMSE(ĝ_n)=∑_i=1^n(ĝ_n(_i)-g_0(_i))^2/∑_i=1^n(g_0(_i)-g̅_0)^2, is reported in Table <ref>. The smaller the metric is, the more accurate an estimator is. Similar to the results for the parametric estimator, DPLC significantly outperforms both Cox and PLAC by a lot in Case 3 and Case 4. To name a few, 0.448 for DPLC versus 0.971 and 0.936 for CPH and PLAC in Case 4 with n=500,p=0.4 and κ=0.5. In Case 1 and Case 2, DPLC performs only slightly worse than CPH and PLAC. We evalute and compare the predictive power of CPH, PLAC and DPLC with the Intergrated Mean Square Error(IMSE) in Table <ref> defined by IMSE =1/N∑_i=1^N∫_0^U_i(1-Ŝ_n(t|_i,_i))^2dt+∫_V_i^τŜ_n^2(t|_i,_i)dt. It is can be seen from this table that the IMSE of DPLC is much smaller than that of CPH and PLAC in Case 3 and Case 4. For example, it is 0.118 for DPLC while is 0.173 and 0.169 for CPH and PLAC, respectively. In Case 1 and Case 2, CPH and PLAC outperform DPLC by only about 0.005 IMSE. Further analysis of the simulation study can be found in Supplementary Material. .5in § APPLICATION The Chinese Longitudinal Healthy Longevity Survey(CLHLS) is a follow-up survey of the elderly in China organized by Peking University Healthy Aging and Development Research Center and the National Development Research Institute. This longitudinal study covered 23 provinces across the country with the elderly aged 65 and above and is the earliest and longest social science survey in China(1998-2018). Its main objective is to identify the prevalent factors affecting the health and quanlity of life of the China's elderly. The questionnaire for the respondents includes the basic conditions of the elderly and their families, socio-economic background, self-assessment of health and quality of life, cognitive function, personality and psychological characteristics, daily activities, lifestyle, diseases and treatment, etc. In this section, our primary purpose is to assess the effectiveness of our method in adjusting for non-linear covariate effects associated with cognitive impairment while still maintaining good interpretability of some covariates. We use the 2008 wave of the CLHLS, which includes 5813 sujects. To determine the interval that brackets the time to cognitive impairment, the cognitive status is measured at each visit using the Chinese verision of MMSE, which includes 24 cognitive questions with scores ranging from 0-30. The right endpoint of this interval is considered to be the last visit time after which all MMSE scores are below 18, and the left endpoint is the previous visit time. The average length of the interval is 5.328 years if not right-censored. Following <cit.>, we include continuous covariates age, years of education, boday mass index(BMI), systolic blood pressures (SBP) and diastolic blood pressures (DBP) as Z, and the binary covariates sex, exercise, diabetes and stroke or cardiovascular diease as X. These covariates are summarized in Table 3 in Supplementary Material. We randomly divided the sample into a training set (80%of the total) and a test set (the remaining 20%). As in the simulation study, the DNN architecture is chosen from some pre-spcified candidates. Similar to the simulation study, we compare the predictive power between the DPLC, CPH and PLAC in terms of the IMSE. 
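The IMSE above can be evaluated by simple quadrature once a fitted survival function is available; a sketch is shown below with a dummy Ŝ standing in for any of the three fitted models, and τ set to 5 to match the follow-up window of the simulation.

```python
import numpy as np

def imse(S_hat, U, V, X, Z, tau=5.0, n_grid=200):
    """Integrated mean squared error via trapezoidal quadrature.

    S_hat(t, x, z) must return predicted survival probabilities on a time grid t.
    """
    total = 0.0
    for i in range(len(U)):
        t1 = np.linspace(0.0, U[i], n_grid)                 # first integral, over [0, U_i]
        total += np.trapz((1.0 - S_hat(t1, X[i], Z[i])) ** 2, t1)
        if np.isfinite(V[i]) and V[i] < tau:                # second integral, over [V_i, tau]
            t2 = np.linspace(V[i], tau, n_grid)
            total += np.trapz(S_hat(t2, X[i], Z[i]) ** 2, t2)
    return total / len(U)

# Dummy fitted survival function, only to make the sketch executable.
S_dummy = lambda t, x, z: np.exp(-0.5 * np.asarray(t))
U = np.array([1.0, 2.0]); V = np.array([2.0, 3.5])
X = np.array([0, 1]); Z = np.zeros((2, 10))
print(imse(S_dummy, U, V, X, Z))
```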
After fitting the model, the IMSE for DPLC is 0.098 while for CPH and PLAC, it is 0.131 and 0.119, respectively. This shows that the performance of our approach is empirically superior to PLAC which is also superior to CPH. This can be intuitively explained by the fact that those covariate effects are highly nolinear and the more complex the sieve, the better the fit. The estimated coefficients of the linearly modelled covariates are shown in Table <ref>. From this table, it can be seen that being male rather than female and more exercise are significantly associated with a lower risk of cognitive impairment while diabetes and stroke or cardiovascular diease are not significant. These conclusions are consistent with those of <cit.>. § CONCLUSION In this paper, we propose a partial linear Cox model with a deep neural network estimated nonparametric component for interval-censored failure time data. This model increases predictive power compared to the partial linear additive Cox model(<cit.>), while retaining interpretation for the covariates of primary interest. The estimators are showed to converge at a rate independent of the nonparametric covariate dimensionality and the parametric estimator is rigorously proved to be asymptotically normal and semiparametric efficient. As shown in simulation studies, the proposed model significantly outperforms the linear Cox model and the partial linear additive Cox model with respect to both estimation and prediction when the nonparametric function is enormously complex. This model is suitable for moderate sample sizes, but otherwise may encounter some problems due to the high non-convexity and complexity of DNN. With a small sample size, the optimization becomes unstable and one has to re-initialize for more often, while with a large sample, a large capacity DNN is preferred and the optimization becomes quite time-consuming. This work can be extended in several ways. For example, other types of semiparametric survival models such as the partial linear transformation model(<cit.>) can be developed using the same procedure. Another interesting future work is to use deep convolutional neural networks(CNN, <cit.>), a popular variant of DNN, to estimate the function g when the nonparametric covariate is a high-dimensional image(<cit.>) or other types of unstructured data. chicago
http://arxiv.org/abs/2307.01758v2
20230704145905
Properties of aqueous electrolyte solutions at carbon electrodes: effects of concentration and surface charge on solution structure, ion clustering and thermodynamics in the electric double layer
[ "Aaron R. Finney", "Matteo Salvalaglio" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.mtrl-sci" ]
Surfaces are able to control physical-chemical processes in multi-component solution systems and, as such, find application in a wide range of technological devices. Understanding the structure, dynamics and thermodynamics of non-ideal solutions at surfaces, however, is particularly challenging. Here, we use Constant Chemical Potential Molecular Dynamics (CμMD) simulations to gain insight into aqueous NaCl solutions in contact with graphite surfaces at high concentrations and under the effect of applied surface charges: conditions where mean-field theories describing interfaces cannot (typically) be reliably applied. We discover an asymmetric effect of surface charge on the double layer structure and resulting thermodynamic properties, which can be explained by considering the affinity of the surface for cations and anions and the cooperative adsorption of ions that occurs at higher concentrations. We characterise how the sign of the surface charge affects ion densities and water structure in the double layer and how the capacitance of the interface—a function of the electric potential drop across the double layer—is largely insensitive to the bulk solution concentration. Notably, we find that negatively charged graphite surfaces induce an increase in the size and concentration of extended liquid-like ion clusters confined to the double layer. Finally, we discuss how concentration and surface charge affect the activity coefficients of ions and water in the double layer, demonstrating how electric fields in this region should be explicitly considered when characterising the thermodynamics of both solute and solvent at the solid/liquid interface. § INTRODUCTION Carbon-electrolyte interfaces often feature in technologies and devices designed for energy storage <cit.> and water desalination<cit.>. Moreover, carbon allotropes are increasingly employed as nano-reactors,<cit.> as well as supports for liquid-phase catalysts<cit.>. A molecular-level picture of the structure and dynamics of multi-component liquid phases at the carbon interface is important to understand the physical chemistry involved in such technologies/devices in order to improve their design for functional applications. Molecular simulations, particularly molecular dynamics (MD), provide powerful tools to investigate such systems at the atomic level.<cit.> By explicitly capturing the atomistic details of the solid/liquid interface, MD-based methods enable predictions regarding the effect of changes to the bulk solution composition and the applied interfacial potential (that gives rise to a surface charge) on the properties of the so-called electric double layer (EDL). In turn, this allows for an assessment of the suitability of mean-field models that are commonly used to describe and predict the structure and electrochemical properties of solid-solution interfaces.<cit.> Gouy-Chapman theory predicts a monotonically decreasing concentration of ions in the immediate vicinity of electrodes with the same sign of charge, while the concentration of ions with opposite charge to the surface increases smoothly according to a Boltzmann distribution; thus, the solution screens the surface charge by establishing a diffuse EDL. <cit.> This fundamental model for the structure of charge carriers at electrodes inadequately describes the EDL when large potentials are applied and in the presence of high electrolyte concentrations.
By neglecting ion finite-sizes and their correlations, it fails to explain the change in the electrical properties of the graphite-electrolyte interface due to specific ion effects, which was demonstrated across the series of alkali chlorides at graphite. <cit.> The Gouy-Chapman-Stern model was developed to address some of these shortcomings, by accounting for the specific adsorption of ions at the electrode and the role that ion solvation spheres play in defining the inner- and outer-Helmholtz plane.<cit.> In this framework, the solution-side of the EDL is modelled as a series of plate capacitors; nonetheless, it is assumed that the finite size of charge carriers can be ignored in the diffuse layer. At low concentrations of simple salts—such as NaCl—in water, these simple mean-field models were suggested to provide a reasonable approximation of the EDL structure, <cit.> especially as charge transfer between the electrode and charge carriers in solution is low. <cit.> However, the combination of high salt concentrations and large surface charge densities results in conditions where the solvation, finite size and cooperative adsorption of ions cannot be neglected. More sophisticated mean-field models of the EDL were developed to address some of these effects. <cit.> Our recent simulations demonstrated how asymmetric electrolyte adsorption gives rise to alternating cation and anion-rich aqueous solution layers perpendicular to planar graphene and graphite substrates at moderate-to-high alkali chloride solution concentrations (∼1 M and above). <cit.> This behaviour is due to the partial saturation of ions in solution layers in contact with the surface that emerges in the EDL. <cit.> This picture of the EDL is reminiscent of the structures observed in ionic liquids at charged surfaces and requires a treatment of the EDL that accounts for the finite size of charge carriers accumulating at the interface.<cit.> The asymmetric ordering of ions results in charge fluctuations in this region—typically four-to-five liquid layers deep—and a departure from descriptions of the EDL expected from the established mean-field models described above.<cit.> Thanks to the adoption of the Constant Chemical Potential Molecular Dynamics (CμMD) method,<cit.> which maintains a constant thermodynamic driving force associated with ion adsorption, we were able to quantify the electric potential drop across the EDL and the excess chemical potential for ions at the solid-solution interface,<cit.> In CμMD, the use of an explicit molecular reservoir coupled to the model interface prevents any ion depletion in the bulk solution, which would otherwise occur in typical finite-sized MD simulations when ions adsorb at an interface. Here, we extend our analysis to consider concentrated NaCl(aq) solutions in contact with charged graphite and the resulting properties of the solution side of the EDL. In our analysis of the simulation results, we pay particular attention to the thermodynamic and structural properties of the solvent (as well as ion speciation). Understanding how the presence of ions and surface charge control the thermodynamics of solvent is essential to predict the activity of interfaces for applications in catalysis, and recent computational studies have demonstrated how interfaces impact the ability of the aqueous medium to screen Coulombic interactions due to a changing dielectric constant. 
<cit.> In what follows, we recap the effect of concentration on the structure and properties of ions in solution at neutral graphite before considering the combined effects of concentration and surface charge. We characterise the structural properties of water molecules in the EDL when compared to bulk solutions, pure liquid water, and ice. Finally, we evaluate the electrical properties of the EDL and use this information to calculate how the activity constants for ions and water change on moving from the bulk solution towards the graphite surface. § COMPUTATIONAL METHODS Following the protocol proposed by Finney et al. <cit.>, all simulations were performed using the Joung and Cheatham<cit.> force field to describe the interactions of ions with SPC/E water<cit.>. Graphite was modelled using the OPLS/AA force field,<cit.> while the intermolecular interactions between carbon and water were modelled using pairwise potentials fitted to water adsorption energies obtained via random phase approximation calculations.<cit.> A number of force fields are available to model the interactions of carbon with water, and comparisons of some of the different models are available in the literature. <cit.> The carbon-water model adopted here predicts a water contact angle of ∼ 40^∘, with small changes to this mean value dependent upon the number of carbon layers in the substrate and the truncation distance used for the interaction potential. <cit.> The contact angle is more acute than that predicted by earlier force fields; however, it was shown in experiments that graphene becomes less hydrophilic when exposed to air and the surface becomes populated by contaminants (hence, a smaller contact angle should be reproduced by the model than was initially thought). <cit.> The contact angles for pristine graphene and graphite were found to be 42 ± 7^∘ and 42 ± 3^∘, respectively.<cit.> A recent exhaustive computational study of the interaction energies of water with graphene using quantum mechanical calculations suggest an upper bound to the contact angle of water on graphene of 56^∘, as informed by dynamical simulations of coarse-grained water molecules at the carbon surface, where interaction potentials were fitted to the results from calculations at a higher level of theory.<cit.> Our model, therefore, captures reasonably well the thermodynamics of water at the carbon interface (as determined by the surface tension); furthermore, it predicts the correct radial breathing mode frequency for carbon nanotubes in water. <cit.> Ion-carbon interactions were modelled using potentials fitted to the results from electronic structure calculations that capture the polarisability of the carbon surface in the presence of ions surrounded by a conductor-like polarisable continuum, mimicking the presence of a solvent.<cit.> Despite components of the force field being constructed from various sources, it is important to recognise that consistent descriptions of ions and water molecules (i.e., Joung and Cheatham ions and SPC/E water) were used for the fitting of pairwise potentials throughout. The GROMACS 2018.6<cit.> MD engine was adopted to perform simulations within the NVT ensemble unless otherwise stated. Atom positions were evolved during the simulations using a leapfrog time integrator with a 2 fs timestep; as such, water intramolecular degrees of freedom were constrained using the LINCS algorithm. 
<cit.> Intermolecular interactions were computed for atoms within 0.9 nm, and long-range electrostatics were treated using the smooth particle mesh Ewald summation. <cit.> The temperature was held constant at 298 K (within fluctuations) using the Bussi-Donadio-Parrinello thermostat.<cit.> The simulation set-up for graphite in contact with NaCl(aq) solutions follows our previous work. <cit.> This involved preparing an eight-carbon layer 2.7 × 5.4 × 5.5 nm (x × y × z) graphite slab with basal surfaces perpendicular to the simulation cell x-axis. 1,672 Na^+ and Cl^- ions, as well as 13,819 water molecules, were placed in the orthorhombic simulation cell with periodic boundaries in all three dimensions. With carbon atoms fixed at their lattice positions, a 0.2 ns MD simulation was performed to relax the simulation cell volume in the NPT ensemble using the barostat of Berendsen et al.<cit.> at a pressure of 1 atm. Following this equilibration step, the ions in solution were accumulated in a reservoir region far away from the graphite surface(s) by applying an external harmonic potential to the distance between the surface and ions (using the PLUMED v2.5 plugin<cit.> with force constant, k=3 × 10^5 kJ mol^-1). The minimum distance between carbon atoms and ions in this external bias was 6 nm. Simulations in the NVT ensemble were performed until the ions were at least 5.9 nm from the carbon slab. The final configuration of the system from this preparatory phase was taken as the starting structure for 100 ns CμMD simulations, where the final 50 ns steady-state trajectory window in 100 ns simulations was used in all analyses of the interfacial properties. A similar procedure was used to prepare simulations of graphite in contact with pure water; here, however, no ions were included in the simulation cell, and it was not necessary to prepare the ionic reservoir. For all CμMD simulations, PLUMED 2<cit.> was utilised to compute the external forces required to control the solute density (n) in a 2.2 nm control region, whose innermost edge (closest to a graphite basal plane) was x_F=3.7 nm from the centre of the simulation cell x-axis, defining the origin. The CμMD force on ion i takes the functional form, F_i(x) = [k_i/(4ω)] (n_i^CR - n_i^t) [ 1 + cosh( (x-x_F)/ω ) ]^-1 where ω tunes the width of the force region and was taken to be 0.01% of the cell length in x; superscript CR and t indicate the instantaneous and target value of n in the control region; and k_i=2× 10^5 kJ mol^-1 is the force constant for the function that acts like a semi-permeable membrane for the ions. Standard MD simulations were performed to simulate graphite in contact with water, which we subsequently refer to as 0 mol dm^-3 (M). In the presence of ions, on the other hand, CμMD simulations were performed where the target ion density was 0.6022, 3.0110 and 6.022 nm^-3, equating to molar concentrations of 1, 5 and 10 M. When the simulations reached a steady state, the concentrations of ions were maintained at 1.2 ± 0.03, 5.01 ± 0.05 and 9.23 ± 0.07 M. The difference between the target and evaluated concentrations is small and is due to the relative occupancy of the ionic reservoir and the parameters used to apply CμMD forces; this is not particularly important for the current study, and the simulations can be prepared to ensure that concentrations precisely match the target value, if necessary.
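Outside of PLUMED, the restraining force above is simple to evaluate. The Python sketch below uses the parameters quoted in the text where available; the cell length supplied for ω is an assumption, and the function is an illustration rather than a reproduction of the PLUMED input used in this work.

import numpy as np

def cmumd_force(x, n_cr, n_target, x_f=3.7, k=2.0e5, box_x=14.0):
    # CmuMD force (kJ mol^-1 nm^-1) on an ion at position x (nm) along the cell x-axis.
    # n_cr: instantaneous number density in the control region (nm^-3)
    # n_target: target number density (nm^-3), e.g. 0.6022 for 1 M
    # x_f: position of the force region (nm); k: force constant (kJ mol^-1)
    # box_x: cell length in x (nm); an assumed value, not taken from the paper
    omega = 1.0e-4 * box_x          # omega is 0.01% of the cell length in x
    prefactor = k / (4.0 * omega) * (n_cr - n_target)
    return prefactor / (1.0 + np.cosh((x - x_f) / omega))

The force is only appreciable within a window of width of order ω centred on x_F, so it acts as a semi-permeable membrane that feeds ions from the reservoir into the control region whenever the instantaneous density falls below the target.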
Both 5 and 10 M cases are beyond the solubility for NaCl(s), determined to be approximately 3.5 M for the Joung and Cheatham force field; the solubility of halite is 3.7 mol kg^-1,<cit.> which is approximately 3.5 M according to a 2^nd order polynomial fitting of molarities (c) as a function of molalities (b) obtained from steady-state bulk NaCl(aq) solutions ranging from 1 to 16 mol kg^-1 (b=-0.0174c^2+0.9822c+0.0537). We explored the effects of applied surface charge in systems at all three sampled bulk solution concentrations as well as in simulations of graphite in contact with pure water. To achieve this, we applied uniform charges to the outermost carbon atoms in the graphite slab; equal charges with the opposite sign were applied to 1144 carbon atoms on each face of the graphite slab to generate charge densities, |σ| = 0.19, 0.39, 0.58 and 0.77 e nm^2. (We use the descriptors positively/negatively charged surface and positive/negative electrode interchangeably throughout.) As such, a single simulation provides information on the effect of positive and negative applied potentials by examining the interface on different sides of the graphite slab. The total charge in the simulation cell was, therefore, zero. We believe that this approach to applying charges to the surface is reasonable for the current study; indeed, when we tested a Drude oscillating charge model for the surface charge polarisability at 1 M, we did not observe significant differences in the properties of the interface when compared with the uniform distribution of charge to carbon centres discussed below. More sophisticated models of the surface charge polarisability have been developed; <cit.> although a comparison of the accuracy and applicability of these methods is beyond the scope of the current work. § RESULTS AND DISCUSSION §.§ Solution structure at charged graphite In this section, we discuss the salient features of the steady-state structure of NaCl(aq) solutions at graphite surfaces when negative and positive surface charges are applied to carbon atoms, equating to surface charge densities (σ) in the range 0 - ± 0.77e nm^2. The target bulk ion solution concentrations in our CμMD simulations here were 1, 5 and 10 M. In addition, we also perform simulations of neutral and charged graphite in contact with pure water. A snapshot of a typical CμMD simulation is provided in Figure <ref> A, where the ion-rich reservoir can be seen spanning the periodic boundaries, far from the carbon-electrolyte interface. As ions accumulate in the EDL, the solution in contact with the graphite surface is replenished with ions from the reservoir to maintain a constant thermodynamic driving force for the process, ensuring that the solution, several nanometres from the interface—which we refer to as the bulk—is electroneutral. In the following subsections, we focus our analysis on the steady state that emerges between species in solution in this bulk region and the EDL. Density profiles—concentration and charge effects The one-dimensional atom densities of solution species perpendicular to the graphite basal surface are reported in Figure <ref> B. In addition, Figures <ref>–<ref> provide the same densities on a linear scale over a wider range of Δ x. 
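Profiles of this kind amount to histogramming atom positions along x and normalising by the bin volume. A minimal Python sketch, assuming the coordinates have already been extracted from the trajectory (for example with MDAnalysis), is:

import numpy as np

def density_profile(x_positions, box_y, box_z, x_min, x_max, bin_width=0.02):
    # Number density profile rho(x) in nm^-3 perpendicular to the surface.
    # x_positions: (n_frames, n_atoms) array of atom x coordinates in nm
    # box_y, box_z: lateral box dimensions in nm
    edges = np.arange(x_min, x_max + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for frame in x_positions:
        h, _ = np.histogram(frame, bins=edges)
        counts += h
    bin_volume = bin_width * box_y * box_z
    rho = counts / (len(x_positions) * bin_volume)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, rho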
In line with our previous simulation studies<cit.> and those from others using different force fields and graphene,<cit.> the densities indicate a preference for cation adsorption in the first solution layer above the substrate; this is due to the favourable interactions between positively charged ions and the electron-rich carbon surface (implicitly captured by the force field) <cit.>. At the lowest concentrations, we observe a diffuse anion-rich solution layer adjacent to the first cation-rich layer. As discussed in detail by Finney et al. <cit.>, the asymmetric adsorption gives rise to a surface potential, the magnitude of which is governed by the sharp cation density in x in the first solution layer, even in the absence of an applied surface charge. This effective surface charge is screened by a diffuse anion layer and, at concentrations below 0.6 M, the concentration profile leading to such charge screening is qualitatively consistent with simple mean-field models of the EDL<cit.> However, at ∼ 1 M and above, additional cation and anion density peaks are observed in the EDL, and a complex multi-layered solution structure emerges due to the finite size and cooperative adsorption of ions that is explicitly captured by atomistic simulations and is apparent from blue curves in Figure <ref> B. This picture is reminiscent of the structure of ionic liquids at planar surfaces, where the finite size of charge carriers cannot be ignored. <cit.> These results highlight the inadequacy of simple mean-field models to predict the structure and, ultimately, the electrochemical properties of the interface for even simple electrolyte solutions at moderate to high ion concentrations. For a more detailed discussion of the solution structure at uncharged graphite, see the discussion by Finney et al. <cit.> In this work, we focus on the effect of varying graphite surface charges on water and ion atom densities in the EDL. Such variations are reported more clearly in Figure <ref>. In pure water, increasing the surface charge density to +0.77e nm^2 results in positive and negative changes to the water oxygen and hydrogen densities (ρ_O_w and ρ_H_w), respectively, in the first solution layer adjacent to the graphite basal surface. Essentially, the surface charge induces an increased ordering of water molecules locally in the vicinity of the positively charged carbon surface. At the negative electrode, however, we observe a restructuring of the liquid in the first two water layers as the magnitude of the surface charge, σ, increases. This can be explained by water molecules reorienting to increase the interactions between H-atoms, bearing a positive partial atomic charge, and the excess negative charge uniformly distributed amongst carbon atoms in the outermost graphite layer. The reorientation of water molecules manifests in the density profiles as a splitting of the first ρ_H_w peak into two peaks separated by ∼ 0.1 nm. The electrostatic repulsion of water oxygen atoms with the surface also displaces molecules in the first liquid layer, as shown by a decrease in ρ_O_w in the first peak and a shallower minimum in density between the first two water layers. Regardless of the sign or magnitude of σ, perturbations to the liquid structure encompass approximately three water layers, up to ∼ 1 nm from the graphite basal plane, consistent with other studies of carbon-solution interfaces. 
<cit.> Interestingly, as ions are added to the system, the amplitude for the fluctuations in water density at the interface somewhat diminish, although perturbations to the water structure cf. the bulk extend further into the solution as the magnitude of the applied surface potential increases. This can be explained by the screening of the surface charge due to the ions accumulating in the EDL. For example, at 10 M, the ordering of water is observed four-to-five water layers from the surface, and the spacing between peak centres in ρ_H_w decrease, most notably when σ is negative, due to the complex ion layering that is found under these conditions at the interface. This can be reconciled by considering how the surface displaces ions with associated water molecules in their solvation spheres—cations, in particular, have a relatively strong solvent coordination sphere, and so any change to the steady-state structure of ρ_Na will affect the density of water molecules in the EDL. Figures <ref> B and <ref> show that a single peak in ρ_H_w is observed in the first solution layer at all values of σ, denoting an inhibition of the water structuring found in the absence of electrolyte. Moreover, at 10 M, a large cation density in the first solution layer gives rise to a large increase in the local anion density, which is not apparent at 1 M; hence, the cooperative accumulation of ions displaces water in the second solution layer. Furthermore, the screening of the surface charge by ions at high concentrations mitigates any reorientation of water dipoles. Figure <ref> indicates that at low concentrations, the density of water in the first solution layer increases at the negatively charged surface due to an increased cation density; however, at 10 M, the density change is negative, demonstrating the complex restructuring of water that occurs in the EDL at high solution concentrations. The perturbation of the solution structure under the effect of increasing surface charge is analogous to an `accordion-like' deformation, where solution layers are compressed under the action of the additional Coulombic forces. This compression of the solution layers gives rise to further perturbations to the solution structure, as increased ion ordering propagates the effect of the surface charge into solution: a radically different picture of the EDL than those predicted by simple Poisson-Boltzmann-based models. This feature is also observed as a function of concentration, where partial saturation of the solution with ions in the first layers above the surface result in further deviations from the bulk structure moving away from the interface (see the comparison of the blue curves for ion density in Figure <ref> B). In general, we conclude that increasing the solution concentration and surface charge have analogous effects on the solution structure in the EDL. While it is possible to capture the effects of asymmetric adsorption and ion correlations in mean-field models of the EDL<cit.> and the role that water structuring can play in screening the surface potential,<cit.> the complex solution structure observed at the high concentrations and surface charges here suggests that an explicit model for atoms in the EDL is necessary to capture these cooperative, emergent effects. EDL relaxation in the presence of surface charge In order to study the collective dynamics of ionic species at the interface, it is useful to consider how the application of a surface charge changes the composition in the EDL as a function of time. 
As such, we performed five additional simulations at 1, 5 and 10 M, where the initial configurations were taken from 50, 60, 70, 80 and 90 ns time points in simulations where the graphite had no applied charge. In these simulations, however, we set σ to ± 0.77 e nm^-2 on opposite surfaces of the graphite slab. By starting from an equilibrated steady-state structure in the absence of surface charge, these simulations allow us to investigate how the EDL evolves in time when a surface charge is instantaneously applied. Figure <ref> A shows the average change in the number of cations and anions within 2.5 nm from the outermost carbon layer of the graphite surface. At all simulated concentrations, there is a rapid change in the number of ions in the EDL (Δ N_ions). Indeed, on the log scale provided, the change in Δ N_ions from zero is not apparent, as this occurs during the first 0.05 ns. The change is, therefore, extremely rapid as ions are displaced to minimise electrostatic forces. This behaviour might be expected for this system which is often adopted as a model system to study electric double-layer capacitors (EDLCs).<cit.> EDLCs offer a high power density, able to deliver and absorb electrical energy at a much higher rate than typical batteries through rapid charge/discharge cycles. The excellent cycling capability of EDLCS—which typically undergo millions of charge-discharge cycles (instigated by changing the applied potential)—is possible without significant degradation of the interface, making them attractive options for energy storage devices. <cit.> Despite the rapid change in Δ N_ions, it is clear from Figure <ref> A that a longer-timescale relaxation of the EDL composition develops over tens of nanoseconds. At 1 M, the preference for cations to adsorb in the first solution layer means that there is an asymmetry in the displacement of Cl^- at the negative electrode when compared with Na^+ at the positive electrode. This means that after 30 ns, Δ N^Na increases on both electrodes and so does Δ N^Cl, such that Δ N^Na_σ>0≈ 0. At all stages throughout the relaxation, the surface charge is screened by ions in the EDL (Δ N^Na_σ<0-Δ N^Cl_σ<0≈Δ N^Cl_σ>0-Δ N^Na_σ>0≈ |σ|). A different behaviour is observed as the concentration of ions is increased. At 5 M, Figure <ref> A shows that after 30 ns, the surface charge is almost completely screened by incorporation of cations or their removal from the EDL, whereas Δ N^Cl≈ =0, regardless of the sign of the applied potential. When the concentration is increased to 10 M, Δ N^Na_σ<0 increases and Δ N^Cl_σ<0 is positive; essentially, as more cations are accumulated in the EDL beyond the number necessary to screen the surface charge, ion-ion correlations also induce an increase to the number of anions in the EDL. Irrespective of this change in behaviour, the surface charge remains screened by changes to ion concentrations in the EDL. Figure <ref> B summarises the changes to the EDL ion concentrations. Here, we present the differences in absolute changes to the ion concentrations at the counter- (where the sign of the ion charge is opposite to the surface charge) and co-electrodes (where the sign of the charge on ions matches that of the surface charge) for Na^+ and Cl^- as |Δ N^Na_σ<0|-|Δ N^Na_σ>0| and |Δ N^Cl_σ>0|-|Δ N^Cl_σ<0|. The data indicate the affinity of the charged surfaces for cations or anions. 
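The Δ N_ions curves in Figure A reduce to counting, frame by frame, the ions that lie within 2.5 nm of the outermost carbon layer and subtracting the corresponding uncharged steady-state value. A minimal sketch (array names are illustrative):

import numpy as np

def edl_ion_count(x_ion, x_surface, cutoff=2.5):
    # Number of ions within `cutoff` (nm) of the outermost carbon layer, per frame.
    # x_ion: (n_frames, n_ions) array of ion x coordinates (nm)
    # x_surface: x coordinate of the outermost carbon layer (nm)
    return (np.abs(x_ion - x_surface) <= cutoff).sum(axis=1)

# Illustrative usage: Delta N relative to the uncharged steady state
# dN_na = edl_ion_count(x_na, x_c) - n_na_uncharged
# dN_cl = edl_ion_count(x_cl, x_c) - n_cl_uncharged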
A value of zero in Figure <ref> B indicates equivalent displacement of the ions at the positive and negative electrode, which is the case only when c(NaCl) is 5 M. At 1 M, on the other hand, the surface can accumulate a net excess of ions in equal amounts at both positive and negative σ. Finally, at 10 M, the net accumulation of cations exceeds that of anions when comparing surfaces with opposite charges. Together, these data highlight the multifaceted relaxation of the EDL structure due to asymmetric ion effects and cooperative changes that result, which are typically neglected in analytical models of the EDL. §.§ Ion association in solution In our previous work, we demonstrated how correlations in bulk solutions give rise to liquid-like NaCl assemblies, also referred to as clusters (across the concentration range sampled in this work), that can reach substantial sizes at high concentrations.<cit.> Essentially, NaCl(aq) solutions become increasingly non-ideal as the solution concentration is increased. This results in small ion associates containing up to three or four ions at ∼ 1 M, but these clusters can encompass hundreds of ions at the high end of solution concentration, i.e., ∼ 10 M. Experimental studies of levitated droplets recently found large liquid-like NaCl clusters at high supersaturations in NaCl(aq), with MD simulation results—using the same force field as the one adopted here—supporting the experimental observations.<cit.> The authors speculated that these clusters could play a role in NaCl crystallisation, which was later shown to be the case in simulations of high concentration metastable solutions from our own studies of homogeneous NaCl(aq) solutions,<cit.> and in solutions at and beyond the limit of solution stability.<cit.> Cutting-edge experiments have also demonstrated that NaCl crystals can emerge from disordered ion associates confined to aminated conical carbon nanotubes.<cit.> In addition, amorphous NaCl solids have been isolated using supersonic spray-drying techniques, where the rapid removal of water from dense ion assemblies occurs before crystal nucleation can occur.<cit.> It was recently shown, using a sophisticated machine learning force field trained on ab-initio MD simulation trajectories, that the final stage during NaCl dissolution involves the dissipation of amorphous ion clusters, <cit.> and MD simulations using the Joung-Cheatham force field also indicate this mechanism for cluster dissolution. <cit.> These studies combined, therefore, suggest that disordered ion assemblies are potentially involved in both the formation and dissolution of solid NaCl and that the Joung-Cheatham force field can capture these mechanisms reasonably well. In the presence of graphite, we demonstrated how disordered ion assemblies are stabilised in the EDL due to the increased ion densities in this region and postulated that by catalysing the formation of these clusters, surfaces might control the pathway for NaCl crystallisation. <cit.> In this section, we explore how applied surface charges affect liquid-like ion assemblies in the EDL. We determined the connectivity between ions in their first coordination sphere using a truncation distance based on the first minimum in the radial distribution functions between pairs of atoms (∼ 0.35 nm) and a continuous rational switching function to smoothly decorrelate ions according to this definition (details are provided in the files that can be obtained by the link in Supporting Information). 
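A PLUMED-style rational switching function of the kind described here can be sketched as follows; the r^0, p and q values are those quoted later in the text for f_s and should be read as indicative rather than as the exact parameters used in the cluster analysis.

import numpy as np

def rational_switch(r, r0=0.35, p=50, q=100):
    # Continuous switching function: ~1 well inside the first coordination
    # sphere (r << r0) and decaying smoothly to ~0 beyond it.
    # Note: at r == r0 the expression is 0/0; its limit is p/q, which
    # PLUMED handles explicitly.
    x = np.asarray(r, dtype=float) / r0
    return (1.0 - x**p) / (1.0 - x**q)

def smooth_coordination(r_pairs, **kwargs):
    # Smooth Na-Cl coordination number of one ion, given the distances (nm)
    # to all counter-ions.
    return rational_switch(r_pairs, **kwargs).sum()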
Figure <ref> A shows a typical cluster residing in the first solution layer at an uncharged graphite surface when c(NaCl) is 5 M. These clusters evolve their topology over ∼ps timescales due to density fluctuations in solution. <cit.> Figure <ref> B provides the average cation-anion coordination number, < CN_Na-Cl>, in the EDL when compared to the bulk values. At 1 M, there is no clear surface effect; however, as the bulk solution concentration increases, the coordination of ions in the EDL exceeds the bulk, in line with our previous observations. <cit.> When a surface charge is applied, Figure <ref> B shows a clear bias for higher levels of ion coordination on the negative electrode. At 5 and 10 M, there is a monotonic increase in the average coordination number when σ<0, while this remains roughly constant when σ>0, matching bulk values at 5 M, but increasing compared to the bulk at 10 M. The asymmetric ion coordination can be reasoned by considering the changes in the EDL atom densities as a function of σ, as provided in Figure <ref>. The affinity for cations to adsorb in the first solution layer increases as negative charges are applied to carbon atoms, and this results in increasing ρ_Cl in the vicinity of the surface, particularly as the bulk solution concentration is increased. This effect is most apparent at 10 M, where the first peaks in both ρ_Na and ρ_Cl increase substantially, in contrast to the positively charged surface, where the increase in the first ρ_Cl peak is a small fraction of that on the negative electrode and Δρ_Na for the first peak is negative. Because the highest atom densities occur close to the surface in the EDL, increasing the density in these regions means that the distribution of ions at the negative electrode is more disproportionate than at the positive electrode (see also Figure <ref>), facilitating greater ion coordination close to the surface. Figure <ref> C provides the maximum cation-anion coordination number: max(CN_Na-Cl). The plot indicates that there is a greater propensity to form ion pairs in the EDL than in the bulk solution, but there is no clear surface charge effect at the lowest concentration. Two-coordinate cations are most likely to be observed at 5 M, and this increases to three-fold cation-anion coordination in the EDL when σ≪ 0, indicating a change from linear ion coordination to branched coordination, consistent with changes to the structure of clusters that are found on increasing ion concentrations at uncharged graphite. <cit.> The complex, multi-layered solution structure emerging at 10 M means that very high levels of ion coordination are observed in the EDL, irrespective of the sign of the applied charge. A value of CN_Na-Cl=6 is consistent with the levels of coordination in the rock salt crystal structure. The fact that the max(CN_Na-Cl) values approach this limit in the EDL at 10 M supports the hypothesis that crystallisation is promoted in this region, with order emerging from the liquid-like clusters. Indeed, when σ=+0.58 e nm^-2, a high ion density, anhydrous region of the extended cluster could potentially progress to a close-packed crystal structure. The solutions are highly metastable at this bulk concentration (using the adopted force field, the bulk solutions are metastable at 3.7-15 mol kg^-1 <cit.> which is approximately 3.5-10.9 M). Nonetheless, crystal nucleation is a rare event that is unlikely to occur over the simulation times sampled in this work. 
To evaluate the size of the ion clusters, we performed a graph analysis to identify the subsets of connected components, considering ions as nodes in the graph.<cit.> Figure <ref> D provides the average largest cluster size as a function of the total number of clusters at each concentration. This confirms that ion pairs are likely to form in the EDL at 1 M. When the concentration of ions is increased to 5 M, we observed clusters in the EDL containing around four ions, and the number of these clusters substantially increases when σ<0. At 10 M, many clusters are observed in all regions of the solution; however, the largest clusters are typically found in the EDL. Moreover, the EDL at the negatively charged graphite surface contains clusters which are significantly larger than the largest clusters observed at the positively charged surface. A snapshot of one of these extended ion networks is provided inset in Figure <ref> B; this highlights the chemical heterogeneity and liquid-like quality that is typically observed in the assemblies. §.§ Water structure at graphite Following the evaluation of how surface charge and concentration change the structural properties of ions at the interface, in this section, we discuss how the interface affects the microscopic water structure in the EDL when compared to the bulk solution. To this aim, we evaluated variables which are functions of the positions of water O atoms that quantify the relative order of the molecules, as well as the H-bond network in the solvent. It is useful to compare these analyses of the CμMD simulations to liquid and solid forms of water. As such, additional simulations of bulk liquid water, cubic ice (ice I_c) and hexagonal ice (ice I_h), containing 4,000, 2,744 and 2,880 molecules, respectively, were performed for 5-10 ns. Water ordering at the interface To quantify the local ordering of water molecules approaching the solid-liquid interface, we computed the approximate two-body excess entropy (S_2), adopting the position of oxygen atoms in water as a proxy for the centre of mass of the molecules:<cit.> S_2 = -2 π ρ_O_w k_B ∫_0^r_lim r^2 [g(r) ln g(r) - g(r) + 1] dr here, k_B is Boltzmann's constant, N_O_w is the total number of water oxygen atoms, and ρ_O_w is the atom density in the simulation cell. g(r) is a radial pair distribution function of the distances, r, between i and j pairs of water O atoms: g(r) = [1/(4 π N_O_w ρ_O_w r^2)] ∑_i^N_O_w ∑_j ≠ i^N_O_w [1/(√(2 π) ξ)] exp( -(r-r^ij)^2/(2 ξ^2) ) We chose r_lim to be 0.5 nm and the broadening parameter, ξ = 0.015. We obtained local averages<cit.> of S_2 according to, S_2 = (1/N_O_w) ∑_i^N_O_w [ (S_2^i + ∑_j^N_O_w f(r^ij) S_2^j) / (1 + ∑_j^N_O_w f(r^ij)) ] Here, f(r^ij) is a sharp but continuous switching function that identifies water molecules in the first coordination sphere according to, f_s(r^ij) = [1 - (r^ij/r^0)^p] / [1 - (r^ij/r^0)^q] where r^0=0.35 nm, p=50 and q=100. Figure <ref> A provides the approximate excess entropy probability distributions, f(S_2), when c(NaCl) is 5 M at the most extreme values of σ (± 0.77 e nm^-2). All liquid water states are clearly separated from the S_2 values calculated for solid water phases, demonstrating that the variable adopted differentiates water molecules with different levels of local order. The presence of ions leads to a shifting of the median S_2 to larger values when compared to the case of pure water. In the context of the variable adopted, this suggests that the water network in solution is less ordered than in liquid water.
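For completeness, a per-molecule estimate of the two-body excess entropy defined above can be sketched as follows; distances and densities are assumed to be in nm and nm^-3, and the value is returned in units of k_B.

import numpy as np

def pair_entropy_s2(r_neighbours, rho, r_lim=0.5, xi=0.015, n_bins=500):
    # Approximate two-body excess entropy S_2 (in units of k_B) for one
    # water molecule, from the distances to its neighbouring O atoms.
    r = np.linspace(1.0e-3, r_lim, n_bins)
    # Gaussian-broadened, single-molecule estimate of g(r)
    g = np.zeros_like(r)
    for r_ij in r_neighbours:
        g += np.exp(-(r - r_ij) ** 2 / (2.0 * xi**2)) / (np.sqrt(2.0 * np.pi) * xi)
    g /= 4.0 * np.pi * rho * r**2
    integrand = np.ones_like(g)            # g*ln(g) - g + 1 -> 1 as g -> 0
    mask = g > 0.0
    integrand[mask] = g[mask] * np.log(g[mask]) - g[mask] + 1.0
    return -2.0 * np.pi * rho * np.trapz(r**2 * integrand, r)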
In the EDL, the distributions are shifted to higher values of S_2, particularly so in the case of the positive electrode, indicating further loss of order when compared to molecules in the bulk solution. In these analyses, the EDL was taken to be the region above the carbon surface encompassing only the first two solution layers; hence, this is the region of the double layer where the water structure is most perturbed when compared to the bulk. As shown in Figure <ref> B, which provides f(S_2) in slices throughout the entire simulation cell x-axis, the local structure of water is only significantly perturbed in the immediate vicinity of the carbon substrate. Given this observation, it is important to assess how significant the presence of ions and surface charge change the water structure, as opposed to the excluded volume effects associated with the water void space occupied by the graphite slab. Clearly, Figure <ref> A identifies a surface charge effect, but to consider the role that ions play in changing the local water order, we computed additional variables, namely the third-order Steinhardt bond orientational order parameter (q_3) <cit.>, as well as its local (lq_3) and local average (q_3) values, for all water molecules comprising the bulk solution. For the functional form of q_3 and lq_3, please refer to the PLUMED documentation. <cit.> These variables were previously combined with S_2 to identify water order in different physical states. <cit.> All of the distributions for these variables, provided in Figure <ref> A, indicate a small deviation from the pure water case, although this is minimal when compared to ice and amorphous water (liquid water crash cooled to 100 K during a 10 ns simulation). From this analysis, we can conclude that the surfaces have a greater effect on the local ordering of water molecule centres than the presence of structure-breaking ions. Water H-bonds The intermolecular structure of water is usually described in terms of the H-bond network. We, therefore, calculated—using MDAnalysis<cit.>—the number of H-bonds per water molecule, n_H, in the bulk and EDL regions of all simulations using a simple geometric criterion for these bonds of the type, O_D—H⋯O_A, where D and A subscripts refer to the H-bond donor an acceptor, respectively. H-bonds were assigned when O_D and O_A were within 0.3 nm and the angle, ∠O_DHO_A > 150^∘. Figure <ref> provides the distributions for H-bond distances and angles in the EDL and bulk regions of CμMD simulations, as well as for ice I_h and pure liquid water. Figure <ref> B shows that irrespective of the applied surface charge, the mean n_H for water in the bulk and EDL regions of CμMD simulations in the absence of ions are close to the values in pure liquid water. The small deviation from the homogeneous liquid case is likely to result from the excluded volume effects associated with the selection of water molecules in regions of the simulation cell x-axis that creates (artificial) excluded volumes, even in the case of what we describe as the bulk. Slightly more than one H-bond per water molecule is observed, which is understandably lower than the expected value of two for ice I_h. In contrast to the structural variables discussed above, n_H is far more sensitive to the solution concentration and less sensitive to the magnitude and sign of σ. At 1 M, we find a difference between n_H in the EDL when compared to the bulk by around n_H=0.1. 
This difference was approximately the same at all levels of concentration; however, n_H in all regions of the simulations decreased as the concentration of ions increased. At 10 M, n_H is approximately 0.35-0.4, suggesting a near complete breaking of the H-bond structure at the highest concentrations. The presence of ions and their assemblies, as well as associated local electric fields, greatly perturbs the water structure from the pure solvent, which ultimately has implications for these systems and their performance as conductors of electrical charge. The lifetime for H-bonds was determined according to the autocorrelation function (ACF): ACF(τ) = < H_ij(t_0) H_ij(t_0 + τ)/H_ij(t_0)^2> where τ is a time lag in the data, t_0 indicates a time origin and H_ij signifies the presence of an assigned H-bond between molecules i and j, taking a binary value of zero or one according to the H-bond distance and angle cut-offs described above. The H-bond lifetimes are provided in Figure <ref> C for pure liquid water, ice and the bulk and EDL regions of simulations at 0 and 10 M with σ = ± 0.77 e nm^-2. In the absence of ions, the H-bond lifetime for water in the bulk and at the negative electrode is identical to the lifetime of H-bonds in pure water. At the positive electrode, however, the H-bond lifetime is extended upon the application of a large surface charge. This is most likely due to the increased binding strength of water O atoms to the charged graphite surface. At 10 M, we find that the H-bond lifetimes in the bulk are extended cf. 0 M, and there is a divergence in the lifetimes in the bulk and at the negative electrode. These lifetimes, however, are still lower than the average H-bond lifetime at the positive electrode, which can extend to ∼ 10^2 ps. Simulations indicate that the orientation of water molecules at charged surfaces determines the propensity for heterogeneous ice crystallisation, with positively charged silver iodide surfaces suggested to promote ice nucleation. <cit.> A stabilisation of the H-bond lifetime at positively charged surfaces potentially has additional ramifications on the ability of these charged substrates to promote ice nucleation. In light of our findings, it would be useful to test how these phenomena and the presence of ions in solution control ice crystallisation rates. Water dipole moments We characterised the orientation of water molecules with respect to the graphite surface plane by computing the angle between water molecule dipole moments and the normal to the surface, θ_D. The calculation was implemented such that if the dipole moment was perfectly perpendicular to the surface normal and pointing towards the surface (see the water dipole moment arrow inset of Figure <ref> D), the value of θ_D is zero and 180^∘ if the dipole moment vector points away from the surface. θ_D=90^∘, therefore, indicates water molecules with dipole moments that are, on average, aligned parallel with the basal surface, and/or indicating no preference for the orientation of water molecules at the surface. Figure <ref> D provides the mean θ_D as a function of Δ x. Before discussing the effect of surface charge and solution concentration on these results, it is useful to discuss an important feature of the curves at 0 M. In these simulations, no ions are present, and we did not adopt CμMD to study the effect of charged surfaces on the water structure. As such, there is no ionic reservoir in this system. 
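Returning briefly to the H-bond lifetime analysis, one common reading of the autocorrelation function defined above (averaging the survival of bonds that are present at the time origin) can be sketched as:

import numpy as np

def hbond_acf(H, max_lag):
    # Intermittent H-bond autocorrelation.
    # H: (n_frames, n_pairs) boolean array; H[t, k] is True if pair k
    #    satisfies the geometric H-bond criterion in frame t.
    H = H.astype(float)
    acf = np.empty(max_lag)
    acf[0] = 1.0
    for tau in range(1, max_lag):
        denom = H[:-tau].sum()
        acf[tau] = (H[:-tau] * H[tau:]).sum() / denom if denom else 0.0
    return acf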
When equal but opposite signs of surface charge are applied to opposite faces of the graphite slab, this induces an electric field that spans the periodic boundaries in x. This effect is evident in the θ_D curves at 0 M, where the neutral graphite case (see the symmetric blue curves at both surfaces) indicates that θ_D=90^∘ in the bulk but where positive and negative deviations in the mean θ_D are found at the positive and negative electrode, respectively. It is possible to avoid these electrical artefacts by removing the periodic boundaries in x and/or by including artificial electrical insulating layers parallel to the graphite surface. For the purposes of this study, this was not necessary, and, importantly, these effects are removed by the presence of the ionic reservoir. In terms of the discussion of θ_D at 0 M, we compare the features of the distributions in the EDL with respect to the values in the bulk in order to understand how the surface controls the orientation of water; furthermore, we do not believe that the dipole associated with the graphite slab has a significant effect on the analysis of EDL solution thermodynamics, discussed in the following section. The θ_D distributions indicate that the surface, in the absence of ions, leads to no significant ordering of water molecule dipole moments with respect to the surface plane, as expected with the adopted force field<cit.> and shown from simulations elsewhere. <cit.> Furthermore, negative surface charges give rise to increased θ_D in the first solution layer(s), confirming that the water molecule tends to point away from the surface (compared to the bulk) in order to maximise the interactions between H atoms and the surface. At the positive electrode, a maximum occurs at Δ x ≈ -0.5 nm; here, there is a depletion in the water density (see Figure <ref>), which perhaps allows water to restructure more freely to screen the surface potential. In the presence of ions, θ_D angles are obtuse in the first two solution layers, regardless of the sign of the surface charge; however, maximum values are observed on the negative electrode. In all curves, a minimum occurs at Δ x ≈ -0.65 nm, which is where the second peak in water density profiles is observed when moving away from the surface. An acute θ_D was also observed for 1 M solutions in contact with graphene using quantum mechanical MD simulations. <cit.> This result indicates that the mean orientation of water dipoles in the first and second solution layers differs by 40-50^∘. The complex EDL structure is evident in the θ_D curves, notably at 10 M, where large fluctuations in the angle distributions are evident within 0.5 nm from graphite. Finally, very little ordering of water is observed beyond ∼ 2 nm from the substrate at the highest concentrations. §.§ Solution thermodynamics As well as the capability to undergo rapid charge/discharge cycling, carbon-electrolyte interfaces can be exploited for applications to promote chemical reactions. <cit.> Understanding the thermodynamic properties of the interface is essential in this regard. In the following section, we characterise the electrochemical properties of the interface, with a focus on the solution side of the EDL. Electric potential at the interface The capacity for the interface to store charge, C=σ/Δψ^0, where Δψ^0 is the electric potential change (usually termed the `potential drop') across the interface with an applied surface charge minus the potential drop in the absence of a surface charge. 
Poisson's equation relates the potential drop to the charge density (ρ_q) according to, d^2ψ(x)/dx^2 = -dE(x)/dx = -ρ_q(x)/ε where E is the electric field and ε is the permittivity of the medium. Here we take ε = ε_0, the permittivity of free space, because the full solution charge density is used in the calculation. It is important to recognise that this analysis provides the capacitance associated with the solution side of the EDL in contact with a uniform surface charge. An additional contribution to the total capacitance comes from the density of electron states in the substrate, which represents a minor contribution to the interfacial capacitance under the conditions studied. <cit.> Figure <ref> provides ψ(x) for all systems and all applied charges. In the case of pure water, we find that the potential drop across the interface, defined as Δψ = ψ(Δ x=0)-ψ^b (where ψ^b is the electrostatic potential in the bulk), in the absence of surface charge is 0.23 V. As the surface charge is increased to ± 0.77 e nm^-2, Δψ was calculated as -1.2 V and 2.6 V on the negative and positive electrode, respectively. This result is due to water H atoms interacting favourably with the carbon substrate. Two minima are observed in Figure <ref> A, in accordance with the maxima in the water density profiles (see Figure <ref>). The application of surface charge perturbs these densities, as discussed above, and this also changes the position of the first minima in ψ(x) (from the surface) and makes the second minimum shallow when σ< 0. The presence of ions induces additional fluctuations in the ψ(x) when compared to the pure water case. These extend to around 1.5 nm from the carbon surface at the highest concentrations, although the amplitude of the fluctuations is not particularly correlated with concentration, due to the fact that local electric fields are determined by the total solution charge density. Some notable features of the ψ(x) curves are the fact that cation adsorption in the absence of surface charge increases Δψ to ∼ 0.42  V, in good agreement with studies of aqueous solutions at carbon surfaces using different models for the interface. <cit.> In addition, a minimum emerges at Δ x ≈ -0.3 nm at the positive electrode, associated with a depletion of cations and accumulation of anions. Furthermore, a deep minimum is observed at 10 M when Δ x ≈ -0.4 nm on the negative electrode, which can be attributed to the increasing anion concentration and water restructuring that occurs at this surface. When σ=+0.77 e nm^-2, Δψ at the positive electrode ranges from 2.85-2.9 V across the range of concentrations investigated. Using the expression for capacitance reported above, this equates to a solution-side capacitance of ∼5 µF cm^-2. Δψ at the negatively charged electrode, instead, ranges from -0.92 to -1.34 V, corresponding to a capacitance of C=7-9 µF cm^-2, indicating that graphite has a greater capacity to store charge when negative charges are applied. C evaluated in these simulations is consistent with estimates from experiments. <cit.> Moreover, the increased capacity to store ionic charge at the negative electrode is consistent with results elsewhere. <cit.> A result worthy of note is that the concentration of ions has very little effect on the ability of graphite to store charge over the range of molarities considered, which was also found for graphene. 
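In practice, ψ(x) is obtained by integrating the simulated charge density twice, after which the potential drop and the solution-side capacitance follow directly. A sketch with scipy, assuming the charge density has already been converted to SI units and that the first grid point lies in the bulk solution, is:

import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.constants import epsilon_0

def potential_profile(x, rho_q):
    # psi(x) in volts from the solution charge density rho_q(x) in C m^-3,
    # with x in m; E and psi are set to zero at the first grid point.
    E = cumulative_trapezoid(rho_q, x, initial=0.0) / epsilon_0   # dE/dx = rho_q/eps0
    psi = -cumulative_trapezoid(E, x, initial=0.0)                # dpsi/dx = -E
    return psi

def capacitance(sigma, dpsi, dpsi_neutral):
    # Solution-side capacitance C = sigma / (dpsi - dpsi_neutral).
    return sigma / (dpsi - dpsi_neutral)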
<cit.> Indeed, increasing the charge capacity of the carbon-solution interface typically requires tuning the properties of the solute and the substrate to increase the overall capacitance of the system, rather than simply changing the concentration of the solution. <cit.> Ion activity coefficients The chemical potential of species i, μ_i, is defined as the change in free energy associated with a variation in the number of i molecules and represents the ability of that species to undergo a physical-chemical transformation. In the presence of an electric field, when i is a charged species, the same information is captured by the electrochemical potential, which accounts for the additional energetic contributions to insert/remove a charged particle to/from the system: μ̃_i = μ_i^0 + RT ln a_i + z_iF ψ = μ_i^0 + RT ln m_i + RT lnγ_i + z_iF ψ In the above equation, μ^0 is a reference chemical potential. The second term on the right of the equation provides the energy associated with particle exchange in non-ideal solutions, where R, T and a indicate the gas constant, temperature and solute activity, respectively. This term can be expanded to account for the ideal and excess chemical potential, which are functions of the total concentration, in this case, the solution molality, m (strictly, this is a unitless quantity defining the mole fraction of solute in solution compared to the standard state of 1 mol/kg), and the activity coefficient, γ. The final term defines the work to transfer a particle with charge z into the system with electrostatic potential, ψ. Faraday's constant, F, ensures that the term has the correct energy units. In order to determine μ̃_i for ions in our simulations, an activity model is required. Zimmerman et al. provided an analytical formula to calculate μ for ions in NaCl(aq) as a function of ion molality, m_ion, by fitting to simulation data: <cit.> μ_ion = μ_ion^0 + 2RT ln m_ion + 2RT lnγ_ion where, log_10( γ_ion) = a √(m)/(1 + b √(m)) + cm In these equations, μ^0_ion = -391.6 kJ mol^-1,<cit.> a=0.568 mol^-1/2 kg^1/2, b=1.17769 mol^1/2 kg^-1/2 and c=0.177157 mol^-1 kg. It is important to recognise that this activity model assumes changes to the solution density and dielectric constant are only a function of the solution composition. This model can be extended<cit.> to account for the effect of electric fields and associated varying ion molalities that occur on approach to the graphite surface according to, μ̃_ion(x) = μ_ion^0 + RT ln m_Na(x) + RT lnγ_ion(m_Na(x)) + ω F ψ(x) + RT ln m_Cl(x) + RT lnγ_ion(m_Cl(x)) - (1- ω)F ψ(x) where subscript labels indicate Na^+ or Cl^- molalities and ω(x) = m_Na(x)/(m_Na(x)+m_Cl(x)). In the limiting case where the molalities of cations and anions are equal locally, Equation <ref> reduces to Equation <ref>. For the case of 1 M NaCl(aq), we determined μ̃_ion(x) ≈ -392 kJ mol^-1 in the bulk where ψ(x)=0, in good agreement with the expected chemical potential from the model by Zimmerman et al.<cit.> for homogeneous solutions with m_ion≈ 1.3 mol kg^-1. γ_ion=0.9 under these conditions, which we approximate to a value of one for the subsequent analyses, such that μ̃^b_ion≈μ_ion^0, with μ̃^b_ion representing the electrochemical potential of ions in the bulk. The energy change associated with 2RT lnγ_ion when γ_ion=0.9 is ∼ 0.5 kJ mol^-1.
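The activity model and the position-dependent electrochemical potential above are straightforward to implement. The sketch below uses the constants quoted in the text and should be read as an illustration rather than as the analysis code used in this work.

import numpy as np

R = 8.31446e-3   # kJ mol^-1 K^-1
F = 96.485       # kJ mol^-1 V^-1 (Faraday constant in energy-per-volt units)
A, B, C = 0.568, 1.17769, 0.177157   # activity-model constants from the text
MU0_ION = -391.6                     # reference chemical potential, kJ mol^-1

def log10_gamma_ion(m):
    # Mean ionic activity coefficient of NaCl(aq) at molality m (mol/kg).
    return A * np.sqrt(m) / (1.0 + B * np.sqrt(m)) + C * m

def mu_ion_bulk(m, T=298.0):
    # Chemical potential of the ion pair in a homogeneous solution (kJ/mol).
    ln_gamma = np.log(10.0) * log10_gamma_ion(m)
    return MU0_ION + 2.0 * R * T * np.log(m) + 2.0 * R * T * ln_gamma

def mu_ion_profile(m_na, m_cl, psi, gamma_na, gamma_cl, T=298.0):
    # Position-dependent electrochemical potential (kJ/mol); psi in volts.
    omega = m_na / (m_na + m_cl)
    return (MU0_ION
            + R * T * (np.log(m_na) + np.log(gamma_na))
            + R * T * (np.log(m_cl) + np.log(gamma_cl))
            + (2.0 * omega - 1.0) * F * psi)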
In our simulations, the chemical potential of ions and water as a function of x in the steady state is constant.<cit.> Therefore, with knowledge of the ion molalities and electric potential in the EDL, Equation <ref> can be rearranged to determine how the presence of the surface—where the density of ions and dielectric constant of the solution are changing compared with the bulk—affects the activity of ions as captured by γ_ion: -RT ln[ γ_ion(m_Na(x)) γ_ion(m_Cl(x) ) ] = RT [ ln m_Na(x) m_Cl(x) ]+ (2ω -1)F ψ(x) We label this quantity Δμ̃^R_ion. Figure <ref> A provides -Δμ̃^R_ion/RT= 2 ln(γ_ion) at graphite with varying charge density when c(NaCl)=1 M. On approach to the surface, there is a small minimum around Δ x = -0.8 nm when σ=0, which is consistent with the position of a second cation-rich solution layer above the surface. This is followed by a gradual decrease in 2 ln(γ_ion) towards a second minimum around Δ x = -0.4 nm. This minimum resides between the maxima for the densities of the first cation- and anion-rich solution layers in the EDL (see Figure <ref>). When a positive charge is applied to graphite, as shown in the bottom panel of Figure <ref> A, 2 ln(γ_ion) becomes more negative to around -25RT; this is due to the positive surface charge pushing and pulling Na^+ and Cl^- away from and towards the surface (see Figure <ref>). In addition, there is a small increase in 2 ln(γ_ion) at Δ x = -0.65 nm, due to a relatively high mole fraction of Na^+ in this region. On the negatively charged surface, the applied potential displaces Cl^-, which leads to a decrease in the anion density at Δ x = -0.4 nm and an increase at Δ x = -0.55 nm compared to the case when σ=0 (see Figure <ref>); this results in positive and negative increases to 2 ln(γ_ion), respectively. Figure <ref> provides the contributions to Δμ̃_ion(x) for a single case where c=1 M and σ=+0.77 e nm^-2. This shows that it is essential to account for the effect of the surface excluded volume and charge on the structure of the solution when determining the activities of ions. In particular, in the presence of an applied surface potential, the contribution to Δμ̃_ion(x) from the electric potential drop can be as significant as the changing mole fraction of solute at the surface. Water activity coefficients We now consider how charged surfaces affect water activity coefficients in the EDL when compared to the bulk. To determine the chemical potential of SPC/E water in NaCl(aq) without an accurate activity model, we make use of the fact that in our CμMD simulations, the chemical potential for water molecules in the EDL and bulk are equal. We can estimate the potential of mean force (𝒲_wat) to transfer water molecules from the bulk to the EDL according to, Δ𝒲_wat(x) = -RT ln( p_wat(x)/p_wat^b) where p_wat represents the probability density of observing water molecules at position x, and the superscript b indicates the probability density at a point in x representative of the bulk solution. As such, Δ𝒲_wat provides a proxy for Δ A: the Helmholtz free energy change for the transformation under question. Given that Δ A = n Δμ_wat = -RT ln K, where n is the number of moles of water and K=a_wat(x)/a_wat^b (where a indicates activity) which is the equilibrium constant for the reaction, we can also write, Δ𝒲_wat(x) = -RT ln( χ_wat (x)/χ_wat ^b) -RT ln( γ_wat(x)/γ_wat^b) = Δμ_wat^I + Δμ_wat^R where χ_wat and γ_wat are the mole fraction and activity coefficient for water molecules in solution, respectively. 
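Numerically, the water potential of mean force and the corresponding activity-coefficient ratio follow directly from the steady-state density and mole-fraction profiles; a minimal sketch is:

import numpy as np

R = 8.31446e-3  # kJ mol^-1 K^-1

def water_pmf(p_x, p_bulk, T=298.0):
    # PMF (kJ/mol) to move a water molecule from the bulk to position x,
    # from the ratio of the local to bulk water probability densities.
    return -R * T * np.log(p_x / p_bulk)

def ln_gamma_ratio_water(p_x, p_bulk, chi_x, chi_bulk, T=298.0):
    # ln(gamma_wat(x)/gamma_wat^bulk): subtract the ideal (mole-fraction)
    # contribution from the PMF and divide by -RT.
    w = water_pmf(p_x, p_bulk, T)
    mu_ideal = -R * T * np.log(chi_x / chi_bulk)
    return -(w - mu_ideal) / (R * T)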
Hence, by combining equations <ref> and <ref> we can evaluate Δμ_wat^R, which indicates how γ_wat changes in comparison to γ_wat^b. Figure <ref> B–D provides -Δμ_wat^R(x)/RT for systems where the ion concentration varies from 1 to 10 M. At 1 M, two minima are observed in ln(γ_wat/γ^b_wat) at Δ x = -0.5 and -0.75 nm that are within RT of the bulk value. The contributions to Δ𝒲(x) in the case where c=1 M and σ=+0.77 e nm^-2 are provided in Figure <ref>, which indicate that the first minimum from the surface arises due to a relatively high value of Δμ_wat^I(x) in this region, associated with a decreased water mole fraction, as well as effects associated with the electric fields close to the carbon basal plane. The minimum at Δ x = -0.75 nm, however, occurs in a region where the water mole fraction is not significantly different from the bulk value and is due to the structuring of ions and electric fields locally. As the magnitude of the applied charge increases, small shifts to the position of the minima occur, associated with changes to the water density.

The maximum in ln(γ_wat/γ^b_wat) occurs around Δ x = -0.3 nm; this coincides with the minimum in Δ𝒲, and represents the first water layer adsorbed at the graphite surface (see Figures <ref> and <ref>). This indicates that γ_wat is 3-4 times greater in the innermost EDL solution layer than in the bulk. As the concentration of ions is increased, the positions of the maxima and minima in -Δμ_wat^R(x)/RT are unchanged. Additional fluctuations beyond Δ x ≈ -1 nm are observed at the highest concentrations, which are only partly associated with changes to Δμ_wat^I(x) (see Figure <ref>). At 10 M, the most negative minimum in Figure <ref> D suggests that γ_wat/γ^b_wat=0.2. It is also apparent, at the highest concentration, that an additional minimum in -Δμ_wat^R(x)/RT occurs around Δ x = -0.35 nm. This can be attributed to an increase in Δμ_wat^I when compared with lower concentrations, concomitant with a decrease in χ_wat compared with χ^b_wat. Changes to the local structure of the solution on increasing surface charge density tend to be greatest at the highest bulk concentrations (see Figure <ref>), so it is perhaps not surprising that a richer behaviour in the Δμ_wat^R curves emerges at 10 M as a function of σ.

§ CONCLUSIONS

Understanding the properties of the carbon-electrolyte interface is important for a range of applications of these systems to facilitate, e.g., charge storage and chemical reactions. The CμMD simulations we have performed in this work provide an atomic scale resolution of the interface of graphite with NaCl(aq) where the concentration of ions and surface charge was varied. The simulations allow us to investigate how the asymmetric but cooperative adsorption of ions in the EDL, under the effect of an applied potential, affects the structure, dynamics and thermodynamic properties of the interface at a constant thermodynamic driving force for adsorption (defined by the chemical potential of ions in the bulk solution). Our simulations indicate that increasing the magnitude of the surface charge is analogous to increasing the concentration of ions in solution; both changes give rise to a complex, multi-layered solution structure comprising cation- and anion-rich solution layers due to the finite size of ions and the partial saturation of solution layers with ions, which is not readily captured by mean-field models of the EDL. Perturbations to the solution structure typically extend 1-2 nm from the surface.
Interestingly, the presence of a relatively low concentration of ions decreases the intensity of fluctuations in water densities in the EDL compared to the case where no ions are present. Liquid-like NaCl clusters have been observed in bulk NaCl(aq) solutions at relatively high concentrations <cit.>, and our previous work on graphite identified that these clusters are stabilised in the EDL.<cit.> In the present study, we demonstrated that negative surface charges increase the number and size of these networks, which can include tens of ions. Furthermore, at negative electrodes, the local ion density in these networks increases, as indicated by changes to the average cation-anion coordination number. This result raises important questions regarding the ability of charged surfaces to induce NaCl crystallisation.

Our analyses of water structure in the EDL and the lifetime of H-bonds indicate that positive electrodes can induce the reorientation of water molecules at the surface and increase the lifetime of H-bonded networks in this region. These effects, in turn, have potential implications for the crystallisation of ice in the presence of charged carbon substrates, as well as for the role that these interfaces play in catalysis. It is important to note that the water model we adopt is constrained to its equilibrium, bulk liquid water geometry and the partial charges on O and H atoms are fixed. Future studies should consider how constrained-geometry, non-polarisable water models affect the trends found in this work, and how different models for carbon-water interactions affect the thermodynamic properties evaluated here.

Although our analysis of the electrical properties of the interface indicates a small increase in the capacity of the negative electrode to store charge, the difference in capacitance was 2-4 µF cm^-2, with solution concentration playing only a small role in increasing the ability for the negative electrode to accumulate ions. This is perhaps not surprising because, at molar concentrations, the affinity of graphite for cations and the cooperative adsorption of anions leaves the first solution layers already partially saturated with ions in the absence of surface charge. Analysis of how water and ion activity coefficients deviate from the bulk values, when the electric potential in the EDL is accounted for, indicates how the excess chemical potential for water decreases in the first solution layers adjacent to the surface, concomitant with an increase of the water density in this region, as well as how the changing ion densities induce fluctuations in water activity ratios in the EDL as the concentration of ions increases.

In summary, the complex interplay of solution concentration and surface charge effects provides a picture of the EDL that is difficult to obtain in experiments and from mean-field models. Our study is the first of its kind—as far as the authors are aware—which provides atomistic insight into the carbon-electrolyte interface over a wide range of concentrations and applied surface charge. We hope that the questions raised in this work provide inspiration for further simulation and experimental studies of this system. The authors acknowledge funding from the Crystallization in the Real World EPSRC Programme Grant (Grant EP/R018820/1) and the ht-MATTER UKRI Frontier Research Guarantee Grant (EP/X033139/1).
The authors acknowledge the use of the UCL Myriad High Throughput Computing Facility (Myriad@UCL), and associated support services, in the completion of this work. Additional figures are included in the associated supporting information.

§ DATA AVAILABILITY

GROMACS input and example output files, including the force field parameters necessary to reproduce the simulation results reported in this paper, are available on github (see https://github.com/aaronrfinney/CmuMD-NaCl_at_graphite). The PLUMED input files are also accessible via PLUMED-NEST (www.plumed-nest.org <cit.>), the public repository for the PLUMED consortium, using the project ID, plumID:23.027. Details on how to use and implement the CμMD method within PLUMED are available on github (see https://github.com/mme-ucl/CmuMD).

§ AUTHOR CONTRIBUTIONS

A.R.F. and M.S. designed the research. A.R.F. performed the research and analyses. A.R.F. and M.S. wrote and edited the paper.

§ CONFLICT OF INTEREST

The authors declare no conflict of interest.

§ PROPERTIES OF AQUEOUS ELECTROLYTE SOLUTIONS AT CARBON ELECTRODES: EFFECTS OF CONCENTRATION AND SURFACE CHARGE ON SOLUTION STRUCTURE, ION CLUSTERING AND THERMODYNAMICS IN THE ELECTRIC DOUBLE LAYER

Supporting Information

Aaron R. Finney and Matteo Salvalaglio
Thomas Young Centre and Department of Chemical Engineering, University College London, London WC1E 7JE, United Kingdom
E-mail: [email protected]; [email protected]

§ ADDITIONAL FIGURES
http://arxiv.org/abs/2307.01332v1
20230703201027
Integrating holomorphic sectional curvatures
[ "Gunnar Þór Magnússon" ]
math.DG
[ "math.DG", "math.AG", "math.CV" ]
We calculate the L^2-norm of the holomorphic sectional curvature of a Kähler metric by representation-theoretic means. This yields a new proof that the holomorphic sectional curvature determines the whole curvature tensor. We then investigate what the holomorphic sectional curvature of a Hermitian metric determines and calculate the L^2-norm of the holomorphic bisectional curvature. § INTRODUCTION Let X be a complex manifold of dimension n and let h be a Kähler metric on X. We denote by R the curvature tensor of the Chern connection of h. One of the main ways to simplify this complicated tensor is to consider the holomorphic sectional curvature of h, that is H(ξ) = R(ξ, ξ, ξ, ξ) / |ξ|^4 for nonzero tangent fields ξ. If s is the scalar curvature of h, it is well-known that 1/ S(T_X,x)∫_S(T_X,x) H(ξ) σ̣= s/n+12, where S(T_X,x) is the unit ball in T_X,x <cit.>. It is also well-known that the holomorphic sectional curvature determines the whole curvature tensor, in the sense that if H = 0 then R = 0 <cit.>. The existing proofs of these facts are by brute force calculations. The integral over the sphere is broken up into polynomial components that are evaluated separately and summed together. This works out in the end because of the seeming coincidence that the integrals of |z_j|^4 and |z_j|^2 |z_k|^2 over the unit sphere agree up to a factor of 2. The proof of the determination of the curvature tensor is by clever algebraic manipulation that leaves at least myself no wiser as to why this is true at all. In this article we propose a new route to these facts. Our starting point is a representation-theoretic identity that is known to quantum information theorists, namely that 1/ S(V)∫_S(V) (v ⊗ v^*)^⊗ dσ̣= 1/n+d-1dΠ_d, where V is an n-dimensional complex vector space, Π_d : V^⊗ d→ V^⊗ d is the projection onto the subspace of symmetric d-tensors and v^* := u ↦ h(u, v̅). Berger's scalar curvature identity is an immediate consequence of this identity, and it and some routine linear algebra let us also evaluate the L^2 norm of the holomorphic sectional curvature and see that 1/ S(T_X,x)∫_S(T_X,x) H(ξ)^2 σ̣= |R|^2 + 4|r|^2 + s^2/n+12n+32, where r is the Ricci-tensor of h, which neatly explains why the holomorphic sectional curvature determines the whole tensor. Slightly more bookkeeping lets us also consider the same integrals for the curvature tensors of Hermitian metrics. There it is no longer true that the holomorphic sectional curvature determines the whole tensor, but we can say what it does determine, which turns out to be essentially the "symmetric" part of the tensor when viewed as a Hermitian form on V^⊗ 2 = ⋀^2 V ⊕^2 V. As an application we then calculate the L^2-norm of the holomorphic bisectional curvature. §.§ Acknowledgements Many thanks to Kyle Broder for his excellent comments on an earlier version of this note and for several references to the literature. § ALGEBRAIC CURVATURE TENSORS Let V be a complex vector space of dimension n, which we think of as the tangent space of a complex manifold at a given point. The curvature tensor R of a Hermitian metric on the manifold identifies with a Hermitian form q on V ⊗ V, defined by R(x, y, z, w) = q(x ⊗ z, y ⊗ w).
If the metric is Kähler we get an additional symmetry R(x, y, z, w) = R(z, y, x, w) (and the ones induced by conjugating). A nice alternate reference for what we discuss here is <cit.>. We write (V) for the real vector space of Hermitian forms on V. The curvature tensor of a Hermitian metric is then just a member of (V ⊗ V). We call such an element an algebraic Hermitian curvature tensor, and one that satisfies the additional symmetry of a Kähler curvature tensor an algebraic Kähler curvature tensor. The decomposition V ⊗ V = ⋀^2 V ⊕^2 V is standard. It implies that a Hermitian form on V ⊗ V decomposes into components q = [ q_∧^2 V q_(^2V, ∧^2 V); q_(∧^2 V, ^2V) q_^2 V, ] where q_(∧^2 V, ^2V)^† = q_(^2V, ∧^2 V). Denote the symmetrization map by Π_2 : V ⊗ V →^2 V, x ⊗ y ↦ 12 (x ⊗ y + y ⊗ x). It is a surjective linear morphism that realizes the space of symmetric tensors as a subspace of V ⊗ V. The usual definition of that space is as a quotient of V ⊗ V by the ideal generated by x ⊗ y - y ⊗ x. As the field has characteristic zero these spaces are isomorphic, so the difference between them isn't very important to us. A tensor R ∈(V ⊗ V) is an algebraic Kähler curvature tensor if and only if there exists an element R̂∈(^2 V) such that R = Π_2^* R̂. Suppose R ∈(V ⊗ V) is an algebraic Kähler curvature tensor. We define R̂(x ⊙ z, y ⊙ w) = R(x, y, z, w), which is well-defined because R is Kähler, and we have R = Π_2^* R̂. Conversely, let R̂∈(^2 V), and define R = Π_2^* R̂. Then R(x, y, z, w) = R̂(x ⊙ z, y ⊙ w) = R̂(z ⊙ x, w ⊙ y) = R(z, y, x, w) is an algebraic Kähler curvature tensor. As an aside, this explains why we only ever talk about Griffits positivity of Kähler metrics. A Hermitian metric is Nakano positive if its curvature tensor is positive-definite as a Hermitian form on V ⊗ V. A form that's a pullback by a morphism with nontrivial kernel is never positive-definite. Therefore a Kähler metric can only be Nakano positive when Π_2 is injective, which happens when Π_2 = ⋀^2 V = 0, that is, when V = 1. There should then be a notion of positivity for Kähler curvature tensors that interpolates between Griffiths positivity and Nakano positivity and is perhaps more geometrically motivated than Griffiths positivity, where we would say that such a tensor is positive if its Hermitian form on ^2 V is positive-definite. However such a metric would also be Griffiths-positive, so Siu and Yau's theorem would imply that the underlying space is a projective space. This should offer something different from m-positivity (see <cit.>) that might be interesting in the negatively curved case. § PROJECTION FORMULA Our starting point involves vector-valued integration, so let's recall some basic facts. Let V and W be finite-dimensional complex vector spaces equipped with their Lebesgue measures. If f : V → W is a continuous function and X ⊂ V a measurable subset we define ∫_X f(v) μ̣(v) ∈ W to be the vector we get after picking bases and integrating coordinate by coordinate. If L : W → Z is a linear map then L ∫_X f(v) μ̣(v) = ∫_X Lf(v) dμ(v), because that holds at every step of the definition of the integral of a measurable function. This implies the integral is independent of the choice of basis. It also implies that if f : V → V is continuous then the trace commutes with the integral. Let's suppose V is equipped with a Hermitian inner product h and let's write dμ for the induced volume form on the unit sphere S(V) in V. For v ∈ V we define v ⊗ v^* to be the linear map h^*( v) v = x ↦ h(x, v) v. 
Note that if f ∈ V then f ∘ v ⊗ v^* = h^*( v) f(v) and thus (f ∘ v ⊗ v^*) = h(fv, v). The following is known to quantum information theorists; see <cit.> and <cit.>. Denote by Π_d : V^⊗ d→ V^⊗ d the projection onto ^d V. Then 1/ S(V)∫_S(V) (v ⊗ v^*)^dμ̣(v) = 1/n+d-1dΠ_d. Let's denote the map defined by the integral by L. If g ∈ V is unitary with respect to h we have h(g x, v) = h(x, g^-1 v) and | g| = 1 so the change of basis formula implies that L g^d = g^d L. The map L is then an interleaving operator of the representation V^⊗ d of the unitary group. The integral is thus a sum of multiples of the projections onto the irreducible factors of V^⊗ d. Note however that the integral takes values in ^d V because the integrand is invariant under the symmetric group S_d, so L = λΠ_d for some scalar λ. For a unit vector v we have v ⊗ v^* = |v|^2 = 1. Taking the trace we then find that 1 = λn+d-1d, which implies the result. § KÄHLER METRICS We work locally on the manifold X and will write V = T_X,x. We suppose here that h is a Kähler metric, which just means that we have a Hermitian inner product on V, and that the associated algebraic curvature tensor R arises as the pullback of a tensor in (^2 V). (In particular our results also apply to Kähler-like metrics; see <cit.>.) 1/ S(V)∫_S(V) H(v) μ̣(v) = s/n+12. If f ∈ V^⊗ 2 then taking the trace of the equation in Proposition <ref> for d = 2 gives 1/ S(V)∫_S(V) h^⊗ 2(f(v ⊗ v), v ⊗ v) μ̣(v) = 1/n+12(f ∘Π_2). If q ∈(^2) is the Hermitian form defined by the curvature tensor R, we apply this to f = (h^⊗ 2)^-1Π_2^* q. As Π_2^2 = Π_2 the result follows. 1/ S(V)∫_S(V) H(v)^2 μ̣(v) = |R|^2 + 4|r|^2 + s^2/n+12n+32. We consider f ⊗ f ∈ V^⊗ 4 where f is as before and get that 1/ S(V)∫_S(V) h^⊗ 2(f(v ⊗ v), v ⊗ v)^2 μ̣(v) = 1/n+34 (f ⊗ f ∘Π_4). We know that Π_d = 1/d!∑_σ∈ S_d W_σ, where S_d is the symmetric group on d letters and W_σ(v_1 ⊗⋯⊗ v_d) = v_σ(1)⊗⋯⊗ v_σ(d). In what follows we're going to recall that f is Hermitian; use Einstein notation for sums; use the partial trace functions _jk : V^⊗ 4→ V^⊗ 2 defined by taking the trace along indices j and k; and refer to the Frobenius inner product on endomorphisms. 
When we write out the elements of S_4 as cycles on the letters (jklm) we get the 24 traces (f ⊗ f ∘ W_(jklm)) = f_jk,jk f_lm,lm = _13 f_13 f = ( f)^2, (f ⊗ f ∘ W_(jkml)) = f_jk,jk f_ml,lm = _13 f_14 f = (_14 f) f, (f ⊗ f ∘ W_(jlkm)) = f_jl,jk f_km,lm = (_13 f)_kl (_24 f)_kl = ⟨_24 f, _13 f⟩, (f ⊗ f ∘ W_(jlmk)) = f_jl,jk f_mk,lm = (_13 f)_kl (_14 f)_kl = ⟨_14 f, _13 f⟩, (f ⊗ f ∘ W_(jmkl)) = f_jm,jk f_kl,lm = (_13 f)_km (_23 f)_km = ⟨_23 f, _13 f⟩, (f ⊗ f ∘ W_(jmlk)) = f_jm,jk f_lk,lm = (_13 f)_km (_13 f)_km = |_13 f|^2, (f ⊗ f ∘ W_(kjlm)) = f_kj,jk f_lm,lm = _14 f_13 f = (_14f) f, (f ⊗ f ∘ W_(kjml)) = f_kj,jk f_ml,lm = _14 f_14 f = (_14 f)^2, (f ⊗ f ∘ W_(kljm)) = f_kl,jk f_jm,lm = (_23 f)_jl (_24 f)_jl = ⟨_24 f, _23 f⟩, (f ⊗ f ∘ W_(klmj)) = f_kl,jk f_mj,lm = (_23 f)_jl (_14 f)_jl = ⟨_14 f, _23 f⟩, (f ⊗ f ∘ W_(kmjl)) = f_km,jk f_jl,lm = (_23 f)_jm (_23 f)_jm = |_23 f|^2, (f ⊗ f ∘ W_(kmlj)) = f_km,jk f_lj,lm = (_23 f)_jm (_13 f)_jm = ⟨_13 f, _23 f⟩, (f ⊗ f ∘ W_(ljkm)) = f_lj,jk f_km,lm = (_14 f)_kl (_24 f)_kl = ⟨_24 f, _14 f⟩, (f ⊗ f ∘ W_(ljmk)) = f_lj,jk f_mk,lm = (_14 f)_kl (_14 f)_kl = |_14 f|^2, (f ⊗ f ∘ W_(lkjm)) = f_lk,jk f_jm,lm = (_24 f)_jl (_24 f)_jl = |_24 f|^2, (f ⊗ f ∘ W_(lkmj)) = f_lk,jk f_mj,lm = (_24 f)_jl (_14 f)_jl = ⟨_14 f, _24 f⟩, (f ⊗ f ∘ W_(lmjk)) = f_lm,jk f_jk,lm = f_jk,lm f_jk,lm = |f|^2, (f ⊗ f ∘ W_(lmkj)) = f_lm,jk f_kj,lm = f_jk,lm f_kj,lm = ⟨ f ∘ W_(12), f ⟩ , (f ⊗ f ∘ W_(mjkl)) = f_mj,jk f_kl,lm = (_14 f)_km (_23 f)_km = ⟨_23 f, _14 f⟩, (f ⊗ f ∘ W_(mjlk)) = f_mj,jk f_lk,lm = (_14 f)_km (_13 f)_km = ⟨_13 f, _14 f⟩, (f ⊗ f ∘ W_(mkjl)) = f_mk,jk f_jl,lm = (_24 f)_jm (_23 f)_jm = ⟨_23 f, _24 f⟩, (f ⊗ f ∘ W_(mklj)) = f_mk,jk f_lj,lm = (_24 f)_jm (_13 f)_jm = ⟨_13 f, _24 f⟩, (f ⊗ f ∘ W_(mljk)) = f_ml,jk f_jk,lm = f_jk,ml f_jk,lm = ⟨ f, W_(12)∘ f⟩, (f ⊗ f ∘ W_(mlkj)) = f_ml,jk f_kj,lm = f_jk,ml f_kj,lm = ⟨ f ∘ W_(12), W_(12)∘ f⟩. As R is a Kähler curvature tensor, we have _13 f = _14 f = _23 f = r, where r is the Ricci tensor of the metric, and W_(12)∘ f = f = f ∘ W_(12). Adding everything up we get (f ∘Π_4) = 1/24 (4|R|^2 + 16 |r|^2 + 4 s^2) and the result follows after we notice that 6 n+34 = n+12n+32. These formulas let us calculate the variance of the holomorphic sectional curvature over the unit sphere. Recall that if c_j are the Chern forms of the metric h, ω is its Kähler form, and ωk̂ := ω^k/k! we have |r|^2 ωn̂ = -c_1^2 ∧ωn̂-̂2̂ + s^2 ωn̂, |R|^2 ωn̂ = (2c_2 - c_1^2) ∧ωn̂-̂2̂ + |c_1|^2 ωn̂ = 2(c_2 - c_1^2) ∧ωn̂-̂2̂ + s^2ωn̂. Then we get 0 ≤Var H ωn̂ = 2(c_2-3c_1^2)∧ωn̂-̂2̂ + 6 s^2 ωn̂/n+12n+32 - s^2 ωn̂/n+12^2 and after some calculations end up with 0 ≤ (c_2-3c_1^2)∧ωn̂-̂2̂ + (5n+6)(n-1)/2n(n+1) s^2 ωn̂. When s = 0 we get ∫_X c_2 ∧ωn̂-̂2̂≥∫_X 3c_1^2 ∧ωn̂-̂2̂ with equality if and only if h is flat. This makes for a fairly poor detector of zero scalar curvature metrics. It is known that if a complex surface admits a Kähler metric of zero scalar curvature then the surface is either flat, a K3 surface, or the blowup of a rational surface, and it is not known whether the last type carries such a metric <cit.>. On the projective plane blown up in d distinct points we have c_1^2 = 9 - d and c_2 = 3, so c_2 > 3c_1^2 if and only if 3 > 27 - 3d or d > 8. For d ≤ 8 there is thus no Kähler metric of zero scalar curvature on the surface. But we already knew this since those are just del Pezzo surfaces, whose c_1 is positive, so no Kähler metric can have zero scalar curvature on those. 
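The two sphere averages established in this section are easy to sanity-check numerically. The sketch below is an illustration under stated assumptions (Python with NumPy; the random tensor is built as a signed sum of terms A_{ik} conj(A_{jl}) with A complex symmetric, which is merely one convenient way of producing the Kähler symmetries and is not a construction from the text). It Monte Carlo samples unit vectors and compares the empirical means of H and H^2 with the closed-form expressions above.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(7)
n = 3

# Random algebraic Kaehler curvature tensor: R[i,j,k,l] ~ R(e_i, bar e_j, e_k, bar e_l),
# built as a signed sum of rank-one pieces A_{ik} * conj(A_{jl}) with A symmetric.
R = np.zeros((n, n, n, n), dtype=complex)
for sign in (+1.0, -1.0, +1.0):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    A = A + A.T                                   # symmetric => Kaehler symmetries hold
    R += sign * np.einsum("ik,jl->ijkl", A, A.conj())

ricci = np.einsum("iijl->jl", R)                  # Ricci tensor r
scal = np.einsum("iikk->", R).real                # scalar curvature s
R2 = np.sum(np.abs(R) ** 2)                       # |R|^2
r2 = np.sum(np.abs(ricci) ** 2)                   # |r|^2

# Monte Carlo average of H and H^2 over the unit sphere of C^n.
samples = 200_000
V = rng.normal(size=(samples, n)) + 1j * rng.normal(size=(samples, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)
H = np.einsum("ijkl,si,sj,sk,sl->s", R, V, V.conj(), V, V.conj(), optimize=True).real

print("mean H   (MC) :", H.mean())
print("s / C(n+1,2)  :", scal / comb(n + 1, 2))
print("mean H^2 (MC) :", (H ** 2).mean())
print("closed form   :", (R2 + 4 * r2 + scal ** 2) / (comb(n + 1, 2) * comb(n + 3, 2)))
```

With enough samples the Monte Carlo estimates agree with the closed forms to within statistical error, which is all this sketch is meant to show.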
On a Hirzebruch surface blown up in d distinct points we have c_1^2 = 8 - d and c_2 = 4, so c_2 > 3 c_1^2 if and only if 4 > 24 - 3d or d > 6. For d ≤ 6 there is thus no Kähler metric of zero scalar curvature on the surface. But again the resulting surface still has positive c_1, so this is not news. § HERMITIAN METRICS We still work locally on the manifold X and still write V = T_X. We now suppose only that h is a Hermitian metric, which again gives a Hermitian inner product on V, but now consider an arbitrary algebraic Hermitian curvature tensor R on V ⊗ V. Recall that such a tensor has four Ricci tensors r_1(u, v̅) = ∑_j=1^n R(e_j, e̅_j, u, v̅), r_2(u, v̅) = ∑_j=1^n R(e_j, u, v̅, e̅_j), r_3(u, v̅) = ∑_j=1^n R(u, v̅, e_j, e̅_j), r_4(u, v̅) = ∑_j=1^n R(u, e_j, e̅_j, v̅), where (e_j) is an orthonormal basis. Note that r_1 and r_3 are Hermitian tensors, and r_2(u, v̅) = r_4(v, u̅). We then get two real-valued scalar curvatures s_1 = ∑_j,k=1^n R(e_j, e̅_j, e_k, e̅_k), s_2 = ∑_j,k=1^n R(e_j, e_k, e̅_k, e̅_j). 1/ S(V)∫_S(V) H(v) μ̣(v) = 1/n+12s_1 + s_2/2 . We consider the Hermitian endomorphism f = (h^⊗ 2)^-1 R of V ⊗ V. The projection formula and some calculations give 1/ S(V)∫_S(V) H(v) μ̣(v) = 1/n+12(f ∘Π_2). As before we know that Π_2 = 1/2(𝕀 + W_(12)) and we can check that f = s_1 and (f ∘ W_(12)) = s_2. Recall the splitting V^⊗ 2 = ⋀^2 V ⊕^2 V. Under it any Hermitian form R on V^⊗ 2 can be written as R = [ R_∧^2 V R_(^2V, ∧^2 V); R_(∧^2 V, ^2V) R_^2 V ]. We write r^† for the adjoint of r. 1/ S(V)∫_S(V) H(v)^2 μ̣(v) = ( 4 |R_^2 V|^2 + |r_1 + r_2 + r_3 + r_4|^2 + (s_1 + s_2)^2 ) /4! n+34. The proof begins exactly as before and we calculate our way to the 24 traces. Once there the sum of the 16 different ⟨_jk f, _ml f⟩ factors is |_13 f + _14 f + _23 f + _24 f|^2. Note that as f is Hermitian we have (_13 f)^† = _13 f, (_14 f)^† = _23 f, (_24 f)^† = _24 f and the four Ricci forms of R are r_1 = _13 f, r_2 = _14 f, r_3 = _24 f and r_4 = _23 f. This gives the middle factor above. The sum of the 4 different total trace factors is likewise ( f + (_14 f))^2 = (s_1 + s_2)^2 (as the total traces as real because f is Hermitian). This gives the third factor. That leaves the sum |f|^2 + f ∘ W_(12), f + f, W_(12)∘ f + f ∘ W_(12), W_(12)∘ f. Write σ = W_(12)∈ V^⊗ 2. Clearly σ^2 = 𝕀 so its eigenvalues are 1 and -1, and in fact the splitting V^⊗ 2 = ^2 V ⊕⋀^2 V is according to the eigenspaces of σ, which acts as 𝕀 on ^2V and -𝕀 on ⋀^2 V. Writing σ out according to this splitting we have σ = [ 𝕀 0; 0 -𝕀 ] from which it is clear that σ^† = σ. Because of this and σ^2 = 𝕀 we get f σ, g = f, gσ, σ f , g = f, σ g, σ f, σ g = f, g for any endomorphisms f,g of V^⊗ 2. Note that Π_2 = 1/2(𝕀 + σ). For any f ∈ V^⊗ 2 its symmetric part f_ according to the splitting V^⊗ 2 = ⋀^2 V ⊕^2 V is Π_2 ∘ f ∘Π_2 and Π_2 ∘ f ∘Π_2 = 14 (𝕀 + σ) f (𝕀 + σ) = 14 (f + f σ + σ f + σ f σ). Then 16 |Π_2 ∘ f ∘Π_2|^2 = |f|^2 + f, f σ + f, σ f + f, σ f σ + f σ, f + f σ, f σ + f σ, σ f + f σ, σ f σ + σ f, f + σ f, f σ + σ f, σ f + σ f, σ f σ + σ f σ, f + σ f σ, f σ + σ f σ, σ f + σ f σ, σ f σ = |f|^2 + f σ, f + σ f, f + σ f σ, f + f σ, f + |f|^2 + σ f σ, f + σ f , f + σ f, f + σ f σ, f + |f|^2 + fσ , f + σ f σ, f + σ f, f + f σ, f + |f|^2 = 4(|f|^2 + f σ, f + σ f, f + σ f σ, f ). Then finally 1/ S(V)∫_S(V) H(v)^2 dμ(v) = ( 4 |R_^2 V|^2 + |r_1 + r_2 + r_3 + r_4|^2 + (s_1 + s_2)^2 ) /4! n+34 as announced. So what can we say about the curvature tensor R of a Hermitian metric that has zero holomorphic sectional curvature? 
First off, the symmetric part R_^2 V = 0, so according to the splitting above we can write R = [ R_∧^2 V R_(^2V, ∧^2 V); R_(∧^2 V, ^2V) 0 ]. We also see that s_1 = -s_2, and that r_1 + r_2 + r_3 + r_4 = 0. It's interesting that we could have scalar curvatures of differing sign associated to the same connection (Balas <cit.> already noted that s_1 + s_2 = 0 in this case), but apart from that it's not clear that there is further water to be squeezed from this stone. § HOLOMORPHIC BISECTIONAL CURVATURE The holomorphic bisectional curvature of a Kähler metric h with curvature tensor R is B(ξ,η) = R(ξ, ξ, η, η)/|ξ|^2 |η|^2. Berger noted that it dominates the Ricci curvature by calculating 1/ S(T_X)∫_S(T_X) B(ξ, η) dσ(ξ) = 1/n r(η, η). Kyle Broder asked if we could calculate the L^2-norm of the holomorphic bisectional curvature using these methods, and we can. 1/ S(V)^2∫_S(V)^2 B(u, v)^2 dσ(u) dσ(v) = |R|^2 + (n+2)|r|^2/4n+12^2. This amounts to calculating 1/ S(V)^2∫_S(V)^2 R(u, u̅, v, v̅)^2 dσ(u) dσ(v) for an algebraic Kähler curvature tensor R. Fix v and let T_v(a, b̅, c, d̅) = R(a, b̅, v, v̅) R(c, d̅, v, v̅). Then T_v is an algebraic curvature tensor, that is, a Hermitian form on ⋀^2 V. It does not have the symmetries of a Kähler curvature tensor, but we have 1/ S(V)^2∫_S(V)^2 R(u, u̅, v, v̅)^2 dσ(u) dσ(v) = 1/ S(V)∫_S(V)( 1/ S(V)∫_S(V) T_v(u, u̅, u, u̅) dσ(u) ) dσ(v). As we know, 1/ S(V)∫_S(V) T_v(u, u̅, u, u̅) dσ(u) = 1/n+12s_1(T_v) + s_2(T_v)/2. In an orthonormal basis (e_j) we have s_1(T_v) = ∑_j,k=1^n T_v(e_j, e̅_j, e_k, e̅_k) = ∑_j,k=1^n R(e_j, e̅_j, v, v̅) R(e_k, e̅_k, v, v̅) = r(v, v̅)^2, s_2(T_v) = ∑_j,k=1^n T_v(e_j, e̅_k, e_k, e̅_j) = ∑_j,k=1^n |R(e_j, e̅_k, v, v̅)|^2. Now 1/ S(V)∫_S(V) r(v,v̅)^2 dσ(v) = |r|^2/n by standard linear algebra. Playing the same game again with U(a, b̅, c, d̅) = ∑_j,k=1^n R(e_j, e̅_k, a, b̅) R(e_k, e̅_j, c, d̅) we get 1/ S(V)∫_S(V) s_2(T_v) dσ(v) = 1/n+12s_1(U) + s_2(U)/2 and s_1(U) = ∑_j,k,l,m=1^n R(e_j, e̅_k, e_l, e̅_l) R(e_k, e̅_j, e_m, e̅_m) = ∑_j,k=1^n |r(e_j, e̅_k)|^2 = |r|^2 s_2(U) = ∑_j,k,l,m=1^n R(e_j, e̅_k, e_l, e̅_m) R(e_k, e̅_j, e_m, e̅_l) = |R|^2. All together we get 1/ S(V)^2∫_S(V)^2 B(u,v)^2 dσ(u) dσ(v) = 1/2n+12( |r|^2/n + 1/2n+12( |r|^2 + |R|^2 ) which simplifies to what we claimed. For Hermitian metrics there is a variety of possible holomorphic bisectional curvatures to consider; we get four from the various permutations of the positions of ξ and η (these are not real-valued, which seems odd), and then convex combinations of those (the ones that correspond to averages can be real-valued). Given any of these we can repeat the above to calculate its L^2-norm and should get a similar linear combination of the norms of the various scalar and Ricci curvatures. We leave this to the interested reader. plainurl
http://arxiv.org/abs/2307.02261v1
20230705130033
Security Risk Analysis Methodologies for Automotive Systems
[ "Mohamed Abouelnaga", "Christine Jakobs" ]
cs.CR
[ "cs.CR" ]
Security Risk Analysis Methodologies for Automotive Systems

Mohamed Abouelnaga, Christine Jakobs

August 1, 2023

Nowadays, systematic security risk analysis plays a vital role in the automotive domain. The demand for advanced driver assistance systems and connectivity of vehicles to the internet makes cyber-security a crucial requirement for vehicle manufacturers. This paper summarizes the risk analysis method stated in the recently released automotive security standard ISO/SAE 21434 <cit.>, which lays out the high-level principles for threat analysis and risk assessment (TARA) methods. In the following, we introduce a specific use case to compare different security analysis approaches which OEMs can benefit from to achieve compliance with the standard.

threat analysis and risk assessment (TARA), automotive, cybersecurity, STRIDE, EVITA, HEAVENS

§ INTRODUCTION

The constant increase of software functionalities for vehicles to satisfy consumers' demand and the current trend of developing connected vehicles and autonomous driving led to the development of more advanced systems. This requires a connection with the outside world of the vehicle, either the infrastructure or the other road vehicles. Also, the connection to the internet became an essential part of the infotainment systems. Therefore, the vehicles are not closed systems anymore. The urgency to deal with cybersecurity attacks and vulnerabilities <cit.>, <cit.>, <cit.>, <cit.> has become as important as developing new functionalities <cit.>, as the violation of security goals no longer affects data alone. It may lead to financial problems, affect the reputation of the manufacturers, or even affect the safety of humans <cit.>, <cit.>. An essential step toward building security-aware systems is obtaining security requirements from threat modeling and risk assessment. In risk analysis, we analyze different attack possibilities and calculate their impact on various categories. This process also enables better testing and validation as we can use attack paths as test cases. However, the complexity of functionalities has called for a more systematic way to conduct the risk analysis. Although different organizations have developed different standards to serve the automotive field, such as ISO 26262 <cit.> for safety, there was still a need for a common standard for security. That has led the International Organization for Standardization (ISO) and the Society of Automotive Engineers (SAE) to work on ISO/SAE 21434 <cit.> since 2016 and successfully release it in August 2021. This standard deals with high-level steps to be applied in risk analysis; however, it does not specify use cases or how to automate this process. As a result, many projects were already working on the problem of risk analysis for different use cases, such as EVITA <cit.>, <cit.>, and SecForCARS <cit.>. This paper will summarize the TARA method as stated in the standard. Also, we introduce a use case to compare both EVITA <cit.> and HEAVENS <cit.> approaches. Moreover, we will see how the SecForCARS <cit.> project participated in building a well-organized database for attack paths.

§ THREAT ANALYSIS AND RISK ASSESSMENT (TARA)

We present a detailed overview of high-level activities that we should conduct in the context of risk analysis.

§.§ Prerequisite

The standard introduces the steps we should use to systematically tackle the risk analysis problem.
However, it only presents WHAT, not HOW. For example, it does not provide what different values exactly mean and how we should map them for impact and attack feasibility ratings. We carry out TARA steps after defining the Target of Evaluation (TOE) or item definition. This step includes:
* Item boundary: it distinguishes the item from other internal or external items to the vehicle and defines the interfaces between the item and the other items
* Item functions: this describes the item's behavior during different phases (concept, development, production, maintenance)
* Preliminary architecture: this describes the various components of the item, their connections, and external interfaces of the item
* Assumptions: relevant information regarding the security assumptions, e.g., using encrypted messages

§.§ TARA activities

According to ISO/SAE 21434 <cit.>, TARA comprises six activities, as shown in Fig. 1; each has inputs, different artifacts as its outputs, and recommended methods:

Asset identification: an asset is any resource that has value. In a vehicle, assets can be in-vehicle devices such as ECUs, sensors and actuators, applications running on in-vehicle devices, and communication data. We can identify assets using the preliminary architecture and the assumptions obtained from the item identification activity. We can infer the damage scenarios from asset identification by associating the asset with specific cybersecurity properties. ISO/SAE 21434 <cit.> deals with the following properties:
* Confidentiality: data must not be revealed to unauthorized parties
* Integrity: data is complete and intact, so it should not be modified without authorization or accidentally
* Availability: data or system must be accessible when needed
However, in practice, we can use the following properties, e.g., when dealing with STRIDE <cit.>:
* Non-repudiation: protection against an entity that falsely denies performing a particular action
* Authenticity: proof of identity to verify that information originates from the claimed source
* Authorization: protection against privilege elevation through unauthorized access
For example, an asset would be data communication for brakes, and its cybersecurity property is integrity. The damage scenario is a collision with another vehicle when traveling at high speed resulting from unintended full brakes.

Threat scenario identification: the threat scenario is the potential cause of compromise of assets' cybersecurity properties, which leads to the damage scenarios. Threat scenarios can be identified either by group discussion among experts from different domains or by using systematic threat modeling approaches such as EVITA <cit.>, TVRA <cit.>, PASTA <cit.>, and STRIDE <cit.>. For example, spoofing of CAN messages for the brakes ECU leads to loss of integrity of those messages and thereby the loss of integrity of the braking functionality.

Impact rating: we assess damage scenarios against potential consequences for road users in four different categories: safety (S), financial (F), operational (O), and privacy (P). We can add more categories with a suitable justification. The impact rating for each category has to be one of four values: "severe," "major," "moderate," or "negligible." We have to derive the safety rating from ISO 26262-3:2018 <cit.>.

Attack path analysis: besides the preliminary architecture and predefined environmental assumptions, we analyze the threat scenarios to identify the attack paths.
We can achieve attack path identification by either top-down approaches such as attack trees, which analyze each threat scenario to deduce attack paths that realize it, or bottom-up approaches using vulnerability or weakness analysis.

Attack feasibility rating: we should assess each attack path according to four categories:
* High: if the attack path requires a low effort
* Medium: if the attack path requires a medium effort
* Low: if the attack path requires a high effort
* Very low: if the attack path requires a very high effort
This rating should be determined using one of the following approaches:
* Attack potential-based approach: as defined in ISO/IEC 18045 <cit.>, it measures the effort needed for successfully performing the attack. That relies on the potential of the attacker and used resources. Five core factors determine it:
* Elapsed time: the time required to identify the vulnerability and perform a successful attack
* Knowledge of the item or the component: acquired by the attacker
* Attacker expertise: related to the skill and the experience of the attacker
* The window of opportunity: related to the access conditions, such as the access type, whether it is physical or logical, and the access time for the attacker to perform a successful attack
* The equipment: available to the attacker to discover the vulnerability and perform the attack
* Attack vector-based approach: we can determine the feasibility rating according to the logical and physical distance between the attacker and the item or the component; the more remote the attacker, the higher the rating <cit.>
* Common Vulnerability Scoring System (CVSS) <cit.>: we can define the feasibility rating by the exploitability metric (E), which depends on attack vector (V) (ranging from 0.2 to 0.75), attack complexity (C) (ranging from 0.44 to 0.77), the privileges required (P) (ranging from 0.27 to 0.85), and the user interaction (U) (ranging from 0.62 to 0.85). The proposed formula for calculating the exploitability value is as follows: E=8.22 × C × V × P × U

Risk value determination: The risk of a threat scenario can be determined using two parameters: the likelihood of its attack path (consider the maximum feasibility rating in case of many associated attack paths) and the impact of the associated damage scenario (if more than one damage scenario results from the threat, we consider a different risk value for each damage scenario). The risk value is an integer ranging from "1" (minimum risk) to "5" (maximum risk). This value can be determined using user-defined formulas or risk matrices (the row represents impact rating; the column represents the attack feasibility rating or vice versa).

§ TARA IN PRACTICE

Although several security models are concerned with threat analysis and risk assessment, they do not focus on the automotive field. For example, Threat, Vulnerability and Risk Analysis (TVRA) <cit.> deals with security threats in the telecommunications domain, and Open Web Application Security Project (OWASP) <cit.> focuses on web application security. Specific and detailed use cases would be a good approach for identifying threats and security requirements. They reflect on how to realize the users' goals by building scenarios describing the required system's functional properties. Use cases should focus on what the system should do instead of how it should be done. We present the Road Speed Limit (RSL) use case to two different risk assessment frameworks, which are dedicated to identifying security issues for vehicles, EVITA <cit.> and HEAVENS <cit.>.
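Before turning to the use case, the two scoring ingredients described above, the CVSS exploitability product and a risk matrix, can be sketched in a few lines of Python. The metric values and the matrix entries below are illustrative assumptions only (chosen within the quoted ranges), since the standard leaves the concrete mapping to the user.

```python
def cvss_exploitability(vector, complexity, privileges, interaction):
    """Exploitability metric E = 8.22 * C * V * P * U from the formula above."""
    return 8.22 * complexity * vector * privileges * interaction

# Illustrative 4x4 risk matrix: impact rating (rows) x attack feasibility (columns) -> risk 1..5.
RISK_MATRIX = {
    "negligible": {"very low": 1, "low": 1, "medium": 1, "high": 2},
    "moderate":   {"very low": 1, "low": 2, "medium": 2, "high": 3},
    "major":      {"very low": 2, "low": 3, "medium": 4, "high": 4},
    "severe":     {"very low": 3, "low": 4, "medium": 5, "high": 5},
}

def risk_value(impact: str, feasibility: str) -> int:
    """Look up the 1-5 risk value for a given impact rating and attack feasibility rating."""
    return RISK_MATRIX[impact][feasibility]

if __name__ == "__main__":
    # Hypothetical metric choices within the ranges quoted above.
    print(round(cvss_exploitability(vector=0.75, complexity=0.77, privileges=0.85, interaction=0.85), 2))
    print(risk_value("major", "high"))
```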
The well-defined assessment in both frameworks and the impact on autonomous driving are the main reasons behind the choice of this use case. Also, applying the same use case to both approaches would clarify the differences. Also, we use attack trees in both models for attack path analysis. The tree's root represents the attack's goal, and the leaves represent the ways to achieve this goal. Attack trees can contain AND- or OR- nodes to model the dependencies among the steps of the attack path <cit.>. §.§ RSL use case The road speed limit (RSL) restricts the vehicle's speed. Different parties can impose the speed limit, e.g., an authority or a fleet owner. According to EVITA <cit.>, RSL is directly related to traffic information from/to other entities. An entity can be another car, a traffic light, or a roadside unit. This process includes exchanging traffic messages through Communication Unit (CU) between entities and collecting data through in-vehicle sensors, as shown in Fig. 2. §.§ EVITA model The E-safety vehicle intrusion protected applications (EVITA) <cit.> project designed an efficient security solution for automotive onboard networks. Therefore, it introduces a risk analysis framework to protect security-relevant components against attacks. §.§.§ Asset identification The possible affected assets are the wired infrastructure, roadside units, wireless communication data, and authorization keys. §.§.§ Threat scenarios and attack paths Before proceeding with threat identification, we can consider EVITA <cit.> as an attacker-centric approach, as it pays attention to attacker goals when deriving the possible threats. Categories for possible attack motivations are as follows: * Harming the driver or the passengers: does not have to be physical damage. The attacker may want to implicate the driver in legal troubles, e.g., by manipulating speed limits that do not necessarily lead to an accident but may result in fines that legally and financially harm the driver * Gaining a reputation as a hacker: the attacker's primary goal is not to harm the system or the users but rather to publish the results of a successful attack to gain a reputation * For financial gain: for example, the attacker may tamper with the vehicle for insurance fraud; he attacks the steering or brakes of another vehicle to provoke an accident * To gain personal (non-financial) advantages: for example, going faster in the traffic, e.g., switching all traffic lights to green or directing other vehicles to alternative routes to make the way clear in front of the attacker * Gain information about the manufacturer: one purpose can be that an attacker or another competing manufacturer wants to reveal the technical specifications of a specific item. Another goal can be destroying the reputation of a particular manufacturer, e.g., manipulating the safety to harm random users or violating the users' privacy * Harm to the economy: attacking the infrastructure, which may lead to accidents, generate traffic jams, or disrupt the normal state of roads * Mass terrorism: most of the attacks to achieve this are similar to other categories but with a large scale of impact. As the attacker belongs to an organization, the availability of resources would not be a problem Regarding modeling the threats, we use attack trees. As shown in Fig. 3, an attack tree has the following structure: the tree's root (level 0) represents an abstract attack goal. Its child nodes (level 1) describe the attack objectives satisfying the attack goal. 
We can estimate the severity of the attack's result at this level as these objectives harm the stakeholders (vehicle and other road users, authorities, service operators, and vehicle manufacturers). We can introduce some attack methods (level 2) to achieve each attack objective. Each attack method is composed of (AND/OR) logical combinations of attacks against assets known as "asset attacks, " representing the tree's leaves. One attack goal for RSL is manipulating the speed limits. Two possible objectives can be: slowing down other vehicles behind the attacker by tampering with the infrastructure to be notified falsely of lower speed limits or increasing the speed limit by attacking the speed limit enforcing equipment (e.g., radars). The attack tree based on this goal is shown in Fig. 4. §.§.§ Impact rating We assess the attack objectives against four categories: safety, financial, operational, and privacy. Nevertheless, we use five values ("0"," 1"," 2"," 3"," 4") to assess the severity of each category instead of four, as shown in the standard. We map the rating for the four categories as the following: For the safety category (S_S): It is related to the Abbreviated Injury Scale <cit.> * "0": for no injuries * "1": for light or moderate injuries * "2": for severe injuries (survival is probable), light or moderate injuries for multiple vehicles * "3": for life-threatening or fatal injuries (survival is uncertain), severe injuries for multiple vehicles * "4": for life-threatening or fatal injuries for multiple vehicles For the financial impact (S_F): It is related to the financial losses experienced by the road users * "0": if there is no financial loss * "1": for low-level financial loss (≈ €10) * "2": for moderate-level financial loss (≈ €100) or low losses for multiple vehicles * "3": for heavy financial loss (≈ €1000) or moderate losses for multiple vehicles * "4": for heavy financial loss for multiple vehicles For the operational category (S_O): It is related to the impact on the functionalities of the vehicle that does not affect the functional safety * "0": if there is no impact on the operational performance * "1": if the impact is not resilient to the driver * "2": if the driver is aware of the impact, but the impact is not resilient for multiple vehicles * "3": if there is a significant impact on the performance or the impact is noticeable by multiple vehicles * "4": if there is a significant impact that is noticeable in multiple vehicles For the privacy category (S_P): It is related to the tracking and identification of the individuals or vehicles * "0": if there is no unauthorized access to data * "1": if there is unauthorized access but to anonymous data (no identification of the driver / the vehicle) * "2": if it results in the identification of the driver / the vehicle * "3": if there is a tracking of the driver / the vehicle or identification of multiple drivers / multiple vehicles * "4": if there is a tracking of multiple drivers / multiple vehicles §.§.§ Attack feasibility analysis According to ISO/IEC 18045 <cit.>, the attack potential-based approach determines the feasibility rating of performing a successful attack. It describes the effort needed to mount a successful attack; the lower values for the attack potential, the higher likelihood of a successful attack. 
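A severity assessment of this kind is conveniently carried around as a small data structure. The sketch below is a hypothetical helper, not something defined by EVITA itself, that stores the four ratings and enforces the 0-4 range used above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvitaSeverity:
    """EVITA severity vector: safety, financial, operational and privacy ratings (0-4)."""
    safety: int
    financial: int
    operational: int
    privacy: int

    def __post_init__(self):
        for name in ("safety", "financial", "operational", "privacy"):
            value = getattr(self, name)
            if not 0 <= value <= 4:
                raise ValueError(f"{name} rating must be between 0 and 4, got {value}")

# Example: an objective with moderate financial loss and a large operational impact.
severity = EvitaSeverity(safety=2, financial=2, operational=4, privacy=0)
print(severity)
```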
The attack feasibility rating is as follows:
* "5": requires "basic" attack potential with values (0-9)
* "4": requires "enhanced-basic" attack potential with values (10-13)
* "3": requires "moderate" attack potential with values (14-19)
* "2": requires "high" attack potential with values (20-24)
* "1": requires "beyond-high" attack potential with values (≥25)
These values are obtained by summing up the values of the five different potential categories. These values can be derived as follows:
For elapsed time:
* "0": for (≤1 day)
* "1": for (≤1 week)
* "4": for (≤1 month)
* "10": for (≤6 months)
* "19": for (>6 months)
For expertise:
* "0": for layman level; the attacker is unknowledgeable compared to professionals or experts
* "3": for proficient level; the attacker is familiar with the security behavior of the system
* "6": for expert level; familiar with security algorithms, hardware, different attack techniques, necessary tools, cryptography, …
* "8": if multiple experts in different fields are required
For knowledge of the system:
* "0": if the information is publicly available
* "3": if the information is restricted (e.g., between organizations)
* "7": if the information is sensitive (e.g., internal to the organization)
* "11": if the information is critical (e.g., restricted to a limited number of individuals)
For the window of opportunity:
* "0": if the access is highly available with no time limitation
* "1": if the required access time (≤1 day) and the number of targets needed to be accessed to perform the attack (≤10)
* "4": if the required access time (≤1 month) and the number of targets needed to be accessed to perform the attack (≤100)
* "10": if the required access time (>1 month) and the number of targets needed to be accessed to perform the attack (>100)
For the equipment:
* "0": if it is already available to the attacker (standard)
* "4": if it is not available but can be obtained without noticeable effort (specialized)
* "7": if it is specially produced (bespoke)
* "9": if different bespoke equipment is needed (multiple bespoke)

§.§.§ Risk determination

The risk level (R) is a vector determined by the risk matrix method, as shown in <cit.>. This vector contains one scalar per impact category as EVITA does not specify a formula to combine the vector components into one scalar. The risk matrix has two parameters (the attack feasibility rating (A), which is a scalar value, and the attack severity (S), which is a vector). We add a third parameter if the severity vector has a non-zero safety component. This parameter represents the driver's potential to control the output's severity. According to ISO 26262-3:2018 <cit.> and MISRA guidelines for safety analysis <cit.>, we refer to it as "controllability." It has four values as follows:
* "C1": avoiding an accident is possible by the average human response
* "C2": the avoidance of an accident is possible by the sensible human response
* "C3": avoiding an accident is very difficult, but under appropriate circumstances, an experienced human response can achieve that
* "C4": it is not possible to avoid an accident
The risk values range from (R0 to R7+) with (R0) being the lowest value and (R7+) representing safety hazards, i.e., threats that are unlikely to be acceptable. The higher risk level indicates a lower severity value and higher attack feasibility rating (lower attack potential) coupled with a lower controllability level. The risk level (R) is attached to each attack method.
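The mapping from parameter choices to an attack potential sum, and from there to the 1-5 feasibility rating, can be made explicit with a small helper. The level names used as dictionary keys below are shorthand for the criteria listed above; in particular, the window-of-opportunity labels are assumptions standing in for the access-time and target-count criteria.

```python
EVITA_POTENTIAL = {
    "elapsed_time": {"<=1 day": 0, "<=1 week": 1, "<=1 month": 4, "<=6 months": 10, ">6 months": 19},
    "expertise": {"layman": 0, "proficient": 3, "expert": 6, "multiple experts": 8},
    "knowledge": {"public": 0, "restricted": 3, "sensitive": 7, "critical": 11},
    "window_of_opportunity": {"unlimited": 0, "easy": 1, "moderate": 4, "difficult": 10},
    "equipment": {"standard": 0, "specialized": 4, "bespoke": 7, "multiple bespoke": 9},
}

def attack_potential(**levels):
    """Sum the attack potential values for the chosen level of each parameter."""
    return sum(EVITA_POTENTIAL[param][level] for param, level in levels.items())

def feasibility_rating(potential):
    """Map an attack potential sum to the 1-5 attack feasibility rating used above."""
    if potential <= 9:
        return 5   # basic
    if potential <= 13:
        return 4   # enhanced-basic
    if potential <= 19:
        return 3   # moderate
    if potential <= 24:
        return 2   # high
    return 1       # beyond-high

p = attack_potential(elapsed_time="<=1 month", expertise="expert", knowledge="restricted",
                     window_of_opportunity="moderate", equipment="specialized")
print(p, feasibility_rating(p))   # 4 + 6 + 3 + 4 + 4 = 21 -> rating 2
```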
We link the attack severity (S) to each attack objective. The attack feasibility rating (A) is attached to each attack method; it is a combined value of the feasibility ratings of its associated asset attacks. We calculate the combined attack feasibility rating using the following two rules: * "OR-rule": if we implement the attack method using the"OR" relationship between different asset attacks; the combined attack feasibility rating (A) of the attack method is the highest feasibility rating of the associated asset attacks (P_i) as follows: A=max{P_i} * "AND-rule": if we implement the attack method using the"AND" relationship between different asset attacks; the combined attack feasibility rating (A) of the attack method is the lowest feasibility rating of the associated asset attacks (P_i) as follows: A=min{P_i} Regarding the severity of generating bogus speed limits notifications; this attack does not impose any privacy issues (S_P=0). However, it has the potential for light or moderate incidents for multiple vehicles resulting from the drivers' confusion and distraction (S_S=2). Nevertheless, controllability can be considered reasonable (C2). There can be moderate financial loss due to minor speeding fines (S_F=2). All of that would lead to confidence loss in the system (S_O=4). The asset attacks (modifying roadside units and illegally acquiring authorization keys by a physical attack) are out of the scope of EVITA <cit.>. Each roadside unit's attack (exploiting the configuration errors, exploiting the protocol implementation flaws, and gaining the root access) has an attack feasibility rating (P=1). For the wireless communication asset, replaying the speed limit message has (P=2), and faking traffic conditions has (P=5). Therefore the combined attack potential of the attack method (impersonating the authority) is (A=2), and its risk levels are (R_S=R2, R_F=R1, R_O=R3), the combined attack potential of the attack method (influencing the roadside equipment) is (A=5), and its risk levels are (R_S=R5, R_F=R4, R_O=R6) and the combined attack potential of the attack method (taking control of the roadside units) is (A=1), and its risk levels are (R_S=R1, R_F=R0, R_O=R2). In case of severity for the other attack objective, which is increasing the speed enforced by roadside units; that would lead to severe injuries for multiple road users, including pedestrians (S_S=3) with poor controllability (C3). There could be higher speeding fines due to high speeds (S_F=3). There should not be a performance degradation (S_O=0). The asset attack (faking the speed limit messages of the wired infrastructure) is out of the scope of EVITA <cit.>. Also, the attack method (faking wired speed update messages) is out of scope. Therefore the combined attack potential of the attack method (taking control of the roadside units) is (A=1), and its risk levels are (R_S=R3, R_F=R1). §.§ HEAVENS model HEAling Vulnerabilities to ENhance Software Security and Safety (HEAVENS) <cit.> is a project which aims to derive security requirements for a given item through threat analysis and risk assessment. Although it does not suggest any security mechanisms or countermeasures to fulfill those security requirements, it provides many improvements to existing risk analysis models, specially EVITA <cit.>, to be compatible with the standard. §.§.§ STRIDE and DFD One of the essential techniques used widely in threat modeling is STRIDE <cit.>. We use it in various risk analysis frameworks, e.g., HEAVENS. 
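Returning to the EVITA combination rules described above, folding asset-attack ratings up an attack tree reduces to a max or min per node. The fragment below is a hypothetical illustration of the OR-rule and AND-rule.

```python
def combine_feasibility(node_type, ratings):
    """EVITA OR-rule: take the maximum rating; AND-rule: take the minimum rating."""
    return max(ratings) if node_type == "OR" else min(ratings)

# Hypothetical attack method realised by two asset attacks that must both succeed (AND),
# combined at the level above with a single alternative asset attack (OR).
and_branch = combine_feasibility("AND", [5, 2])        # -> 2
method = combine_feasibility("OR", [and_branch, 1])    # -> 2
print(method)
```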
STRIDE stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of privilege. To be noted, STRIDE is not a threat model or framework; instead, it is more about the categorization of threats to be considered during software development. Table <ref> describes different parts of STRIDE synonyms and the corresponding violated cybersecurity property. Within STRIDE, we first identify system entities, components, data flows, events, and trust boundaries. Moreover, we can visualize the system by Data Flow Diagrams (DFD) <cit.>. * Trust boundaries: we use them to identify logical or physical interactions that pose a possibility for an attack, displayed as dashed lines in DFD. * Running code or Service: drawn as a circle or rounded rectangle. * External entities: e.g., users, are displayed as sharp-corners rectangles. * Datastore: e.g., a database. It is displayed as a label with two parallel lines. * Data flow: communication between processes or external entities and commands is displayed as a line with an arrow representing the direction of data flow. A simple example of DFD showing the different entities and interactions is shown in Fig. 5. STRIDE is easy to learn and execute, especially with the Microsoft Threat Modeling Tool (MTMT) <cit.>. §.§.§ Item definition and asset identification As HEAVENS uses the DFD to visualize the system, Fig. 6 shows the diagram elements representing the different assets. §.§.§ Threat scenario and damage scenario identification Instead of specifying every specific attack as in EVITA, HEAVENS uses STRIDE to focus on generic attacks to conclude several specific ones <cit.>. For example, the first attack objective regarding generating bogus notifications of speed limits to the other vehicles can be realized after the mappings of threats in EVITA as follows: * spoofing an input signal to the communication ECU (impersonating authority) by faking the speed limit message (information disclosure and repudiation) or illegal acquisition of the authorization keys (elevation of privilege). * tampering with the roadside units through physical access or gaining root access (elevation of privilege) As a result of the threat analysis. We can link a damage scenario of this attack objective to lowering the speed, although the legal speed limit is not reached. We will limit the risk assessment to this scenario as that would be enough to spot the differences between HEAVENS and EVITA. §.§.§ Attack path analysis We have identified four attack paths for lowering the speed damage scenario. We can visualize them by the attack tree shown in Fig. 7. §.§.§ Impact rating HEAVENS proposes four values (0,1,10,100) for each impact category: safety, financial, operational and privacy. The usage of the logarithmic scale reflects the exponential increasing impact. Also, financial and safety categories are more important than operational and privacy since they lead to injuries and direct financial losses; therefore, operational and privacy factors are decreased by a magnitude of 1. For the safety category: values are based on ISO 26262 <cit.> * "0": for no injuries * "1": for light or moderate injuries * "10": for severe injuries (survival is probable) * "100": for life-threatening or fatal injuries (survival is uncertain) For the financial impact: In contrast to EVITA, which quantifies financial losses, HEAVENS determines the losses qualitatively based on BSI-Standard 100-4 <cit.>. 
This is more practical as €100000 can be trivial for an organization but threatens the existence of another one. Values are as follows: * "0": no noticeable effect * "1": the financial loss is tolerable * "10": there are substantial financial losses without affecting the existence of the organization * "100": the financial losses threaten the existence of the organization For the operational category: HEAVENS adapts the Failure Mode and Effects Analysis (FMEA) <cit.> to classify the operational damages * "0": if there is no impact on the operational performance * "1": if the impact is not resilient to the driver * "10": if there is degradation or a loss of a secondary function or even degradation of a primary function. However, the vehicle is still operable * "100": a loss of a primary function which leads to the inoperability of the vehicle For the privacy category: values are as follows: * "0": no noticeable effect * "1": privacy violation of a specific stakeholder with-out potential abuses (e.g., impersonating a victim to perform illegal actions) * "10": privacy violation of a specific stakeholder leads to abuses and media coverage * "100": privacy violation of multiple stakeholders leads to abuses and extensive media coverage As there is one impact level for each damage scenario, HEAVENS performs normalization of the values. This enables embedding more categories into the rating (e.g., the impact of violating the legislation/regulations) or the impact on different stakeholders without changing the ranges as in the regular sum of the values. We calculate the impact level as follows: I = ∑_i^n w_i I_i/100 ×∑_i^n w_i where I is the overall impact level, w_i is the weight for each category (w=10 for both safety and financial classes), I_i is the impact level for i^th category, and n is the number of impact categories That leads to the following impact levels: * Negligible: 0.00≤ I<0.01 * Moderate: 0.01≤ I<0.05 * Major: 0.05≤ I < 0.45 * Severe: 0.45≤ I ≤ 1.00 Regarding the previous damage scenario, the same justification for different impact categories is the same, although the impact values differ. The following impact levels apply: I_P=0,I_S=1,I_F=10,I_O=100 According to (3), that would lead to an impact value of ≈ 0.095, corresponding to a "Major" impact rating. §.§.§ Attack feasibility rating and risk determination HEAVENS also uses the attack potential-based approach according to ISO/IEC 18045 <cit.> but with modifications to attach each attack path with an attack feasibility rating. HEAVENS only uses four parameters: expertise, knowledge of the item, the window of opportunity, and the equipment. Elapsed time is excluded as it is not considered a first-order parameter, and its value is proportional to the value of the other four parameters. However, HEAVENS enables the parameter extension by normalizing the values instead of direct sum; we calculate the attack feasibility rating as follows: A = ∑_i^n a_i /3 × n where A is the attack feasibility rating, n is the number of parameters, and "3" is used for normalization as parameters have linear values (0-3) with equal weights. HEAVENS uses a matrix to determine the window of opportunity from two sub-parameters; "access means," which determines whether the attack must be conducted remotely or through physical access, and "asset exposure time," which specifies the amount of the time needed to perform the attack. 
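The normalisation above is easy to reproduce in code. In the sketch below the operational and privacy weights are taken as 1, which the text implies but does not state explicitly; with the worked impact values it returns I ≈ 0.095 and the rating "Major", and the analogous feasibility normalisation A = Σ a_i/(3n) is included for completeness.

```python
HEAVENS_WEIGHTS = {"safety": 10, "financial": 10, "operational": 1, "privacy": 1}

def impact_level(impacts, weights=HEAVENS_WEIGHTS):
    """Normalised HEAVENS impact level I = sum(w_i * I_i) / (100 * sum(w_i))."""
    return sum(weights[k] * impacts[k] for k in weights) / (100.0 * sum(weights.values()))

def impact_rating(i):
    """Classify the normalised impact level into the four HEAVENS ranges."""
    if i < 0.01:
        return "Negligible"
    if i < 0.05:
        return "Moderate"
    if i < 0.45:
        return "Major"
    return "Severe"

def feasibility_level(params):
    """Normalised attack feasibility A = sum(a_i) / (3 * n) for parameter values a_i in 0..3."""
    return sum(params) / (3.0 * len(params))

# Worked example from the text: I_S = 1, I_F = 10, I_O = 100, I_P = 0.
i = impact_level({"safety": 1, "financial": 10, "operational": 100, "privacy": 0})
print(round(i, 3), impact_rating(i))        # 0.095 Major
print(feasibility_level([2, 3, 1, 2]))      # illustrative parameter choices
```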
These sub-parameters can be derived according to <cit.> as follows. Regarding access means:
* physical 1: electronic tools are required to disassemble the vehicle components
* physical 2: physical tools are needed to disassemble the vehicle components
* physical 3: no disassembly is required for the physical access
* remote 1: access to the local network of the vehicle is required
* remote 2: remote access via the internet or telecommunications is required
Regarding asset exposure time:
* rare: the TOE is only rarely available to the attacker
* sporadic: there are only sporadic moments of exposure
* frequent: the TOE is exposed for a considerable time
* unlimited: the time of exposure is unlimited
That yields the window of opportunity values using the matrix specified in <cit.>. HEAVENS also reverses the values of the attack potential parameters so that they are proportional to the likelihood of performing a successful attack. The values of the four parameters can be summarized as follows:
* "0": when multiple experts, critical knowledge of the item, a small window of opportunity, and multiple bespoke types of equipment are required
* "1": when an expert, sensitive knowledge of the item, a medium window of opportunity, and bespoke types of equipment are required
* "2": when a proficient attacker, restricted knowledge of the item, a large window of opportunity, and specialized types of equipment are required
* "3": when a layman, public knowledge of the item, an unlimited window of opportunity, and standard types of equipment are required
The attack feasibility ranges are specified as follows:
* Very low: 0.00 ≤ A < 0.30
* Low: 0.30 ≤ A < 0.60
* Medium: 0.60 ≤ A < 0.80
* High: 0.80 ≤ A ≤ 1.00
Returning to our damage scenario: instead of applying the detailed attack potential approach, HEAVENS also proposes an approach based on the proximity required to perform the attack, similar to the attack-vector approach, which assigns a "low" feasibility to attacks requiring physical access and a "high" feasibility to attacks that can be carried out remotely. Although this oversimplifies the process, it allows threat scenarios to be dismissed without spending much effort and time, and it puts more attention on the impact rating in a first pass. If an attack can be realized either remotely or physically, we choose the remote variant. Therefore, the attack feasibility ratings of the selected attack paths are "high" for faking the speed limit message and "low" for the other three attack paths (the two elevation-of-privilege attacks and the physical access). We also calculate the combined feasibility value for each threat scenario as in EVITA. With the aid of the risk matrix in <cit.>, we can conclude the risk values for the selected threats in Table <ref>. §.§ EVITA vs. HEAVENS Having shown how both models apply the threat analysis and risk assessment process, we summarize their differences and commonalities in Table <ref>. In contrast to HEAVENS, which complies with ISO/SAE 21434 <cit.> regarding the number of impact levels, attack feasibility ratings, and risk values, EVITA has five values for the impact rating and the attack feasibility rating and nine risk values (R0 to R7+). Also, EVITA has one risk value per impact category for each damage scenario, without providing a formula to combine those values into a single risk value. Both frameworks use the attack potential-based approach for the attack feasibility rating. HEAVENS has redefined the window of opportunity parameter by splitting it into two sub-parameters, as we have shown previously. Both models use the attack tree for attack path analysis.
Moreover, HEAVENS uses DFD for asset identification from the TOE. Although EVITA adds a controllability parameter that affects the risk values, HEAVENS does not mention it. HEAVENS introduces the normalization formulas, which enable the scalability of impact and feasibility parameters without redefining the levels. Both models use the same security goals; Authenticity, Integrity, Non-Repudiation, Confidentiality, Availability, and Authorization. Although EVITA tries to find each threat independently without any abstract model, HEAVENS uses STRIDE for threat modeling. §.§ SecForCARS taxonomy The security risk analysis is not easy as it requires security professionals and effort. We can facilitate this process by using the already identified threats and attacks. The attacks and their impact should be recorded according to a taxonomy beneficial for TARA. Although some taxonomies exist, they do not apply to the automotive domain or do not support TARA. For instance, CERT/CC taxonomy <cit.> is used in IT forensics and is not detailed enough to be suitable for supporting TARA <cit.>. SecForCARs <cit.> is another project researching methods and tools to secure vehicle communication. It introduces a detailed taxonomy <cit.> of threat scenarios, affected assets, impact levels and risk values to record attacks for future projects in the automotive domain. As a base step, we can use the national vulnerability database (NVD) <cit.>, which contains more than 100000 vulnerabilities. NVD uses common vulnerabilities and exposures enumeration (CVE) <cit.>. A CVE entry includes an identifier, a description of the vulnerability and a reference to the source. The taxonomy of the SecForCARs project consists of 23 categories. Each category may contain three levels of abstraction; level 1 up to level 3; the latter represents the lowest level of abstraction, as shown in Fig. 8. The 23 categories are as the following: the description of the attack, the reference to the publication source, the year (when the attack was published), the attack class (e.g., we can classify the attack using STRIDE), the attack base (e.g., software, hardware, AI, …), the attack type (analysis, simulation, real attack), the violated security property (e.g., according to STRIDE), the affected asset, the vulnerability (which made the attack possible), the interface (used by attacker e.g., CAN, Ethernet, …), the consequences, the attack path, the requirement (to carry out a successful attack), the restrictions (making the execution of the attack more difficult), the attack level (remote, physical, …), the acquired privileges (by the attacker to perform the attack), the vehicle (description of vehicle, e.g., the manufacturer), the component (description of the attack's target ), the tools (the attacker used to perform the attack), the attack motivation, the entry in vulnerability database (if the attack is registered in a database e.g., NVD), the exploitability (the attack feasibility rating), the rating (contains the impact rating and the risk value). § CONCLUSION We introduced a summary of the different methods we could use to perform the threat analysis and risk assessment (TARA) according to ISO/SAE 21434. Also, we used the Road Speed Limit (RSL) use case to spot the differences between two popular risk analysis frameworks, EVITA and HEAVENS. We have seen that, in contrast to HEAVENS, the assessment values of EVITA do not comply with the standard. 
We also presented the complementary work of the SecForCARs project, which builds a detailed taxonomy for recording attacks to support TARA in the automotive domain.
arXiv:2307.00866v2 (3 July 2023). Mining Clues from Incomplete Utterance: A Query-enhanced Network for Incomplete Utterance Rewriting. Shuzheng Si, Shuang Zeng, Baobao Chang. Categories: cs.CL (primary), cs.AI.
Incomplete utterance rewriting has recently attracted wide attention. However, previous works either do not consider the semantic structural information shared between the incomplete utterance and the rewritten utterance, or model this structure only implicitly and insufficiently. To address this problem, we propose a QUEry-Enhanced Network (QUEEN). Firstly, our proposed query template explicitly brings guided semantic structural knowledge between the incomplete utterance and the rewritten utterance, making the model perceive where to refer back to or recover omitted tokens. Then, we adopt a fast and effective edit operation scoring network to model the relation between two tokens. Benefiting from the proposed query template and the well-designed edit operation scoring network, QUEEN achieves state-of-the-art performance on several public datasets. ^*Equal contribution. ^†Corresponding author. § INTRODUCTION Multi-turn dialogue modeling, a classic research topic in the field of human-machine interaction, serves as an important application area for pragmatics <cit.> and the Turing Test. The major challenge in this task is that interlocutors tend to use incomplete utterances for brevity, such as referring back to (i.e., coreference) or omitting (i.e., ellipsis) entities or concepts that appear in the dialogue history. As shown in Table <ref>, the incomplete utterance u_3 refers to “Smith” (“史密斯”) from u_1 and u_2 using a pronoun “He” (“他”) and omits “the type of cuisine” (“菜肴的类型”) from u_2. This may cause referential ambiguity and semantic incompleteness problems if we only read the single utterance u_3, which is a common situation in downstream applications like retrieval-based dialogue systems <cit.>. Moreover, previous studies <cit.> also find that coreference and ellipsis exist in more than 70% of the utterances, especially in pro-drop languages like Chinese. These phenomena make it imperative to effectively model dialogue in incomplete utterance scenarios. To cope with this problem, previous works <cit.> propose the Incomplete Utterance Rewriting (IUR) task. It aims to rewrite an incomplete utterance into a semantically equivalent but self-contained utterance by mining semantic clues from the dialogue history. The generated utterance can then be understood without reading the dialogue history. For example, in Table <ref>, after recovering the referred and omitted information from u_3 into u_3', we can understand this utterance much better than before. Early works use coreference resolution methods <cit.> to identify the entity that a pronoun refers to. However, they ignore the more common cases of ellipsis. Therefore, text generation-based methods <cit.> were introduced to generate the rewritten sequence from the incomplete sequence by jointly considering coreference and ellipsis problems. Though effective, generation models neglect a key trait of the IUR task, namely that the main semantic structure of a rewritten utterance is usually similar to that of the original incomplete utterance. Thus, the inherent structure-unawareness and uncontrollable nature of generation-based models impede their performance. Among semantic structure-aware methods, <cit.> utilize an edit operation matrix (e.g., substitution and insertion operations) to convert an incomplete utterance into a complete one.
They formulate this task as a semantic segmentation problem with a CNN-based model <cit.> on the matrix to capture the semantic structural relations between words implicitly. <cit.> attempt to add additional semantic information to language models <cit.> by annotating semantic role information, but this is time-consuming and costly. <cit.> propose a semi-autoregressive generator that uses a tagger to model the considerable overlapping regions between the incomplete utterance and the rewritten utterance, yet it only implicitly learns the difference between them. Although these methods maintain some similarities between the incomplete utterance and the rewritten utterance (i.e., the overlap between them), it is difficult for them to explicitly model the semantic structure, especially the difference between the two utterances, ignoring the information in the incomplete utterance, such as which tokens are more likely to be replaced and which positions are more likely to require the insertion of new tokens. Therefore, existing methods for the IUR task still have limitations, especially in jointly considering coreference and ellipsis cases and in better utilizing semantic structural information. This paper proposes a simple yet effective QUEry-Enhanced Network (QUEEN) to solve the IUR task. QUEEN jointly considers the coreference and ellipsis problems that frequently happen in multi-turn utterances. Specifically, we propose a straightforward query template featuring two linguistic properties and concatenate this query with the utterances as input text. This query explicitly brings guided semantic structural information shared between the incomplete and the rewritten utterances, i.e., it makes the model perceive where to refer back to or recover omitted tokens. We regard the rewritten utterance as the output of a series of edit operations on the incomplete utterance by constructing a token-pair edit operation matrix, which models the overlap between the incomplete utterance and the rewritten utterance. Different from <cit.>, we adopt a well-designed edit operation scoring network on the matrix to perform incomplete utterance rewriting, which is faster and more effective. QUEEN brings semantic structural information from linguistics into the model more explicitly and avoids the unnecessary overhead of labeled data from other tasks. Experiments on several IUR benchmarks show that QUEEN outperforms previous state-of-the-art methods. Extensive ablation studies also confirm that the proposed query template makes key contributions to the improvements of QUEEN. § METHODOLOGY Overview Our QUEEN mainly consists of two modules: the query template construction module (Sec. <ref>) and the edit operation scoring network module (Sec. <ref>). From two linguistic perspectives, the former module aims to generate a query template for each incomplete utterance, i.e., the coreference-ellipsis-oriented query template, to cope with coreference and ellipsis problems. This query template explicitly hints to the model where to refer back and where to recover omitted tokens. The latter module tries to capture the semantic structural relations between tokens by constructing an edit operation matrix. As shown in Figure <ref>, our goal is to learn a model that generates correct edit operations on this matrix and computes edit operation scores between token pairs so as to convert the incomplete utterance into the complete one.
§.§ Query Template Construction Module By observing incomplete and rewritten utterance pairs in existing datasets, we find that pronouns and referential noun phrases in the incomplete utterance often need to be substituted by text spans in dialogue history. And ellipsis often occurs in some specific positions of incomplete utterance, conforming to a certain syntactic structure. In this module, we expect to encode these linguistic prior knowledge into the input of QUEEN. The query template is constructed as follows: Coreference-oriented Query Template In order to make QUEEN perceive the positions of coreference that need to be substituted by text spans from dialogue history, we use a special token to replace pronouns and referential noun phrases in the incomplete utterance so as to get our coreference-oriented query template. For example, the coreference-oriented query template of the incomplete utterance “No, he does not care” (“不,他不关心”) is “No, does not care” (“不,不关心”) . To get the target complete utterance, this query explicitly tells the model we should replace the “He” (“他”) with text spans (such as 'Smith'(“史密斯”) ) from dialogue history, rather than replacing other words. Here, we find all pronouns that required to be replaced using a predefined pronoun collection. Ellipsis-oriented Query Template To make QUEEN perceive the positions of ellipsis that need to be inserted by text spans from dialogue history, we define a special token and put it in a linguistically right place of the incomplete utterance. Since a self-contained utterance usually contains a complete S-V-O (Subject-Verb-Object) structure, if an incomplete utterance lack any of these key elements, we could assume there is a case of ellipsis in its corresponding text position. So we perform dependency parsing on the incomplete utterance to get the structure of the incomplete utterance. For example, the parsing result of the incomplete utterance “No, he does not care” (“不,他不关心”) is an S-V structure and lack object element, thus we put at the end of the sentence to get the ellipsis-oriented query template as “No, he does not care ” (“不,他不关心”). Then we fuse these two query templates into the final coreference-ellipsis-oriented query template. For incomplete utterance “No, he does not care” (“不,他不关心”), we get “No, does not care ' (“不, 不关心 ”) as our final query template. Under supervised setting, the models will perceive the positions to refer back and recover omitted tokens for this utterance. For a multi-turn dialogue d=(u_1,...,u_N-1,u_N) containing N utterances where u_1∼ u_N-1 are dialogue history and the last utterance u_N needs to be rewritten, we could get the dialogue history text s = (w^1_1,...,w^n_i,...,w^N_L_N) where w^n_i is the i-th token in the n-th utterance and L_n is the length of n-th utterance. We then concatenate our coreference-ellipsis-oriented query template with the dialogue history text to get our final input text s' = (w^q_1,...,w^q_k,...,w^q_M, w^1_1,...,w^n_i,...,w^N_L_N) where w^q_k is the k-th token of the query template and M is the length of query template. §.§ Edit Operation Scoring Network Module Since pre-trained language models have been proven to be strongly effective on several natural language processing tasks, we employ BERT <cit.> to encode our input text to get the contextualized hidden representation H=(h^q_1,...,h^q_k,...,h^q_M, h^1_1,...,h^n_i,...,h^N_L_N). Our model attempts to predict whether there is an edit operation between each token pair. 
To this end, we define an operation scoring function as follows. Since the order of the utterances is also important for dialogue, we further use RoPE <cit.> to provide relative position information: q^α_i = W^α h_i + b^α, k^α_j = W^α h_j + b^α, and s^α_ij = (R_i q^α_i)^T (R_j k^α_j), where α is the edit operation type, either Substitution or Pre-Insertion. For different operations, we use different trainable parameters W^α and b^α. R is the rotation matrix from RoPE used to inject relative position information, and s^α_ij is the score of the α-th edit operation from the i-th token in the dialogue history to the j-th token in the incomplete utterance. During decoding for the α-th operation, the edit operation label 𝒴^α_ij satisfies 𝒴^α_ij = 1 if s^α_ij ≥ θ and 𝒴^α_ij = 0 if s^α_ij < θ, where θ is a hyperparameter. Once 𝒴^α_ij equals 1, the edit operation α should be performed between token i and token j. Since the label distribution of edit operations is very unbalanced (most elements are zeros), we employ Circle Loss <cit.> to mitigate this problem: log(1+∑_(i,j)∈Ω_pos e^-s^α_i,j) + log(1+∑_(i,j)∈Ω_neg e^s^α_i,j), where Ω_pos is the positive sample set for edit operation α and Ω_neg is the negative sample set. We tune Circle Loss in the same way as <cit.> and <cit.>. We refer readers to their papers for more details. § EXPERIMENTS §.§ Experimental Setup Datasets We evaluate our model on four IUR benchmarks from different domains and languages: REWRITE (Chinese, ), Restoration-200K (Chinese, ), TASK (English, ), CANARD (English, ). Some statistics are shown in Table <ref>. REWRITE and Restoration-200K are constructed from Chinese open-domain dialogue. TASK is from English task-oriented dialogue. CANARD is constructed from English context question answering. We follow the same data splits as the original papers. Evaluation We use BLEU <cit.>, ROUGE <cit.> and the exact match (EM) score as our evaluation metrics. Baseline Models We compare our model with a large number of baselines and SOTA models. (i) Baselines and generation models include L-Gen <cit.>, the hybrid pointer generator network (L-Ptr-Gen) <cit.>, the basic transformer model (T-Gen) <cit.> and the transformer-based pointer generator (T-Ptr-Gen) <cit.>, Syntactic <cit.>, PAC <cit.>, L-Ptr-λ and T-Ptr-λ <cit.>, GECOR <cit.>. The above methods need to generate rewritten utterances from scratch, neglecting the semantic structure shared between a rewritten utterance and the original incomplete one. (ii) Structure-aware models include CSRL <cit.>, RUN <cit.>, SARG <cit.>. Hyper-parameters We implement our model on top of a BERT-base model <cit.>. We initialize QUEEN with bert-base-uncased for English and bert-base-chinese for Chinese. We use Adam <cit.> with learning rate 1e-5. The batch size is set to 16 for REWRITE and TASK, 12 for Restoration-200K, and 4 for CANARD. Meanwhile, θ in Equation <ref> is set to 0.1 for REWRITE and TASK, and 0.05 for Restoration-200K and CANARD. §.§ Experimental Details Constructing Supervision The expected supervision for our model is the edit operation matrix, but existing datasets only contain rewritten utterances. So we adopt the Longest Common Subsequence (LCS) and 'Distant Supervision' <cit.> to get correct supervision, which contains the edit operations Substitute and Pre-Insert. Coreference-oriented Query During the training stage, we use the ground truth of pronouns and referential noun phrases to construct the coreference-oriented query.
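Before turning to the inference-time construction of the query, the scoring function and training objective defined above can be made concrete. The following PyTorch-style sketch is our own minimal illustration (tensor shapes, names, and the toy example are assumptions and do not reproduce the authors' released implementation): it scores every token pair with a shared projection followed by the RoPE rotation, and computes the unbalanced-label loss over gold and non-gold pairs. In practice one such score matrix is produced per edit operation (Substitution and Pre-Insertion), each with its own W^α and b^α, and at inference an operation is predicted wherever the score reaches the threshold θ.

import torch

def rope(x, base=10000.0):
    # Apply the rotary position embedding R_i to row i of x; x has shape (L, d), d even.
    L, d = x.shape
    pos = torch.arange(L, dtype=torch.float32).unsqueeze(1)                  # (L, 1)
    inv_freq = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)     # (d/2,)
    ang = pos * inv_freq
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

def edit_scores(h, W, b):
    # h: (L, hidden) contextual representations; (W, b) belongs to one edit operation.
    q = h @ W.T + b
    k = h @ W.T + b
    return rope(q) @ rope(k).T        # (L, L) matrix with s_ij = (R_i q_i)^T (R_j k_j)

def pair_loss(scores, labels):
    # log(1 + sum_pos exp(-s)) + log(1 + sum_neg exp(s)) over gold / non-gold pairs.
    pos, neg = scores[labels == 1], scores[labels == 0]
    return torch.log1p(torch.exp(-pos).sum()) + torch.log1p(torch.exp(neg).sum())

# Toy run: sequence length 6, hidden size 8, head size 4, one gold pair (0 -> 3).
h = torch.randn(6, 8)
W, b = torch.randn(4, 8), torch.zeros(4)
scores = edit_scores(h, W, b)
labels = torch.zeros(6, 6)
labels[0, 3] = 1.0
print(pair_loss(scores, labels))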
During the inference, we use the constructed pronoun collection to construct the coreference-oriented query, which contains pronouns and referential noun phrases from training data and common pronouns. Ellipsis-oriented Query Construction If the parsing result of the incomplete utterance is an S-V (Subject-Verb) structure and lacks subject element, we insert an at the end of the incomplete utterance as the query. When there is not the S-V structure after parsing, we insert an at the beginning of the incomplete utterance as the query. In other cases, we insert at both the beginning and end of the incomplete utterance as the query. We use spaCy[https://spacy.io/] for English and LTP <cit.> for Chinese to get the result of parsing. Extra Findings During the experiment, we find two interesting points: (i) As and are sparse respectively, we use a unified token to replace and in the query to relieve the sparsity. (ii) In most cases, if there is the referring back in the utterance, there is generally no ellipsis in the utterance. Redundant tokens can't bring correct guided information in this case. Therefore, once we construct Coreference-oriented Query Template successfully, we will not try to construct the Ellipsis-oriented Query Template. Our experimental results are improved by the above two tricks. §.§ Results and Analysis Main Results We report the experiment results in Table <ref>, Table <ref>, Table<ref> and Table <ref>. On all datasets with different languages and evaluation metrics, our approach outperforms all previous state-of-the-art methods. The improvement in EM shows that our model has a stronger ability to find the correct span, due to our model making full use of the prior information of semantic structure from our coreference-ellipsis-oriented query template. On the Chinese datasets Table <ref> and Table <ref>, QUEEN outperforms previous methods. Since Chinese is a pro-drop language where coreference and ellipsis often happen, the improvement confirms that QUEEN is superior in finding the correct ellipsis and referring back positions. The results on data sets of different domains and languages also show that our model is robust and effective. Ablation Study To verify the effectiveness of the query in our proposed model and different modules, we present an ablation study in Table <ref>. It is clear that query is important to improve performance on all evaluation metrics. Meanwhile, only using coreference-oriented or ellipsis-oriented template still improves the performance, as it can also bring semantic structure information. Inference Speed Meanwhile, to compare the inference speed between the current fastest model RUN and our Edit Operation Scoring Network, we conduct experiments using the code released [https://github.com/microsoft/ContextualSP/tree/master/ incomplete_utterance_rewriting]. Both models are implemented in PyTorch on a single NVIDIA V100. The batch size is set to 16. Meanwhile, In order to fairly compare the speed of the two networks, we performed Distant Supervision and Query Construction before comparing. The results are shown in Table <ref>. Case Study We also conduct case study for our proposed model. Our model avoids the uncontrolled situations that the generation-based model is prone to, and our model can more easily capture the correct semantic span. Table <ref> gives 3 examples that indicate the representative situations as <cit.>. The first example illustrates the cases when RUN inserts unexpected characters into the wrong places. 
T-Ptr-Gen just copies the incomplete utterance. Thanks to our generated query, the position where a token needs to be inserted is explicitly prompted by the query. The second example shows a common situation for generation-based models: T-Ptr-Gen messes up by blindly repeating tokens. This situation does not happen to our model, as it is not a generation-based model. The last example refers to a long and complex entity. For these cases, it is easier for our model to get the correct span, because our model learns the span boundaries from the edit operation matrix. Compared to the generation-based models, we do not generate sentences from scratch, and this reduces the difficulty. Meanwhile, our model is not CNN-based like RUN, whose limited receptive field makes it harder to find longer spans. § CONCLUSION We propose a simple yet effective query-enhanced network for the IUR task. Our well-designed query template explicitly brings guided semantic structural knowledge between the incomplete utterance and the rewritten utterance. Benefiting from the extra semantic structural information provided by the proposed query template and the well-designed edit operation scoring network, QUEEN achieves state-of-the-art performance on several public datasets. Meanwhile, the experimental results on datasets of different domains and languages also show that our model is robust and effective. Overall, experiments show that our proposed model with this well-designed query achieves more promising results than previous methods. § ACKNOWLEDGEMENTS This paper is supported by the National Science Foundation of China under Grant No. 61876004 and 61936012, and the National Key R&D Program of China under Grant No. 2020AAA0106700. § ETHICS CONSIDERATION We use public datasets to perform our experiments. The open-source tools used are freely accessible online without copyright conflicts. § LIMITATION One limitation of current edit-based IUR models is that only tokens that have appeared in the dialogue history can be selected. Therefore, these models, including ours, cannot generate novel words, e.g., conjunctions and prepositions, to cater to other metrics such as fluency. However, this can be alleviated by incorporating an additional word dictionary, as <cit.> and <cit.> do to deal with out-of-vocabulary (OOV) words and improve fluency. For fairness, we keep the same words as RUN during the experiments to mitigate this issue. We consider this question a promising direction for future work.
arXiv:2307.02880v1 (6 July 2023). Endomorphisms of Artin groups of type D. Fabrice Castel, Luis Paris. Category: math.GR.
In this paper we determine a classification of the endomorphisms of the Artin group of type D_n for n≥ 6. In particular we determine its automorphism group and its outer automorphism group. We also determine a classification of the homomorphisms from the Artin group of type D_n to the Artin group of type A_n-1 and a classification of the homomorphisms from the Artin group of type A_n-1 to the Artin group of type D_n for n≥ 6. The results are algebraic in nature but the proofs are based on topological arguments (curves on surfaces and mapping class groups). AMS Subject Classification Primary: 20F36. Secondary: 57K20. Keywords Artin groups of type D, endomorphisms, automorphisms, mapping class groups. § INTRODUCTION Let S be a finite set. A Coxeter matrix over S is a square matrix M=(m_s,t)_s,t∈ S indexed by the elements of S, with coefficients in ℕ∪{∞}, such that m_s,s=1 for all s∈ S and m_s,t=m_t,s≥ 2 for all s,t∈ S, s≠ t. Such a matrix is usually represented by a labeled graph Γ, called a Coxeter graph, defined as follows. The set of vertices of Γ is S. Two vertices s,t∈ S are connected by an edge if m_s,t≥ 3, and this edge is labeled with m_s,t if m_s,t≥ 4. If a,b are two letters and m is an integer ≥ 2, then we denote by Π(a,b,m) the word aba⋯ of length m. In other words Π(a,b,m)=(ab)^m/2 if m is even and Π(a,b,m)=(ab)^(m-1)/2a if m is odd. Let Γ be a Coxeter graph and let M=(m_s,t)_s,t∈ S be its Coxeter matrix. With Γ we associate a group A[Γ], called the Artin group of Γ, defined by the following presentation. A[Γ]=⟨ S|Π(s,t,m_s,t)=Π(t,s,m_s,t) for s,t∈ S , s≠ t , m_s,t≠∞⟩ . The Coxeter group of Γ, denoted W[Γ], is the quotient of A[Γ] by the relations s^2=1, s∈ S. Despite the popularity of Artin groups little is known about their automorphisms and even less about their endomorphisms. The most emblematic cases are the braid groups and the right-angled Artin groups. Recall that the braid group on n+1 strands is the Artin group A[A_n] where A_n is the Coxeter graph depicted in Figure <ref>, and an Artin group A[Γ] is called a right-angled Artin group if m_s,t∈{2,∞} for all s,t∈ S, s≠ t. The automorphism group of A[A_n] was determined by Dyer–Grossman <cit.> and the set of its endomorphisms by Castel <cit.> for n≥ 6 and by Chen–Kordek–Margalit <cit.> for n≥ 5 (see also Bell–Margalit <cit.>). On the other hand there are many articles studying automorphism groups of right-angled Artin groups (see Charney–Vogtmann <cit.>, Day <cit.>, Laurence <cit.> and Bregman–Charney–Vogtmann <cit.> for example), but almost nothing (or too much) is known on endomorphisms of these groups. Apart from these two families little is known on automorphisms of Artin groups. The automorphism groups of two-generator Artin groups were determined in Gilbert–Howie–Metaftsis–Raptis <cit.>, the automorphism groups of the Artin groups of type B_n, Ã_n and C̃_n were determined in Charney–Crisp <cit.>, the automorphism groups of some 2-dimensional Artin groups were determined in Crisp <cit.>, and the automorphism group of A[D_4] was determined in Soroko <cit.>. On the other hand, as far as we know the set of endomorphisms of an Artin group is not determined for any Artin group except for those of type A. Recall that an Artin group A[Γ] is of spherical type if W[Γ] is finite.
The study of spherical-type Artin groups began in the early 1970s with works by Brieskorn <cit.>, Brieskorn–Saito <cit.> and Deligne <cit.>, works that marked in a way the beginning of the theory of Artin groups. This family and that of right-angled Artin groups are the two most studied and best understood families of Artin groups and, obviously, any question on Artin groups first arises for Artin groups of spherical type and for right-angled Artin groups. Here we are interested in Artin groups of spherical type and more particularly in those of type D. An Artin group A[Γ] is called irreducible if Γ is connected. If Γ_1,…,Γ_ℓ are the connected components of Γ, then A[Γ]=A[Γ_1]×⋯× A[Γ_ℓ] and W[Γ]=W[Γ_1]×⋯× W[ Γ_ℓ]. In particular A[Γ] is of spherical type if and only if A[Γ_i] is of spherical type for all i∈{1,…,ℓ}. So, to classify Artin groups of spherical type it suffices to classify those which are irreducible. Finite irreducible Coxeter groups and hence irreducible Artin groups of spherical type were classified by Coxeter <cit.>. There are four infinite families, A_n (n≥1), B_n (n≥ 2), D_n (n≥ 4) and I_2(m) (m≥ 5), and six “sporadic” groups, E_6, E_7, E_8, F_4, H_3 and H_4. As mentioned above, the automorphism group of A[Γ] for Γ of type A_n (n≥ 1), B_n (n≥ 2) and I_2 (m) (m≥ 5) is known. The next step is therefore to understand the automorphism group of A[D_n] for n≥ 5 (the case Γ=D_4 is known by Soroko <cit.>). The Coxeter graph D_n is illustrated in Figure <ref>. In this paper we determine a complete and precise classification of the endomorphisms of A[D_n] for n≥ 6 (see Theorem <ref>). In particular we determine the automorphism group and the outer automorphism group of A[D_n] for n≥6 (see Corollary <ref>). We also determine a complete and precise classification of the homomorphisms from A[D_n] to A[A_n-1] (see Theorem <ref>) and a complete and precise classification of the homomorphisms from A[A_n-1] to A[D_n] (see Theorem <ref>). Note that all these results were announced but not proved in Castel <cit.>; actually the proofs turn out to be much more difficult than the first author thought when he announced them. Note also that our techniques cannot be used to treat the cases n=4 and n=5. In particular we do not know how to determine (A[D_5]). A geometric representation of an Artin group is a homomorphism from the group to a mapping class group (see Section <ref> for more details). In order to achieve our goals we make a study of a particular geometric representation of A[D_n]. In particular we prove that this representation is faithful (see Theorem <ref>). This geometric representation is a variation of a geometric representation previously introduced by Perron–Vannier <cit.>, and its study is interesting by itself independently of its applications to Artin groups of type D. Overall, although the results of the paper are algebraic in nature, the proofs are mostly based on topological arguments (on curves on surfaces and mapping class groups). The paper is organized as follows. In Section <ref> we give the main definitions and precise statements of the main results. Section <ref> is dedicated to the study of some geometric representations of Artin groups of type A and type D. In Section <ref> we determine the homomorphisms from A[D_n] to A[A_n-1], in Section <ref> we determine the homomorphisms from A[A_n-1] to A[D_n], and in Section <ref> we determine the endomorphisms of A[D_n]. 
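For the reader's convenience we spell out the defining presentation in the case of D_n. With the vertices of the Coxeter graph D_n labeled t_1,…,t_n as in Figure <ref> (the chain t_1,…,t_n-2 together with the two vertices t_n-1 and t_n attached to t_n-2), the general presentation above reads explicitly (this is only a restatement of the definition):

A[D_n] = ⟨ t_1,…,t_n | t_it_i+1t_i=t_i+1t_it_i+1 for 1≤ i≤ n-3 , t_n-2t_n-1t_n-2=t_n-1t_n-2t_n-1 , t_n-2t_nt_n-2=t_nt_n-2t_n , t_it_j=t_jt_i for all other pairs i<j ⟩ .

In particular t_n-1 and t_n commute.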
The authors would like to thank Bruno A Cisneros de la Cruz and Juan González-Meneses for helpful comments and conversations. The second author is partially supported by the French project “AlMaRe” (ANR-19-CE40-0001-01) of the ANR. § DEFINITIONS AND STATEMENTS For n≥4 we denote by s_1,…,s_n-1 the standard generators of A[A_n-1] numbered as in Figure <ref> and by t_1,…,t_n the standard generators of A[D_n] numbered as in Figure <ref>. Let Γ be a Coxeter graph. For X⊂ S we denote by A_X=A_X[Γ] the subgroup of A=A[Γ] generated by X, by W_X=W_X[Γ] the subgroup of W=W[Γ] generated by X, and by Γ_X the full subgraph of Γ spanned by X. We know from van der Lek <cit.> that A_X is the Artin group of Γ_X and from Bourbaki <cit.> that W_X is the Coxeter group of Γ_X. A subgroup of the form A_X is called a standard parabolic subgroup of A and a subgroup of the form W_X is called a standard parabolic subgroup of W. For w∈ W we denote by (w) the word length of w with respect to S. A reduced expression for w is an expression w=s_1s_2⋯ s_ℓ of minimal length, that is, such that ℓ=(w). Let ω:A→ W be the natural epimorphism which sends s to s for all s∈ S. This epimorphism has a natural set-section τ:W→ A defined as follows. Let w∈ W and let w=s_1s_2⋯ s_ℓ be a reduced expression for w. Then τ(w)=s_1s_2⋯ s_ℓ∈ A. We know from Tits <cit.> that the definition of τ(w) does not depend on the choice of its reduced expression. Assume Γ is of spherical type. Then W has a unique element of maximal length, denoted w_S, which satisfies w_S^2=1 and w_SSw_S=S. The Garside element of A is defined to be Δ=Δ[Γ]=τ(w_S). We know that Δ SΔ^-1=S and, if Γ is connected, then the center Z(A) of A is an infinite cyclic group generated by either Δ or Δ^2 (see Brieskorn–Saito <cit.>). For X⊂ S we denote by w_X the element of maximal length in W_X and by Δ_X=Δ_X[Γ]=τ(w_X) the Garside element of A_X. If Γ=A_n-1, then Δ=(s_n-1⋯ s_1)(s_n-1⋯ s_2)⋯(s_n-1s_n-2)s_n-1 , Δ s_iΔ^-1=s_n-i for all 1≤ i≤ n-1 and Z(A) is generated by Δ^2. If Γ=D_n, then Δ=(t_1⋯ t_n-2t_n-1t_nt_n-2⋯ t_1)(t_2⋯ t_n-2t_n-1t_nt_n-2 ⋯ t_2) ⋯(t_n-2t_n-1t_nt_n-2)(t_n-1t_n) . If n is even, then Δ t_iΔ^-1=t_i for all 1≤ i≤ n and Z(A) is generated by Δ. If n is odd, then Δ t_iΔ^-1=t_i for all 1≤ i≤ n-2, Δ t_n-1Δ^-1=t_n, Δ t_nΔ^-1=t_n-1, and Z(A) is generated by Δ^2. If G is a group and g∈ G, then we denote by _g:G→ G, h↦ ghg^-1, the conjugation map by g. We say that two homomorphisms φ_1,φ_2:G→ H are conjugate if there exists h∈ H such that φ_2=_h∘φ_1. A homomorphism φ:G→ H is called abelian if its image is an abelian subgroup of H. A homomorphism φ:G→ H is called cyclic if its image is a cyclic subgroup of H. If G=A[A_n-1], then φ:A[A_n-1]→ H is abelian if and only if it is cyclic, if and only if there exists h∈ H such that φ(s_i)=h for all 1≤ i≤ n-1. Similarly, if G=A[D_n], then φ:A[D_n]→ H is abelian if and only if it is cyclic, if and only if there exists h∈ H such that φ(t_i)=h for all 1≤ i≤ n. Two automorphisms ζ,χ∈(A[D_n]) play a central role in our study. These are defined by ζ(t_i)=t_i for 1≤ i≤ n-2 , ζ(t_n-1)=t_n , ζ(t_n)=t_n-1 , χ(t_i)=t_i^-1 for 1≤ i≤ n . Both are of order 2 and commute, hence they generate a subgroup of (A[D_n]) isomorphic to /2×/2. If n is odd, then ζ is the conjugation map by Δ=Δ[D_n]. On the other hand, if n is even, then ζ is not an inner automorphism (see Paris <cit.>). The automorphism χ is never inner. Two other homomorphisms play an important role in our study. 
The first, π:A[D_n]→ A[A_n-1], is defined by π(t_i)=s_i for 1≤ i≤ n-2 , π(t_n-1)=π(t_n)=s_n-1 . The second, ι:A[A_n-1]→ A[D_n], is defined by ι(s_i)=t_i for 1≤ i≤ n-1 . Observe that π∘ι=𝕀_A[A_n-1], hence π is surjective, ι is injective, and A[D_n]≃(π)⋊ A[A_n-1]. We refer to Crisp–Paris <cit.> for a detailed study on this decomposition of A[D_n] as a semi-direct product. Let n≥ 4. For p∈ we define a homomorphism α_p:A[D_n]→ A[A_n-1] by α_p(t_i)=s_iΔ^2p for 1≤ i≤ n-2 , α_p(t_n-1)=α_p(t_n)=s_n-1Δ^2p , where Δ=Δ[A_n-1] is the Garside element of A[A_n-1]. Note that α_0=π. Set Y={t_1,…,t_n-1}. For p,q∈ we define a homomorphism β_p,q:A[A_n-1]→ A[D_n] by β_p,q(s_i)=t_iΔ_Y^2pΔ^κ q for 1≤ i≤ n-1 , where Δ=Δ[D_n] is the Garside element of A[D_n], Δ_Y=Δ_Y[D_n], κ=2 if n is odd, and κ=1 if n is even. Note that β_0,0=ι. Note also that, by Paris <cit.>, the centralizer of Y in A[D_n] is the free abelian group of rank 2 generated by Δ_Y^2 and Δ^κ. For p∈ we define the homomorphism γ_p:A[D_n]→ A[D_n] by γ_p(t_i)=t_iΔ^κ p for 1≤ i≤ n , where Δ=Δ[D_n] is the Garside element of A[D_n], κ=2 if n is odd, and κ=1 if n is even. Note that γ_0=𝕀. The main results of this paper are the following. Let n≥6. Let φ:A[D_n]→ A[A_n-1] be a homomorphism. Then up to conjugation we have one of the following two possibilities. (1) φ is cyclic. (2) There exist ψ∈⟨χ⟩ and p∈ such that φ=α_p∘ψ. Let n≥6. Let φ:A[A_n-1]→ A[D_n] be a homomorphism. Then up to conjugation we have one of the following two possibilities. (1) φ is cyclic. (2) There exist ψ∈⟨ζ,χ⟩ and p,q∈ such that φ=ψ∘β_p,q. Let n≥6. Let φ:A[D_n]→ A[D_n] be a homomorphism. Then up to conjugation we have one of the following three possibilities. (1) φ is cyclic. (2) There exist ψ∈⟨ζ,χ⟩ and p,q∈ such that φ=ψ∘β_p,q∘π. (3) There exist ψ∈⟨ζ,χ⟩ and p∈ such that φ=ψ∘γ_p. From Theorem <ref> we deduce a classification of the injective endomorphisms and of the automorphisms of A[D_n] as follows. Let n≥6. Let φ:A[D_n]→ A[D_n] be an endomorphism. Then φ is injective if and only if there exist ψ∈⟨ζ,χ⟩ and p∈ such that φ is conjugate to ψ∘γ_p. Let φ:A[D_n]→ A[D_n] be an endomorphism. By Theorem <ref> we have one of the following three possibilities up to conjugation. (1) φ is cyclic. (2) There exist ψ∈⟨ζ,χ⟩ and p,q∈ such that φ=ψ∘β_p,q∘π. (3) There exist ψ∈⟨ζ,χ⟩ and p∈ such that φ=ψ∘γ_p. If φ is cyclic, then φ(t_n-1)=φ(t_n), hence φ is not injective. If there exist ψ∈⟨ζ,χ⟩ and p,q∈ such that φ=ψ∘β_p,q∘π, then, again, φ(t_n-1)=φ(t_n), hence φ is not injective. So, if φ is injective, then there exist ψ∈⟨ζ,χ⟩ and p∈ such that φ is conjugate to ψ∘γ_p. It remains to show that, if ψ∈⟨ζ,χ⟩ and p∈, then ψ∘γ_p is injective. Since the elements of ⟨ζ,χ⟩ are automorphisms, it suffices to show that γ_p is injective. We denote by z:A[D_n]→ the homomorphism which sends t_i to 1 for all 1≤ i≤ n. It is easily seen that γ_p(u)=uΔ^κ p z(u) for all u∈ A[D_n]. Let u∈(γ_p). Then 1=γ_p(u)=uΔ^κ p z(u), hence u=Δ^q where q=-κ p z(u). We have z(Δ)=n(n-1), hence z(u)=qn(n-1), thus 1=γ_p(u)=Δ^qΔ^κ pqn(n-1)=Δ^q(1+κ pn(n-1)) . Since 1+κ pn(n-1)≠ 0, this equality implies that q=0, hence u=1. So, γ_p is injective. Let n≥6. Let φ:A[D_n]→ A[D_n] be an endomorphism. Then φ is an automorphism if and only if it is conjugate to an element of ⟨ζ,χ⟩. Clearly, if φ is conjugate to an element of ⟨ζ,χ⟩, then φ is an automorphism. Conversely, suppose that φ is an automorphism. We know from Corollary <ref> that there exist ψ∈⟨ζ,χ⟩ and p∈ such that φ is conjugate to ψ∘γ_p. 
Thus, up to conjugation and up to composing on the left by ψ^-1, we can assume that φ=γ_p. It remains to show that p=0. Let again z:A[D_n]→ be the homomorphism which sends t_i to 1 for all 1≤ i≤ n. Recall that γ_p(u)=uΔ^κ p z(u) for all u∈ A[D_n]. For u∈ A[D_n], we have (z∘γ_p)(u)=(1+n(n-1)κ p)z(u)∈(1+n(n-1)κ p) . Since γ_p is an automorphism, z∘γ_p is surjective, hence =(z∘γ_p)⊂ (1+n(n-1)κ p). It follows that (1+n(n-1)κ p)∈{± 1}, hence p=0. By combining Corollary <ref> with Crisp–Paris <cit.> we immediately obtain the following. Let n≥ 6. (1) If n is even, then (A[D_n])=(A[D_n])⋊⟨ζ,χ⟩≃(A[D_n]/Z(A[D_n]))⋊( /2×/2) , and (A[D_n])≃/2×/2, where Z(A[D_n]) denotes the center of A[D_n]. (2) If n is odd, then (A[D_n])=(A[D_n])⋊⟨χ⟩≃(A[D_n]/Z(A[D_n]))⋊(/2) , and (A[D_n])≃/2. § GEOMETRIC REPRESENTATIONS Let Σ be an oriented compact surface possibly with boundary, and let be a finite set of punctures in the interior of Σ. We denote by ^+(Σ,) the group of homeomorphisms of Σ that preserve the orientation, that are the identity on a neighborhood of the boundary of Σ, and that setwise leave invariant . The mapping class group of the pair (Σ,), denoted (Σ,), is the group of isotopy classes of elements of ^+(Σ,). If =∅, then we write (Σ,∅)=(Σ), and if ={x} is a singleton, then we write (Σ,)=(Σ,x). We only give definitions and results on mapping class groups that we need for our proofs and we refer to Farb–Margalit <cit.> for a complete account on the subject. Recall that a geometric representation of an Artin group A is a homomorphism from A to a mapping class group. Their study is the main ingredient of our proofs. Important tools for constructing and understanding them are Dehn twists and essential reduction systems. So, we start by recalling their definitions and their main properties. A circle of (Σ,) is an embedding a:^1↪Σ∖(∂Σ∪). It is called generic if it does not bound any disk containing 0 or 1 puncture and if it is not parallel to any boundary component. The isotopy class of a circle a is denoted by [a]. We denote by (Σ,) the set of isotopy classes of generic circles of (Σ,). The intersection number of two classes [a],[b]∈(Σ,) is i([a],[b])=min{|a'∩ b'|| a'∈[a] and b'∈[b]}. The set (Σ,) is endowed with a simplicial complex structure, where a finite set Å is a simplex if i([a],[b])=0 for all [a],[b]∈Å. This complex is called the curve complex of (Σ,). In this paper the Dehn twist along a circle a of (Σ,) will be denoted by T_a. The following is an important tool for constructing and understanding geometric representations of Artin groups. Its proof can be found in Farb–Margalit <cit.>. Let Σ be a compact oriented surface and let be a finite collection of punctures in the interior of Σ. Let a,b be two generic circles of (Σ,). (1) We have T_aT_b=T_bT_a if and only if i([a],[b])=0. (2) We have T_aT_bT_a=T_bT_aT_b if and only if i([a],[b])=1. Let f∈(Σ,). A simplex Å of (Σ,) is called a reduction system for f if f(Å)=Å. In that case any element of Å is called a reduction class for f. A reduction class [a] is an essential reduction class if, for all [b]∈(Σ,) such that i([a],[b])≠ 0 and for all m∈∖{0}, we have f^m([b])≠[b]. In particular, if [a] is an essential reduction class and [b] is any reduction class, then i([a],[b])=0. We denote by (f) the set of reduction classes for f. The following gathers some key results on (f) that will be useful later. Let Σ be a compact oriented surface and let be a finite set of punctures in the interior of Σ. Let f∈(Σ,). 
(1) If (f)≠∅, then (f) is a reduction system for f. In particular, if (f)≠∅, then (f) is a simplex of (Σ,). (2) We have (f^n)=(f) for all n∈∖{0}. (3) We have (gfg^-1) = g((f)) for all g∈(Σ,). The following is well-known and it is a direct consequence of Birman–Lubotzky–McCarthy <cit.>. It will be often used in our proofs. Let Σ be an oriented compact surface of genus ≥2 and let a finite set of punctures in the interior of Σ. Let f_0∈ Z((Σ,)) be a central element of (Σ,), let Å={[a_1],…,[a_p]} be a simplex of (Σ,), and let k_1,…,k_p be nonzero integers. Let g=T_a_1^k_1T_a_2^k_2⋯ T_a_p^k_pf_0. Then (g)=Å. Let n≥ 4. If n is even, then Σ_n denotes the surface of genus n-2/2 with two boundary components, and if n is odd, then Σ_n denotes the surface of genus n-1/2 with one boundary component. Consider the circles a_1,…,a_n-1 drawn in Figure <ref>. Then by Proposition <ref> we have a geometric representation ρ_A:A[A_n-1]→(Σ_n) which sends s_i to T_a_i for all 1≤ i≤ n-1. The following is well-known, it is a direct consequence of Birman–Hilden <cit.>, and its proof is explicitly given in Perron–Vannier <cit.>. Let n≥4. Then ρ_A:A[A_n-1]→(Σ_n) is injective. We denote by χ̅:A[A_n-1]→ A[A_n-1] the automorphism defined by χ̅(s_i)=s_i^-1 for 1≤ i≤ n-1 . On the other hand, for p∈ we denote by γ̅_p:A[A_n-1]→ A[A_n-1] the homomorphism defined by γ̅_p(s_i)=s_iΔ^2p for 1≤ i≤ n-1 , where Δ is the Garside element of A[A_n-1]. The following is proved in Castel <cit.> for n≥6 using the geometric representation ρ_A defined above. It is proved in Chen–Kordek–Margalit <cit.> for n≥5 with a different method. Let n≥5. Let φ:A[A_n-1]→ A[A_n-1] be a homomorphism. Then up to conjugation we have one of the following two possibilities. (1) φ is cyclic. (2) There exist ψ∈⟨χ̅⟩ and p∈ such that φ=ψ∘γ̅_p. Let n≥6. Pick a puncture x in the interior of Σ_n and consider the circles d_1,…,d_n drawn in Figure <ref>. Then by Proposition <ref> we have a geometric representation ρ_D:A[D_n]→(Σ_n,x) which sends t_i to T_d_i for all 1≤ i≤ n. On the other hand, the embedding of ^+(Σ_n,x) into ^+(Σ_n) induces a surjective homomorphism θ:(Σ_n,x)→(Σ_n) whose kernel is naturally isomorphic to π_1(Σ_n,x) (see Birman <cit.>). It is easily seen that θ(T_d_i)=T_a_i for 1≤ i≤ n-2 , θ(T_d_n-1)=θ(T_d_n)=T_a_n-1 , hence we have the following commutative diagram: 1[r] (π)[r][d] A[D_n][r]^π[d]^ρ_D A[A_n-1][r][d]^ρ_A 1 1[r] (θ)[r] (Σ_n,x)[r]^θ (Σ_n)[r] 1 We denote by ρ̅:(π)→(θ) the restriction of ρ_D to (π). The following is implicitly proved in Perron–Vannier <cit.>. Let n≥4. (1) The homomorphism ρ̅:(π)→(θ) is an isomorphism. (2) The geometric representation ρ_D:A[D_n]→(Σ_n,x) is injective. Part (2) is a consequence of Part (1) because of the following. Suppose ρ̅ is an isomorphism. Then, since ρ_A is injective, ρ_D is injective by the five lemma applied to the diagram of Equation (<ref>). Now, we prove Part (1). We know from Crisp–Paris <cit.> that (π) is a free group of rank n-1. We also know from Birman <cit.> that (θ)=π_1(Σ_n,x), which is also a free group of rank n-1. Recall that a group G is Hopfian if every surjective endomorphism G→ G is an isomorphism. It is well-known that free groups of finite rank are Hopfian (see de la Harpe <cit.>), hence in order to show that ρ̅ is an isomorphism it suffices to show that ρ̅ is surjective. Set f_n-1=T_d_n-1^-1T_d_n. Note that t_n-1^-1t_n∈(π) and f_n-1=ρ̅(t_n-1^-1t_n). In particular f_n-1∈(ρ̅)⊂(θ)=π_1(Σ_n,x). 
This element, seen as an element of π_1(Σ_n,x), is represented by the loop drawn in Figure <ref>. For 2≤ i≤ n-1 we define f_n-i∈π_1(Σ_n,x)⊂(Σ_n,x) by induction on i by setting f_n-i=T_d_n-i f_n-i+1T_d_n-i^-1 f_n-i+1^-1. The element f_n-i, viewed as an element of π_1(Σ_n,x), is represented by the loop drawn in the left-hand side of Figure <ref> if i=2j is even, and by the loop drawn in the right-hand side of Figure <ref> if i=2j+1 is odd, where we compose paths from right to left. Observe that f_1,…,f_n-1 generate π_1(Σ_n,x). So, in order to show that ρ̅ is surjective, it suffices to show that f_n-i∈(ρ̅) for all i∈{1,…,n-1}. We argue by induction on i. We already know that f_n-1=ρ̅(t_n-1^-1t_n)∈(ρ̅). Suppose i≥ 2 and f_n-i+1∈(ρ̅). Let u∈(π) such that f_n-i+1=ρ̅(u). Since (π) is a normal subgroup of A[D_n], we have t_n-iut_n-i^-1∈(π), hence t_n-iut_n-i^-1u^-1∈(π), and therefore f_n-i=T_d_n-i f_n-i+1T_d_n-i^-1 f_n-i+1^-1=ρ̅(t_n-iut_n-i^-1u^-1) ∈(ρ̅) . Our last preliminary on geometric representations is a particular case of Theorems 0.2.1 and 0.2.2 of Castel <cit.>, and it is where we need the hypothesis n≥6. Let n≥6. Let φ:A[A_n-1]→(Σ_n,x) be a non-cyclic homomorphism. Then there exist generic circles c_1,…,c_n-1 in Σ_n∖{x}, ε∈{± 1} and g∈(Σ_n,x) such that (a) |c_i∩ c_j|=1 if |i-j|=1 and |c_i∩ c_j|=0 if |i-j|≥2, for all 1≤ i,j≤ n-1, (b) g commutes with T_c_i for all 1≤ i≤ n-1, (c) φ(s_i)=T_c_i^εg for all 1≤ i≤ n-1. § HOMOMORPHISMS FROM A[D_N] TO A[A_N-1] Let n≥6. Let φ:A[D_n]→ A[A_n-1] be a homomorphism. By Theorem <ref> we know that one of the following two possibilities holds. * φ∘ι is cyclic. * There exist ψ∈⟨χ̅⟩ and p∈ such that φ∘ι is conjugate to ψ∘γ̅_p. Suppose φ∘ι is cyclic. Then there exists u∈ A[A_n-1] such that (φ∘ι)(s_i)=φ(t_i)=u for all 1≤ i≤ n-1. Moreover, φ(t_n)=φ(t_n-2t_n) φ(t_n-2) φ(t_n^-1t_n-2^-1)=φ(t_n-2t_n) φ(t_1) φ(t_n^-1t_n-2^-1)=φ(t_1)=u , hence φ is cyclic. So, up to conjugating and replacing φ by φ∘χ if necessary, we can assume that there exists p∈ such that φ∘ι=γ̅_p. This means that φ(t_i)=(φ∘ι)(s_i)=s_iΔ^2p for all 1≤ i≤ n-1, where Δ is the Garside element of A[A_n-1]. Now we turn to show that φ=α_p. Set Y={s_1,…,s_n-3}. By Paris <cit.> the centralizer of ⟨ s_1,…,s_n-3,s_n-1⟩ in A[A_n-1] is generated by Δ^2, Δ_Y^2 and s_n-1, where Δ_Y=Δ_Y[A_n-1]. These three elements pairwise commute and generate a copy of ^3. Set u=φ(t_n). Since u commutes with φ(t_i)=s_iΔ^2p for all i∈{1,…,n-3,n-1} and Δ^2 in central in A[A_n-1], u belongs to the centralizer of ⟨ s_1,…,s_n-3,s_n-1⟩, hence there exist k_1,k_2,k_3∈ such that u=s_n-1^k_1Δ_Y^2k_2Δ^2k_3. It is well-known that A[A_n-1] is naturally isomorphic to the mapping class group (,), where denotes the disk and ={x_1,…,x_n} is a set of n punctures in the interior of . In this identification s_n-1^2 corresponds to the Dehn twist along the circle c_1 depicted in Figure <ref>, Δ_Y^2 corresponds to the Dehn twist along the circle c_2 depicted in the same figure, and Δ^2 corresponds to the Dehn twist along a circle parallel to ∂. By Proposition <ref> we have (u^2)⊆{c_1,c_2}, where c_1∈(u^2) if and only if k_1≠0 and c_2∈(u^2) if and only if k_2≠0. We know that φ(t_1^2)=s_1^2Δ^4p, hence (φ(t_1^2)) is formed by a single circle containing two marked points in its interior. Since t_1^2 and t_n^2 are conjugate, φ(t_1^2) and φ(t_n^2)=u^2 are conjugate, hence, by Theorem <ref>, (u^2) is also formed by a single circle containing two marked points in its interior. It follows that (u^2)={c_1}, hence k_1≠0 and k_2=0. 
It remains to show that k_1=1 and k_3=p. From the equality t_n-2t_nt_n-2=t_nt_n-2t_n it follows that s_n-2s_n-1^k_1s_n-2Δ^4p+2k_3=s_n-1^k_1s_n-2s_n-1^k_1Δ^2p+4k_3, hence (s_n-2s_n-1^k_1s_n-2)(s_n-1^k_1s_n-2s_n-1^k_1)^-1=Δ^2k_3-2p. We know from Paris <cit.> that A_{s_n-2,s_n-1}[A_n-1]∩⟨Δ⟩={1}, hence (s_n-2s_n-1^k_1s_n-2)(s_n-1^k_1s_n-2s_n-1^k_1)^-1=Δ^2k_3-2p=1. Let z:A[A_n-1]→ be the homomorphism which sends s_i to 1 for all 1≤ i≤ n-1. We have 0=z(1)=z((s_n-2s_n-1^k_1s_n-2)(s_n-1^k_1s_n-2s_n-1^k_1)^-1)=1-k_1 , hence k_1=1. Moreover, Δ^2k_3-2p=1 and Δ is of infinite order, thus k_3=p. § HOMOMORPHISMS FROM A[A_N-1] TO A[D_N] Lemmas <ref> to <ref> are preliminaries to the proof of Theorem <ref>. Let n≥6. Let φ:A[A_n-1]→ A[D_n] be a homomorphism. If π∘φ:A[A_n-1]→ A[A_n-1] is cyclic, then φ is cyclic. Assume π∘φ is cyclic. Then there exists u∈ A[A_n-1] such that (π∘φ)(s_i)=u for all 1≤ i≤ n-1. For 3≤ i≤ n-1 we set v_i=φ(s_is_1^-1). We have π(v_i)=uu^-1=1, hence v_i∈(π). We have (s_3s_1^-1)(s_4s_1^-1)(s_3s_1^-1)=s_3s_4s_3s_1^-3=s_4s_3s_4s_1^-3=(s_4s_1^-1)(s_3s_1^-1) (s_4s_1^-1) , hence v_3v_4v_3=v_4v_3v_4. Since (π) is a free group (see Crisp–Paris <cit.>) and two elements in a free group either freely generate a free group or commute, the existence of such equality implies that v_3v_4=v_4v_3. It follows that v_3v_4v_3=v_3v_4^2, hence v_3=v_4, and therefore φ(s_3) φ(s_1)^-1=v_3=v_4=φ(s_4) φ(s_1)^-1 . So, φ(s_3)=φ(s_4). We conclude by Castel <cit.> that φ is cyclic. Let n≥6. If n is odd, then Σ_n has one boundary component that we denote by ∂, and we denote by T_∂ the Dehn twist along ∂. If n is even, then Σ_n has two boundary components that we denote by ∂_1 and ∂_2, and we denote by T_∂_1 and T_∂_2 the Dehn twists along ∂_1 and ∂_2, respectively. It is known that the center of (Σ_n), denoted Z((Σ_n)), is the cyclic group generated by T_∂ if n is odd, and it is a free abelian group of rank two generated by T_∂_1 and T_∂_2 if n is even (see Paris–Rolfsen <cit.> for example). Let n≥6. Let f∈(Σ_n) such that fT_a_i^2=T_a_i^2f for all 1≤ i≤ n-1. Then f^2∈ Z((Σ_n)). Assume n is odd. The case where n is even can be proved in the same way. Let f∈(Σ_n) such that fT_a_i^2=T_a_i^2f for all 1≤ i≤ n-1. Since fT_a_i^2f^-1=T_a_i^2 we have f([a_i])=[a_i] (see Farb–Margalit <cit.>). The mapping class f may reverse the orientation of each a_i up to isotopy, but f^2 preserves the orientation of all a_i up to isotopy, hence f^2 can be represented by an element of ^+(Σ_n) which is the identity on a (closed) regular neighborhood Σ' of ⋃_i=1^n-1 a_i. We observe that Σ' is a surface of genus n-1/2 with one boundary component, ∂', and that ∂∪∂' bounds a cylinder C. This implies that f^2∈(C)⊂(Σ_n). Since (C)=⟨ T_∂⟩=Z((Σ_n)), we conclude that f^2∈ Z((Σ_n)). Let n≥6. We set m=n-1 if n is odd and m=n-2 if n is even. Let 1≤ k≤ m. Let c be a generic circle of Σ_n∖{x} such that c∩ d_i=∅ for 1≤ i ≤ k-2, |c∩ d_k-1|=1 if k≥ 2, c∩ d_k=∅, and c is isotopic to d_k in Σ_n. Then there exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ k-1 and g([c])=[d_k]. We identify D_3 with A_3 in this proof to treat the cases k=2 and k=1. We first assume that k is even. If c is isotopic in Σ_n∖{x} to d_k, then it suffices to take g=𝕀. So, we can assume that c and d_k are not isotopic in Σ_n∖{x}. Since c and d_k are isotopic in Σ_n, by Epstein <cit.> there exists a cylinder C in Σ_n whose boundary components are d_k and c. Since c and d_k are not isotopic in Σ_n∖{x}, this cylinder must contain the puncture x. 
Let Σ' be a regular neighborhood of (⋃_i=1^k-1d_i)∪ C. Then Σ' is a surface of genus k/2 with one boundary component where the circles d_1,…, d_k-1,d_k,c are arranged as shown in Figure <ref>. By Proposition <ref> there are homomorphisms ψ_1:A[D_k+1]→(Σ_n,x) and ψ_2:A[A_k]→(Σ_n,x) defined by ψ_1(t_i)=T_d_i for 1≤ i≤ k , ψ_1(t_k+1)=T_c , ψ_2(s_i)=T_d_i for 1≤ i≤ k-1 , ψ_2(s_k)=T_c . We denote by Δ_D,k the Garside element of A[D_k+1] and by Δ_A,k the Garside element of A[A_k], and we set g=ψ_1(Δ_D,k) ψ_2(Δ_A,k^-2). We have Δ_D,kt_iΔ_D,k^-1=t_i for all 1≤ i≤ k-1, Δ_D,kt_k+1Δ_D,k^-1=t_k and Δ_A,k^2s_iΔ_A,k^-2=s_i for all 1≤ i≤ k, hence gT_d_ig^-1=T_g(d_i)=T_d_i for all 1≤ i≤ k-1 and gT_cg^-1=T_g(c)=T_d_k. It follows that g([d_i])=[d_i] for all 1≤ i≤ k-1 and g([c])= [d_k] (see Farb–Margalit <cit.>). Moreover, Δ_D,k=(t_1⋯ t_k-1t_kt_k+1t_k-1⋯ t_1)⋯(t_k-1t_kt_k+1t_k-1)(t_kt_k+1) , Δ_A,k^2=(s_1⋯ s_k-1s_k^2s_k-1⋯ s_1)⋯(s_k-1s_k^2s_k-1) s_k^2 , hence θ(g)=1, that is, g∈(θ). Now, assume k is odd. If c is isotopic in Σ_n∖{x} to d_k, then we can take g=𝕀. So, we can assume that c and d_k are not isotopic in Σ_n∖{x}. Since c and d_k are isotopic in Σ_n, there exists a cylinder C in Σ_n whose boundary components are d_k and c. Since c and d_k are not isotopic in Σ_n∖{x}, this cylinder must contain the puncture x. Let Σ' be a closed regular neighborhood of (⋃_i=1^k-1d_i)∪ C. Then Σ' is a surface of genus k-1/2 with two boundary components and the circles d_1,…, d_k-1,d_k,c are arranged as shown in Figure <ref>. Since k≤ m and k is odd, k-1/2 is strictly less than the genus of Σ_n, hence we can choose a sub-surface Σ” of Σ_n of genus k+1/2, with one boundary component, and containing Σ'. We can also choose a generic circle e in Σ”∖{x} such that |e∩ d_1|=1, |e∩ c|=1 if k=1, e∩ d_i=∅ for all 2≤ i≤ k and e∩ c=∅ if k≥ 2 (see Figure <ref>). By Proposition <ref> there are homomorphisms ψ_1:A[D_k+2]→(Σ_n,x) and ψ_2:A[A_k+1]→(Σ_n,x) defined by ψ_1(t_1)=T_e , ψ_1(t_i)=T_d_i-1 for 2≤ i≤ k+1 , ψ_1(t_k+2)=T_c , ψ_2(s_1)=T_e , ψ_2(s_i)=T_d_i-1 for 2≤ i≤ k , ψ_2(s_k+1)=T_c . We denote by Δ_D,k+1 the Garside element of A[D_k+2] and by Δ_A,k+1 the Garside element of A[A_k+1], and we set g=ψ_1(Δ_D,k+1) ψ_2(Δ_A,k+1^-2). Then, as in the case where k is even, we have g([d_i])=[d_i] for all 1≤ i ≤ k-1, g([c])=[d_k], and g∈(θ). Let n≥6. Set m=n-1 if n is odd and m=n-2 if n is even. Let 1≤ k≤ m. Let c be a generic circle of Σ_n∖{x} such that c∩ d_i=∅ for 1≤ i≤ k-2, |c∩ d_k-1|=1 if k≥ 2, and c is isotopic to d_k in Σ_n. Then there exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ k-1 and g([c])=[d_k]. We argue by induction on i([c],[d_k]). The case i([c],[d_k])=0 is proved in Lemma <ref>, hence we can assume that i([c],[d_k])≥ 1 and that the induction hypothesis holds. Note that now c and d_k cannot be isotopic in Σ_n∖{x} since i([c],[d_k])≠ 0. We can assume without loss of generality that i([c],[d_k])=|c∩ d_k|. Since c and d_k are isotopic in Σ_n, there exists a bigon D in Σ_n cobounded by an arc of d_k and an arc of c as shown in Figure <ref>. We can choose this bigon to be minimal in the sense that its interior intersects neither c nor d_k. The bigon D cannot intersect d_i for 1≤ i≤ k-2 and one can easily modify c so that D does not intersect d_k-1 either. Since c and d_k are not isotopic in Σ_n∖{x}, D necessarily contains the puncture x in its interior. We choose a circle c' parallel to c except in the bigon D where it follows the arc of d_k which borders D as illustrated in Figure <ref>. 
By construction c'∩ d_i=∅ for 1≤ i≤ k-2, |c'∩ d_k-1|=1 if k≥ 2, and c' is isotopic to d_k in Σ_n. Moreover i([c'],[d_k])≤ |c'∩ d_k|<|c∩ d_k|=i([c],[d_k]). By the induction hypothesis there exists g_1∈(θ) such that g_1([d_i])=[d_i] for all 1≤ i≤ k-1 and g_1([c'])=[d_k]. We choose G_1∈^+(Σ_n,x) which represents g_1 such that G_1(d_i)=d_i for all 1≤ i≤ k-1 and G_1(c')=d_k. We set c”=G_1(c). Then c”∩ d_i=∅ for 1≤ i≤ k-2, |c”∩ d_k-1|=1 if k≥ 2, c”∩ d_k=∅, and c” is isotopic to d_k in Σ_n. By Lemma <ref> there exists g_2∈(θ) such that g_2([d_i])=[d_i] for all 1≤ i≤ k-1 and g_2([c”])=[d_k]. We set g=g_2∘ g_1. Then g∈(θ), g([d_i])=[d_i] for all 1≤ i≤ k-1 and g([c])=[d_k]. Let n be an even number, n≥6. Let c be a generic circle of Σ_n∖{x} such that c∩ d_i=∅ for all 1≤ i≤ n-3, |c∩ d_n-2|=1, c∩ d_n-1=∅, and c is isotopic to d_n-1 in Σ_n. Then we have one of the following two possibilities. (1) c is isotopic to d_n-1 in Σ_n∖{x}. (2) There exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-1 and g([c])=[d_n]. The surface Σ_n is a surface of genus n-2/2 with two boundary components ∂_1 and ∂_2. We assume that the circles d_1,…,d_n-1,d_n are arranged as in Figure <ref>. Let Ω be the surface obtained by cutting Σ_n along ⋃_i=1^n-1d_i. Then Ω has two connected components Ω_1 and Ω_2. Each of these components is a cylinder that we represent by a square with a hole in the middle as shown in Figure <ref>. Two opposite sides of each square represent arcs of d_n-2, one side represents an arc of d_n-1, and the last side represents a union of arcs of d_1,… ,d_n-3. The boundary of the hole represents ∂_1 for Ω_1 and ∂_2 for Ω_2. The puncture x sits inside Ω_2. The trace of the circle c in Ω is a simple arc ℓ, either in Ω_1 or in Ω_2. Suppose ℓ is in Ω_1. Let q be the intersection point of c with d_n-2. Then q is represented in Ω_1 by two points q_1 and q_2 on two opposite sides of Ω_1 as shown in Figure <ref>, and ℓ is a simple arc connecting q_1 with q_2. Up to isotopy pointwise fixing the boundary of Ω_1, there exist exactly two simple arcs in Ω_1 connecting q_1 to q_2 that are represented by the arcs ℓ_1 and ℓ_2 depicted in Figure <ref>. The arc ℓ cannot be isotopic to ℓ_1, otherwise c would not be isotopic to d_n-1 in Σ_n. So, ℓ is isotopic to ℓ_2 in Ω_1 which implies that c is isotopic to d_n-1 in Σ_n∖{x}. Now suppose ℓ is in Ω_2. Let q be the intersection point of c with d_n-2. Then q is represented in Ω_2 by two points q_3 and q_4 on two opposite sides of Ω_2 as shown in Figure <ref>, and ℓ is a simple arc connecting q_3 with q_4. Up to isotopy (in Ω_2 and not in Ω_2∖{x}) pointwise fixing the boundary of Ω_2, there exist exactly two simple arcs in Ω_2 connecting q_3 to q_4 that are represented by the arcs ℓ_3 and ℓ_4 depicted in Figure <ref>. The arc ℓ cannot be isotopic to ℓ_3 in Ω_2, otherwise c would not be isotopic to d_n-1 in Σ_n. So, ℓ is isotopic to ℓ_4 in Ω_2. Let {F_t:Ω_2→Ω_2}_t∈[0,1] be an isotopy such that F_0=𝕀, F_1(ℓ)=ℓ_4 and F_t is the identity on the boundary of Ω_2 for all t∈[0,1]. The arc ℓ_4 divides Ω_2 into two parts, the lower one which does not contain the hole bordered by ∂_2 and the puncture x, and the upper one which contains the hole bordered by ∂_2 and the puncture x, as shown in Figure <ref>. Suppose F_1(x) is in the upper part. Let C be the domain of Ω_2 bounded by ℓ_4, two arcs of d_n-2 and an arc of d_n-1 as shown in Figure <ref>. Let C'=F_1^-1(C). Then C' is a domain of Ω_2 bounded by ℓ, two arcs of d_n-2 and an arc of d_n-1 and C' does not contain the puncture x. 
The existence of such a domain implies that c is isotopic to d_n-1 in Σ_n∖{x}. Now, suppose F_1(x) is in the lower part. We can assume without loss of generality that the trace of d_n on Ω_2 is the simple arc ℓ_5 drawn in Figure <ref>. We can choose an isotopy {F_t':Ω_2→Ω_2}_t∈[0,1] such that F_0'=𝕀, F_1'(ℓ_4)= ℓ_5, F_t' is the identity on the boundary of Ω_2 for all t∈ [0,1], and F_1'(F_1(x))=x. Let F̃:Σ_n→Σ_n be the homeomorphism which is F_1'∘ F_1 on Ω_2 and is the identity outside Ω_2, and let g∈(Σ_n,x) be the mapping class represented by F̃. Then g∈(θ), g([d_i])=[d_i] for all 1≤ i≤ n-1, and g([c])=[d_n]. Let n be an even number, n≥6. Let c be a generic circle of Σ_n∖{x} such that c∩ d_i=∅ for all 1≤ i ≤ n-3, |c∩ d_n-2|=1, and c is isotopic to d_n-1 in Σ_n. Then there exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-2, and either g([c]) =[d_n-1] or g([c])=[d_n]. We can assume that |c∩ d_n-1|=i([c],[d_n-1]) and |c∩ d_n|=i([c],[d_n]). We argue by induction on |c∩ d_n-1|+|c∩ d_n|=i([c],[d_n-1])+i([c],[d_n]). The case |c∩ d_n-1|=0 follows directly from Lemma <ref>, and the case |c∩ d_n|=0 is proved in the same way by replacing d_n-1 with d_n. So we can assume that i([c],[d_n-1])=|c∩ d_n-1|≥ 1, i([c],[d_n])=|c∩ d_n|≥1, and that the induction hypothesis holds. Note that now c and d_n-1 cannot be isotopic in Σ_n∖{x}. Since c and d_n-1 are isotopic in Σ_n, there exists a bigon D in Σ_n cobounded by an arc of d_n-1 and an arc of c (see Figure <ref>). Since c and d_n-1 are not isotopic in Σ_n∖{x}, this bigon necessarily contains the puncture x. We can choose D to be minimal in the sense that its interior does not intersect c and d_n-1. Moreover, up to exchanging the roles of d_n-1 and d_n if necessary, we can also suppose that d_n does not intersect the interior of D. Clearly, D does not intersect d_i for any 1≤ i≤ n-3 and, up to replacing c with an isotopic circle, we can assume that D does not intersect d_n-2 either. Let c' be a circle parallel to c except in the bigon D where it follows the arc of d_n-1 which borders D as illustrated in Figure <ref>. We have c'∩ d_i=∅ for all 1≤ i≤ n-3, |c'∩ d_n-2|=1 and c' is isotopic to d_n-1 in Σ_n. We also have i([c'],[d_n-1])<i([c],[d_n-1]) and i([c'],[d_n])≤ i([c],[d_n]), hence by the induction hypothesis there exists g_1∈(θ) such that g_1([d_i])=[d_i] for all 1≤ i≤ n-2, and either g_1([c'])=[d_n-1] or g_1([c'])=[d_n]. Without loss of generality we can assume that g_1([c'])=[d_n-1]. We choose G_1∈^+(Σ_n,x) which represents g_1 such that G_1(d_i)=d_i for all 1≤ i≤ n-2 and G_1(c')=d_n-1. We set c”=G_1(c). Then c”∩ d_i=∅ for all 1≤ i≤ n-3, |c”∩ d_n-2|=1, c”∩ d_n-1=∅, and c” is isotopic to d_n-1 in Σ_n. By Lemma <ref> there exists g_2∈(θ) such that g_2([d_i])=[d_i] for all 1≤ i≤ n-2, and either g_2([c”])=[d_n-1] or g_2([c”])=[d_n]. We set g=g_2∘ g_1. Then g∈(θ), g([d_i])=[d_i] for all 1≤ i≤ n-2, and either g([c])=[d_n-1] or g([c])=[d_n]. Let n≥6. Let c_1,…,c_n-1 be generic circles in Σ_n∖{x} such that (a) |c_i∩ c_j|=1 if |i-j|=1 and |c_i∩ c_j|=0 if |i-j|≥2, for all 1≤ i,j≤ n-1, (b) c_i is isotopic to d_i in Σ_n for all 1≤ i≤ n-1. Then: (1) If n is odd, then there exists g∈(θ) such that g([c_i])=[d_i] for all 1≤ i≤ n-1. (2) If n is even, then there exists g∈(θ) such that g([c_i])=[d_i] for all 1≤ i≤ n-2, and either g([c_n-1])=[d_n-1] or g([c_n-1])=[d_n]. We prove Part (2). Part (1) can be proved in the same way. For 1≤ k≤ n-2 we construct by induction on k an element g_k∈(θ) such that g_k([c_i])=[d_i] for all 1≤ i≤ k. Assume k=1. 
Then, by Lemma <ref> applied to k=1, there exists g_1∈(θ) such that g_1([c_1])=[d_1]. Suppose 2≤ k≤ n-1 and g_k-1 is constructed. We choose G_k-1∈^+(Σ_n,x) which represents g_k-1 and such that G_k-1(c_i)=d_i for all 1≤ i≤ k-1, and we set c_k'=G_k-1(c_k). Note that, since g_k-1∈(θ), the circle c_k' is isotopic to c_k in Σ_n. Then, by Lemma <ref>, there exists h_k∈(θ) such that h_k([d_i])=[d_i] for all 1≤ i≤ k-1 and h_k([c_k'])=[d_k]. We set g_k=h_k∘ g_k-1. Then g_k([c_i])=[d_i] for all 1≤ i≤ k. Note that when n is odd we can extend the induction to k=n-1 and conclude the proof here by setting g=g_n-1. The case where n is even requires an extra argument. We choose G_n-2∈^+(Σ_n,x) which represents g_n-2 and such that G_n-2(c_i)=d_i for all 1≤ i≤ n-2, and we set c_n-1'=G_n-2(c_n-1). Again, since g_n-2∈(θ), the circle c_n-1' is isotopic to c_n-1 in Σ_n. By Lemma <ref> there exists h_n-1∈(θ) such that h_n-1([d_i])=[d_i] for all 1≤ i≤ n-2, and either h_n-1([c_n-1'])=[d_n-1] or h_n-1([c_n-1'])=[d_n]. We set g=h_n-1∘ g_n-2. Then g([c_i])=[d_i] for all 1≤ i≤ n-2, and either g([c_n-1])=[d_n-1] or g([c_n-1])=[d_n]. Let φ:A[A_n-1]→ A[D_n] be a homomorphism. We know by Theorem <ref> that we have one of the following possibilities. * π∘φ is cyclic. * There exist ψ∈⟨χ̅⟩ and p∈ such that π∘φ is conjugate to ψ∘γ̅_p. By Lemma <ref>, if π∘φ is cyclic, then φ is cyclic. So, we can assume that there exists ψ∈⟨χ̅⟩ and p∈ such that π∘φ is conjugate to ψ∘γ̅_p. Up to conjugating and composing φ on the left by χ if necessary, we can assume that π∘φ=γ̅_p, that is, (π∘φ)(s_i)=s_iΔ_A^2p, where Δ_A denotes the Garside element of A[A_n-1]. Set U=ρ_A(Δ_A^2). If n is odd, then U^2=T_∂, where ∂ is the boundary component of Σ_n, and if n is even, then U=T_∂_1T_∂_2, where ∂_1 and ∂_2 are the two boundary components of Σ_n (see Labruère–Paris <cit.>). In particular U^2∈ Z((Σ_n)) in both cases. By Theorem <ref> we know that there exist generic circles c_1,…,c_n-1 in Σ_n∖{x}, ε∈{± 1} and f_0∈(Σ_n,x) such that (a) |c_i∩ c_j|=1 if |i-j|=1 and |c_i∩ c_j|=0 if |i-j|≥2, for all 1≤ i,j≤ n-1, (b) f_0 commutes with T_c_i for all 1≤ i≤ n-1, (c) (ρ_D∘φ)(s_i)=T_c_i^εf_0 for all 1≤ i≤ n-1. For 1≤ i≤ n-1 we denote by b_i the circle in Σ_n obtained by composing c_i:^1→Σ_n∖{x} with the embedding Σ_n∖{x}↪Σ_n. In addition we set g_0=θ(f_0). Then (θ∘ρ_D∘φ)(s_i)=T_b_i^ε g_0 for all 1≤ i≤ n-1. Note that, since θ∘ρ_D=ρ_A∘π, we also have (θ∘ρ_D∘φ)(s_i)=T_a_iU^p for all 1≤ i ≤ n-1. Claim 1. We have ε=1, g_0=U^p and b_i is isotopic to a_i in Σ_n for all 1≤ i≤ n-1. Proof of Claim 1. Since g_0^2 commutes with T_b_i^2εg_0^2=T_a_i^2U^2p and U^2∈ Z((Σ_n)), g_0^2 commutes with T_a_i^2 for all 1≤ i≤ n-1. By lemma <ref> it follows that g_0^4∈ Z((Σ_n)). By Proposition <ref> we deduce that (T_a_i^4U^4p)=(T_b_i^4εg_0^4)={[a_i]}={[b_i]}, hence [a_i]=[b_i] for all 1≤ i≤ n-1. Then T_a_i^4-4ε=U^-4pg_0^4, hence, by Proposition <ref>, 4-4ε=0, and therefore ε=1. Finally, from the equality T_a_iU^p=T_a_ig_0 it follows that g_0=U^p. This finishes the proof of Claim 1. From Claim 1 it follows that c_i is isotopic to d_i in Σ_n hence, by Lemma <ref>, there exists g∈(θ) such that g([c_i]) =[d_i] for all 1≤ i≤ n-2, g([c_n-1])=[d_n-1] if n is odd, and either g ([c_n-1])=[d_n-1] or g([c_n-1])=[d_n] if n is even. These equalities imply that gT_c_ig^-1=T_d_i for 1≤ i≤ n-2, gT_c_n-1g^-1=T_d_n-1 if n is odd, and either gT_c_n-1g^-1=T_d_n-1 or gT_c_n-1g^-1=T_d_n if n is even. By Theorem <ref> (1) there exists v∈(π) such that ρ_D(v)=g. 
So, up to composing φ on the left by _v first, and composing on the left by ζ if necessary after, we can assume that (ρ_D∘φ)(s_i)=T_d_if_0 for all 1≤ i≤ n-1, where f_0 commutes with T_d_i for all 1≤ i≤ n-1. Since T_d_1=ρ_D(t_1)∈(ρ_D), we have f_0∈(ρ_D), hence there exists u_0∈ A[D_n] such that ρ_D(u_0)=f_0. Since ρ_D is injective (see Theorem <ref>), we deduce that φ(s_i)=t_iu_0 for all 1≤ i≤ n-1 and u_0 commutes with t_i for all 1≤ i≤ n-1. We set Y={t_1,…,t_n-1}, Δ_Y=Δ_Y[D_n], Δ_D=Δ[D_n], κ=2 if n is odd, and κ=1 if n is even. By Paris <cit.> the centralizer of Y in A[D_n] is generated by Δ_Y^2 and Δ_D^κ, hence there exists q,r∈ such that u_0=Δ_Y^2qΔ_D^κ r. We conclude that φ=β_q,r. § ENDOMORPHISMS OF A[D_N] The following is a preliminary to the proof of Theorem <ref>. Let n be an odd number, n≥ 7. Let c be a generic circle of Σ_n∖{x} such that c∩ d_i=∅ for 1≤ i ≤ n-3, |c∩ d_n-2|=1, c∩ d_n-1=∅, and c is isotopic to d_n-1 in Σ_n. Then we have one of the following three possibilities. (1) c is isotopic to d_n-1 in Σ_n∖{x}. (2) There exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-1 and g([c])=[d_n]. (3) There exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-2, g([d_n-1])=[d_n] and g([c])=[d_n-1]. The surface Σ_n is a surface of genus n-1/2 with one boundary component, ∂. We assume that the circles d_1,…,d_n-1,d_n are arranged as shown in Figure <ref>. The circles d_n-3 and d_n-1 divide d_n-2 into two arcs, e_1 and e_2, where the arc e_1 intersects d_n and the arc e_2 does not intersect d_n (see Figure <ref>). Let Ω be the surface obtained by cutting Σ_n along ⋃_i=1^n-1d_i. Then Ω is a cylinder represented by an octagon with a hole in the middle (see Figure <ref>). Two opposite sides of this octagon represent arcs of d_n-1 and two opposite sides represent arcs of d_1,…,d_n-3, as shown in the figure. Two other sides represent arcs of e_1 and the last two sides represent arcs of e_2, arranged as shown in Figure <ref>. The boundary of the hole represents ∂. The circle c intersects d_n-2 in a point q, and q is either on the arc e_1 or on the arc e_2. Suppose first that q is on the arc e_1. Then q is represented on Ω by two points q_1 and q_2 lying on two different sides of Ω that represent e_1, and the trace of c in Ω is a simple arc ℓ connecting q_1 to q_2. Up to isotopy (in Ω and not in Ω∖{x}) pointwise fixing the bounfary of Ω, there are exactly two simple arcs in Ω connecting q_1 to q_2 represented by the arcs ℓ_1 and ℓ_2 depicted in Figure <ref>. The arc ℓ cannot be isotopic to ℓ_2, otherwise c would not be isotopic to d_n-1 in Σ_n. So, ℓ is isotopic to ℓ_1 in Ω. Let {F_t:Ω→Ω}_t∈[0,1] be an isotopy such that F_0=𝕀, F_1(ℓ)=ℓ_1 and F_t is the identity on the boundary of Ω for all t∈ [0,1]. The arc ℓ_1 divides Ω into two parts, the lower one which does not contain the hole bounded by ∂ and the puncture x, and the upper one which contains the hole bounded by ∂ and the puncture x, as shown in Figure <ref>. Suppose F_1(x) is in the upper part. Let C be the domain of Ω bounded by ℓ_1, two arcs of e_1 and an arc of d_n-1 as shown in Figure <ref>. Let C'=F_1^-1(C). Then C' is a domain of Ω bounded by ℓ, two arcs of e_1 and an arc of d_n-1 which does not contain the puncture x. The existence of such a domain implies that c is isotopic to d_n-1 in Σ_n∖{x}. Suppose F_1(x) is in the lower part. We can suppose that the trace of d_n on Ω is the arc ℓ_3 depicted in Figure <ref>. 
We can choose an isotopy {F'_t:Ω→Ω}_t∈[0,1] such that F'_0=𝕀, F_1'(ℓ_1)= ℓ_3, F_t' is the indentity on the boundary of Ω for all t∈ [0,1], and F_1'(F_1(x))=x. Let F̃:Σ_n→Σ_n be the homeomorphism which is F_1'∘ F_1 on Ω and is the identity outside Ω, and let g∈(Σ_n,x) be the mapping class represented by F̃. Then g∈(θ), g([d_i])=[d_i] for all 1≤ i≤ n-1, and g([c])=[d_n]. Suppose now that q is on the arc e_2. Then q is represented on Ω by two points q_3 and q_4 lying on two different sides of Ω which represent e_2, and the trace of c in Ω is a simple arc ℓ connecting q_3 to q_4. Up to isotopy (in Ω and not in Ω∖{x}) pointwise fixing the boundary of Ω, there are exactly two simple arcs in Ω connecting q_3 to q_4 represented by the arcs ℓ_4 and ℓ_5 depicted in Figure <ref>. The arc ℓ cannot be isotopic to ℓ_5, otherwise c would not be isotopic to d_n-1 in Σ_n. So, ℓ is isotopic to ℓ_4 in Ω. Let {F_t:Ω→Ω}_t∈[0,1] be an isotopy such that F_0=𝕀, F_1(ℓ)=ℓ_4 and F_t is the identity on the boundary of Ω for all t∈ [0,1]. The arc ℓ_4 divides Ω into two parts, the upper one which does not contain the hole bounded by ∂ and the puncture x, and the lower one which contains the hole bounded by ∂ and the puncture x, as shown in Figure <ref>. Suppose F_1(x) is in the lower part. Let D be the domain of Ω bounded by ℓ_4, two arcs of e_2 and an arc of d_n-1 as shown in Figure <ref>. Let D'=F_1^-1(D). Then D' is a domain of Ω bounded by ℓ, two arcs of e_2 and an arc of d_n-1 which does not contain the puncture x. The existence of such a domain implies that c is isotopic to d_n-1 in Σ_n∖{x}. Suppose F_1(x) is in the upper part. Let c' be the circle drawn in Figure <ref>. We can assume that the trace of c' on Ω is the arc ℓ_6 drawn in Figure <ref>. We can choose an isotopy {F'_t:Ω→Ω}_t∈[0,1] such that F'_0=𝕀, F_1'(ℓ_4)= ℓ_6, F_t' is the identity on the boundary of Ω for all t∈ [0,1], and F_1'(F_1(x))=x. Let F̃:Σ_n→Σ_n be the homeomorphism which is F_1'∘ F_1 on Ω and is the identity outside Ω, and let g_1∈(Σ_n,x) be the mapping class represented by F̃. Then g_1∈(θ), g_1([d_i])=[d_i] for all 1≤ i≤ n-1, and g_1([c])=[c']. Let g_2∈π_1(Σ_n,x)=(θ) be the element represented by the loop μ drawn in Figure <ref>. We have g_2([d_i])=[d_i] for all 1≤ i≤ n-2, g_2([d_n-1])=[d_n] and g_2([c'])=[d_n-1]. Set g=g_2∘ g_1. Then g∈(θ), g([d_i])=[d_i] for all 1≤ i≤ n-2, g([d_n-1])=[d_n] and g([c])=[d_n-1]. Let n≥6. Let φ:A[D_n]→ A[D_n] be an endomorphism. We know from Theorem <ref> that we have one of the following two possibilities up to conjugation. (1) φ∘ι is cyclic. (2) There exist ψ∈⟨ζ,χ⟩ and p,q∈ such that φ∘ι=ψ∘β_p,q. Suppose φ∘ι is cyclic. Then there exists u∈ A[D_n] such that φ(t_i)=(φ∘ι)(s_i)=u for all 1≤ i≤ n-1. We also have φ(t_n)=φ(t_n-2t_nt_n-2t_n^-1t_n-2^-1)=φ(t_n-2t_n ) φ(t_n-2) φ(t_n^-1t_n-2^-1)= φ(t_n-2t_n) φ(t_1) φ(t_n^-1t_n-2^-1)= φ(t_1)=u , hence φ is cyclic. So, we can assume that there exist ψ∈⟨ζ,χ⟩ and p,q∈ such that φ∘ι is conjugate to ψ∘β_p,q. We set Y={t_1,…,t_n-2,t_n-1}, Δ_Y=Δ_Y[D_n], Δ_D=Δ[D_n], κ=2 if n is odd, and κ=1 if n is even. Up to conjugating and composing φ on the left by ζ if necessary, we can assume that there exist ε∈{± 1} and p,q∈ such that φ(t_i)=(φ∘ι)(s_i)=t_i^εΔ_Y^2pΔ_D^κ q for all 1≤ i≤ n-1. The remainder of the proof is divided into four cases depending on whether p is zero or not and whether n is even or odd. Case 1: n is even and p≠ 0. Then Σ_n is a surface of genus n-2/2 with two boundary components, ∂_1 and ∂_2, and κ=1. 
We have ρ_D(t_i)=T_d_i for 1≤ i≤ n-1 and, by Labruère–Paris <cit.>, ρ_D(Δ_Y^2)=T_eT_∂_1 and ρ_D(Δ_D)=T_∂_1T_∂_2, where e is the circle drawn in Figure <ref>. Set f_i=(ρ_D∘φ)(t_i) for all 1≤ i≤ n. Then, by the above, f_i=T_d_i^ε T_e^pT_∂_1^p+qT_∂_2^q for all 1≤ i≤ n-1 . In particular, (f_i)={[d_i],[e]} for all 1≤ i≤ n-1. Since t_n is conjugate in A[D_n] to t_1, f_n is conjugate to f_1 in (Σ_n,x), hence f_n is of the form f_n=T_d'^ε T_e'^p T_∂_1^p+qT_∂_2^q, where d' is a non-separating circle and e' is a circle that separates Σ_n into two components, one being a cylinder containing x and the other being a surface of genus n-2/2 with two boundary components, ∂_1 and e', which does not contain x. Moreover, by Theorem <ref>, (π∘φ)(t_n-1)=(π∘φ)(t_n), hence θ(f_n-1)=θ(f_n). This implies that d' is isotopic to d_n-1 in Σ_n. We have f_1f_n=f_nf_1, hence, by Proposition <ref> (3) we have f_n((f_1))=(f_1), thus [e] is a reduction class for f_n, and therefore i([e],[e'])=0, because [e'] is an essential reduction class for f_n. This combined with the fact that e' and ∂_1 cobound a subsurface of genus n-2/2 with two boundary components which does not contain x implies that [e]=[e']. So, we can assume that e=e'. Then the fact that d' does not intersect e'=e and d' is isotopic to d_n-1 in Σ_n implies that d' is isotopic to d_n-1 in Σ_n∖{x}, hence we can also assume that d'=d_n-1. In conclusion we have (ρ_D∘φ)(t_n-1)=(ρ_D∘φ)(t_n)= T_d_n-1^ε T_e^pT_∂_1^p+qT_∂_2^q, hence φ(t_n-1)=φ(t_n)=t_n-1^εΔ_Y^2pΔ_D^q. We conclude that φ=β_p,q∘π if ε=1 and φ=χ∘β_-p,-q∘π if ε=-1. Case 2: n is odd and p≠ 0. Then Σ_n is a surface of genus n-1/2 with one boundary component, ∂, and κ=2. We have ρ_D(t_i)=T_d_i for 1≤ i≤ n-1 and, by Labruère–Paris <cit.>, ρ_D(Δ_Y^4)=T_e and ρ_D(Δ_D^2)=T_∂, where e is the circle drawn in Figure <ref>. Set f_i=(ρ_D∘φ)(t_i) for all 1≤ i≤ n. Then, by the above, f_i^2=T_d_i^2εT_e^pT_∂^2q for all 1≤ i≤ n-1 . In particular, (f_i)=(f_i^2)={[d_i],[e]} for all 1≤ i≤ n-1. The element t_n is conjugate to t_1 in A[D_n], hence φ(t_n) is conjugate to φ(t_1) in A[D_n], and therefore there exists v∈ A[D_n] such that φ(t_n)=v φ(t_1) v^-1=(vt_1v^-1)(vΔ_Y^2pv^-1)Δ^2q. The element ρ_D(vt_1v^-1) is conjugate to ρ_D(t_1)=T_d_1, hence ρ_D(vt_1v^-1)=T_d', where d' is a non-separating circle. The element ρ_D(vΔ_Y^4v^-1) is conjugate to ρ_D(Δ_Y^4)=T_e, hence ρ_D(vΔ_Y^4v^-1)=T_e', where e' is a circle that separates Σ_n into two components, one being a cylinder containing x and the other being a surface of genus n-1/2 with one boundary component, e', which does not contain x. We also have f_n^2=T_d'^2T_e'^pT_∂^2q and (f_n)=(f_n^2)={[d'],[e']}. By Theorem <ref> (π∘φ)(t_n-1)=(π∘φ)(t_n), hence θ(f_n-1^2)=θ(f_n^2). This implies that d' is isotopic to d_n-1 in Σ_n. Since f_1f_n=f_nf_1, by Proposition <ref> (3) we have f_n^2((f_1))=(f_1), hence [e] is a reduction class for f_n^2, and therefore i([e],[e'])=0, because [e'] is an essential reduction class for f_n^2. This combined with the fact that e' bounds a subsurface of genus n-1/2 with one boundary component which does not contain x implies that [e]=[e']. So, we can assume that e=e'. In particular, since ρ_D is injective, we have vΔ_Y^4v^-1=Δ_Y^4. Since d' does not intersect e'=e and d' is isotopic to d_n-1 in Σ_n, d' is isotopic to d_n-1 in Σ_n∖{x}, hence we can also assume that d'=d_n-1. Since ρ_D is injective, this last equality implies that vt_1v^-1=t_n-1. At this level of the proof we have that φ(t_n)=t_n-1^ε(vΔ_Y^2pv^-1)Δ_D^2q and (vΔ_Y^2pv^-1)^2=vΔ_Y^4pv^-1=Δ_Y^4p. 
It remains to show that vΔ_Y^2pv^-1=Δ_Y^2p. By Theorem <ref> there exists ψ∈⟨ζ,χ⟩ and r,s∈ such that φ∘ζ∘ι is conjugate to ψ∘β_r,s. The automorphism ζ is inner since n is odd, hence we can assume that ψ∈⟨χ⟩. So, there exist w∈ A[D_n], μ∈{± 1} and r,s∈ such that φ(t_i)=wt_i^μΔ_Y^2rΔ_D^2sw^-1 for all 1≤ i≤ n-2 and φ(t_n)= wt_n-1^μΔ_Y^2rΔ_D^2sw^-1. Set g=ρ_D(w). We have (ρ_D∘φ)(t_i^2)=T_d_i^2T_e^pT_∂^2q=gT_d_i^2T_e^rT_∂^2s g^-1 for all 1≤ i≤ n-2 and (ρ_D∘φ)(t_n^2)=T_d_n-1^2T_e^pT_∂^2q=gT_d_n-1^2T_e^rT_∂^2sg^-1. So, g^-1((T_d_i^2T_e^pT_∂^2q))=(T_d_i^2T_e^rT_∂^2s), hence g^-1({[d_i],[e]})⊂{[d_i],[e]} for all 1≤ i≤ n-1. This implies g^-1([d_i])=[d_i], hence g commutes with T_d_i, and therefore w commutes with t_i for all 1≤ i ≤ n-1. Since Δ_Y is in the subgroup of A[D_n] generated by Y={t_1,…,t_n-1} and Δ_D^2 is central, it follows that φ(t_i)=t_i^μΔ_Y^2rΔ_D^2s for all 1≤ i≤ n-2 and φ(t_n)=t_n-1^μΔ_Y^2rΔ_D^2s. From the equality φ(t_1)=t_1^εΔ_Y^2pΔ_D^2q=t_1^μΔ_Y^2rΔ_D^2s it easily follows that μ=ε, r=p and s=q, hence φ(t_n)=t_n-1^εΔ_Y^2pΔ_D^2q. We conclude that φ=β_p,q∘π if ε=1 and φ=χ∘β_-p,-q∘π if ε=-1. Case 3: n is even and p=0. Then, again, Σ_n is a surface of genus n-2/2 with two boundary components, ∂_1 and ∂_2, and κ=1. We have ρ_D(t_i)=T_d_i for 1≤ i ≤ n-1 and, by Labruère–Paris <cit.>, ρ_D(Δ_D)= T_∂_1T_∂_2. Set f_i=(ρ_D∘φ)(t_i) for all 1≤ i≤ n. Then, by the above, f_i=T_d_i^ε T_∂_1^qT_∂_2^q for all 1≤ i≤ n-1 . In particular, (f_i)={[d_i]} for all 1≤ i≤ n-1. Since t_n is conjugate in A[D_n] to t_1, f_n is of the form f_n=T_d'^ε T_∂_1^qT_∂_2 ^q where d' is a non-separating circle. For 1≤ i≤ n-3 we have t_it_n=t_nt_i, hence T_d_iT_d'=T_d'T_d_i, and therefore, by Proposition <ref>, i([d_i],[d'])=0. Similarly, we have i([d_n-1],[d'])=0. Since t_n-2t_nt_n-2=t_nt_n-2t_n, we have T_d_n-2T_d'T_d_n-2 =T_d'T_d_n-2T_d', hence, by Proposition <ref>, i([d_n-2],[d'])=1. So we can assume that d_i∩ d'=∅ for 1≤ i≤ n-3, d_n-1∩ d'=∅ and |d_n-2∩ d'|=1. Moreover, by Theorem <ref>, (π∘φ)(t_n-1)=(π∘φ)(t_n), hence θ(f_n-1)=θ(f_n), and therefore d' is isotopic to d_n-1 in Σ_n. By lemma <ref> it follows that we have one of the following two possibilities. (1) d' is isotopic to d_n-1 in Σ_n∖{x}. (2) There exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-1 and g([d'])=[d_n]. Suppose d' is isotopic to d_n-1 in Σ_n∖{x}. Then (ρ_D∘φ)(t_n)=T_d_n-1^ε T_∂_1^qT_∂_2^q, hence, since ρ_D is injective, φ(t_n)=t_n-1^εΔ_D^q. We conclude that φ=β_0,q∘π if ε=1 and φ=χ∘β_0,-q∘π if ε=-1. Suppose there exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-1 and g([d'])=[d_n]. We have (ρ_D∘φ)(t_i)= T_d_i^ε T_∂_1^qT_∂_2^q= g^-1T_d_i^ε T_∂_1^qT_∂_2^qg for all 1≤ i≤ n-1 and (ρ_D∘φ)(t_n)= T_d'^ε T_∂_1^qT_∂_2^q= g^-1T_d_n^ε T_∂_1^qT_∂_2^qg . By Theorem <ref> there exists v∈(π)⊂ A[D_n] such that ρ_D(v)=g. Since, ρ_D is injective it follows that φ(t_i)=v^-1t_i^εΔ_D^qv for all 1≤ i≤ n . We conclude that φ=_v^-1∘γ_q if ε=1 and φ=_v^-1∘χ∘γ_-q if ε=-1. Case 4: n is odd and p=0. Then, again, Σ_n is a surface of genus n-1/2 with one boundary component, ∂, and κ=2. We have ρ_D(t_i)=T_d_i for 1≤ i≤ n-1 and, by Labruère–Paris <cit.>, ρ_D(Δ_D^2)=T_∂. Set f_i=(ρ_D∘φ)(t_i) for all 1≤ i≤ n. Then, by the above, f_i=T_d_i^ε T_∂^q for all 1≤ i≤ n-1 . In particular, (f_i)={[d_i]} for all 1≤ i≤ n-1. Since t_n is conjugate in A[D_n] to t_1, f_n is conjugate to f_1 in (Σ_n,x), hence f_n is of the form f_n=T_d'^ε T_∂^q where d' is a non-separating circle. 
For 1≤ i≤ n-3 we have t_it_n=t_nt_i, hence T_d_iT_d'=T_d'T_d_i, and therefore, by Proposition <ref>, i([d_i],[d'])=0. Similarly, we have i([d_n-1],[d'])=0. Since t_n-2t_nt_n-2=t_nt_n-2t_n, we have T_d_n-2T_d'T_d_n-2=T_d'T_d_n-2T_d', hence, by Proposition <ref>, i([d_n-2],[d'])=1. So, we can assume that d_i∩ d'=∅ for 1≤ i ≤ n-3, d_n-1∩ d'=∅ and |d_n-2∩ d'|=1. Moreover, by Theorem <ref>, (π∘φ)(t_n-1)=(π∘φ)(t_n), hence θ(f_n-1)=θ(f_n), and therefore d' is isotopic to d_n-1 in Σ_n. By lemma <ref> it follows that we have one of the following three possibilities. (1) d' is isotopic to d_n-1 in Σ_n∖{x}. (2) There exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-1 and g([d'])=[d_n]. (3) There exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-2, g([d_n-1])=[d_n] and g([d'])=[d_n-1]. If d' is isotopic to d_n-1 in Σ_n∖{x}, then we prove as in the case where n is even that φ=β_0,q∘π if ε=1 and φ=χ∘β_0,-q∘π if ε=-1. Similarly, if there exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-1 and g([d'])=[d_n], then we prove as in the case where n is even that φ=_v^-1∘γ_q if ε=1 and φ=_v^-1∘χ∘γ_-q if ε=-1, where v is an element of (π)⊂ A[D_n]. Suppose there exists g∈(θ) such that g([d_i])=[d_i] for all 1≤ i≤ n-2, g([d_n -1])=[d_n] and g([d'])=[d_n-1]. We have (ρ_D∘φ)(t_i)=T_d_i^ε T_∂^q=g^-1T_d_i^ε T_∂^qg for 1≤ i≤ n-2 , (ρ_D∘φ)(t_n-1)=T_d_n-1^ε T_∂^q=g^-1T_d_n^ε T_∂^qg , (ρ_D∘φ)(t_n)=T_d'^ε T_∂^q=g^-1T_d_n-1^ε T_∂^qg . By Theorem <ref> there exists v∈(π)⊂ A[D_n] such that ρ_D(v)=g. Since ρ_D is injective it follows that φ(t_i)=v^-1t_i^εΔ_D^2qv for 1≤ i≤ n-2 , φ(t_n-1)=v^-1t_n^εΔ_D^2qv , φ(t_n)=v^-1 t_n-1^εΔ_D^2qv . We conclude that φ=_v^-1∘ζ∘γ_q if ε=1 and φ=_v^-1∘ζ∘χ∘γ_-q if ε=-1. BelMar1 R W Bell, D Margalit, Injections of Artin groups, Comment. Math. Helv. 82 (2007), no. 4, 725–751. Birma1 J S Birman, Mapping class groups and their relationship to braid groups, Comm. Pure Appl. Math. 22 (1969), 213–238. BiLuMc1 J S Birman, A Lubotzky, J McCarthy, Abelian and solvable subgroups of the mapping class groups, Duke Math. J. 50 (1983), no. 4, 1107–1120. BirHil1 J S Birman, H M Hilden, On isotopies of homeomorphisms of Riemann surfaces, Ann. of Math. (2) 97 (1973), 424–439. Bourb1 N Bourbaki, Éléments de mathématique. Fasc. XXXIV. Groupes et algèbres de Lie. Chapitre IV: Groupes de Coxeter et systèmes de Tits. Chapitre V: Groupes engendrés par des réflexions. Chapitre VI: systèmes de racines, Actualités Scientifiques et Industrielles No 1337. Hermann, Paris, 1968. BrChVo1 C Bregman, R Charney, K Vogtmann, Outer space for RAAGs, Duke Math. J. 172 (2023), no. 6, 1033–1108. Bries2 E Brieskorn, Die Fundamentalgruppe des Raumes der regulären Orbits einer endlichen komplexen Spiegelungsgruppe, Invent. Math. 12 (1971), 57–61. Bries1 E Brieskorn, Sur les groupes de tresses [d'après V. I. Arnol'd], Séminaire Bourbaki, 24ème année (1971/1972), Exp. No. 401, pp. 21–44. Lecture Notes in Math., Vol. 317, Springer, Berlin, 1973. BriSai1 E Brieskorn, K Saito, Artin-Gruppen und Coxeter-Gruppen, Invent. Math. 17 (1972), 245–271. Caste2 F Castel, Représentations géométriques des groupes de tresses, Ph. D. Thesis, Université de Bourgogne, 2009. Caste1 F Castel, Geometric representations of the braid groups, Astérisque No. 378 (2016). ChaCri1 R Charney, J Crisp, Automorphism groups of some affine and finite type Artin groups, Math. Res. Lett. 12 (2005), no. 2-3, 321–333. ChaVog1 R Charney, K Vogtmann, Finiteness properties of automorphism groups of right-angled Artin groups, Bull. Lond. Math. Soc. 41 (2009), no. 
1, 94–102. ChaVog2 R Charney, K Vogtmann, Subgroups and quotients of automorphism groups of RAAGs, Low-dimensional and symplectic topology, 9–27, Proc. Sympos. Pure Math., 82, Amer. Math. Soc., Providence, RI, 2011. ChKoMa1 L Chen, K Kordek, D Margalit, Homomorphisms between braid groups, preprint, arXiv:1910.00712. Coxet2 H S M Coxeter, Discrete groups generated by reflections, Ann. of Math. (2) 35 (1934), no. 3, 588–621. Coxet1 H S M Coxeter, The complete enumeration of finite groups of the form R_i^2=(R_iR_j)^k_ij=1, J. London Math. Soc. 10 (1935), 21–25. Crisp1 J Crisp, Automorphisms and abstract commensurators of 2-dimensional Artin groups, Geom. Topol. 9 (2005), 1381–1441. CriPar1 J Crisp, L Paris, Artin groups of type B and D, Adv. Geom. 5 (2005), no. 4, 607–636. Day1 M B Day, Peak reduction and finite presentations for automorphism groups of right-angled Artin groups, Geom. Topol. 13 (2009), no. 2, 817–855. Day2 M B Day, On solvable subgroups of automorphism groups of right-angled Artin groups, Internat. J. Algebra Comput. 21 (2011), no. 1-2, 61–70. Delig1 P Deligne, Les immeubles des groupes de tresses généralisés, Invent. Math. 17 (1972), 273–302. DyeGro1 J L Dyer, E K Grossman, The automorphism groups of the braid groups, Amer. J. Math. 103 (1981), no. 6, 1151–1169. Epste1 D B A Epstein, Curves on 2-manifolds and isotopies, Acta Math. 115 (1966), 83–107. FarMar1 B Farb, D Margalit, A primer on mapping class groups, Princeton Mathematical Series, 49. Princeton University Press, Princeton, NJ, 2012. GiHoMeRa1 N D Gilbert, J Howie, V Metaftsis, E Raptis, Tree actions of automorphism groups, J. Group Theory 3 (2000), no. 2, 213–223. Harpe1 P de la Harpe, Topics in geometric group theory, Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 2000. LabPar1 C Labruère, L Paris, Presentations for the punctured mapping class groups in terms of Artin groups, Algebr. Geom. Topol. 1 (2001), 73–114. Laure1 M R Laurence, A generating set for the automorphism group of a graph group, J. London Math. Soc. (2) 52 (1995), no. 2, 318–334. Lek1 H van der Lek, The homotopy type of complex hyperplane complements, Ph. D. Thesis, Nijmegen, 1983. Paris2 L Paris, Parabolic subgroups of Artin groups, J. Algebra 196 (1997), no. 2, 369–399. Paris1 L Paris, Centralizers of parabolic subgroups of Artin groups of type A_l, B_l, and D_l, J. Algebra 196 (1997), no. 2, 400–435. Paris3 L Paris, Artin groups of spherical type up to isomorphism, J. Algebra 281 (2004), no. 2, 666–678. ParRol1 L Paris, D Rolfsen, Geometric subgroups of mapping class groups, J. Reine Angew. Math. 521 (2000), 47–83. PerVan1 B Perron, J P Vannier, Groupe de monodromie géométrique des singularités simples, Math. Ann. 306 (1996), no. 2, 231–245. Sorok1 I Soroko, Artin groups of types F_4 and H_4 are not commensurable with that of type D_4, Topology Appl. 300 (2021), Paper No. 107770, 15 pp. Tits1 J Tits, Le problème des mots dans les groupes de Coxeter, 1969 Symposia Mathematica (INDAM, Rome, 1967/68), Vol. 1 pp. 175–185 Academic Press, London.
http://arxiv.org/abs/2307.01220v1
20230702103929
ARHNet: Adaptive Region Harmonization for Lesion-aware Augmentation to Improve Segmentation Performance
[ "Jiayu Huo", "Yang Liu", "Xi Ouyang", "Alejandro Granados", "Sebastien Ourselin", "Rachel Sparks" ]
eess.IV
[ "eess.IV", "cs.CV" ]
ARHNet J. Huo et al. School of Biomedical Engineering and Imaging Sciences (BMEIS), King's College London, London, UK [email protected] Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China ARHNet: Adaptive Region Harmonization for Lesion-aware Augmentation to Improve Segmentation Performance Jiayu Huo1 Yang Liu1 Xi Ouyang2 Alejandro Granados1 Sébastien Ourselin1 Rachel Sparks1 August 1, 2023 ======================================================================================================= Accurately segmenting brain lesions in MRI scans is critical for providing patients with prognoses and neurological monitoring. However, the performance of CNN-based segmentation methods is constrained by the limited training set size. Advanced data augmentation is an effective strategy to improve the model's robustness. However, they often introduce intensity disparities between foreground and background areas and boundary artifacts, which weakens the effectiveness of such strategies. In this paper, we propose a foreground harmonization framework (ARHNet) to tackle intensity disparities and make synthetic images look more realistic. In particular, we propose an Adaptive Region Harmonization (ARH) module to dynamically align foreground feature maps to the background with an attention mechanism. We demonstrate the efficacy of our method in improving the segmentation performance using real and synthetic images. Experimental results on the ATLAS 2.0 dataset show that ARHNet outperforms other methods for image harmonization tasks, and boosts the down-stream segmentation performance. Our code is publicly available at <https://github.com/King-HAW/ARHNet>. § INTRODUCTION Accurate brain lesion segmentation is essential for understanding the prognoses of neurological disorders and quantifying affected brain areas by providing information on the location and shape of lesions <cit.>. With advanced deep learning techniques, various brain lesion segmentation methods based on Convolutional Neural Networks (CNNs) have been proposed <cit.>. However, a noteworthy hurdle is the prerequisite of an adequate number of training samples to ensure the model's generalization ability. Utilizing small-scale datasets for the segmentation model training can result in over-fitting, thereby limiting its robustness to unseen samples. Due to the variance of lesion appearance and size, as well as the extreme data imbalance between foreground and background voxels, many deep learning models also struggle to perform the small lesion segmentation task. To this end, some data augmentation techniques have been proposed that aim to increase the diversity of the training set, which helps to boost the performance of the segmentation model for unseen images <cit.>. Often data augmentation is realized by basic image transformations such as rotation and flipping. As the diversity of the data generated through basic image transformations is deficient, advanced data augmentation approaches have been developed. For instance, Huo  <cit.> designed a progressive generative framework to synthesize brain lesions that can be inserted into normal brain scans to create new training instances. Zhang  <cit.> proposed a lesion-aware data augmentation strategy to increase the sample diversity. However, these methods often inevitably introduce boundary artifacts that may cause the intensity distribution to shift, resulting in segmentation performance degradation <cit.>. 
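As a concrete illustration of why such composites can look implausible, the sketch below performs a naive Copy-Paste-style lesion transplant between two co-registered scans; the pasted voxels keep the donor's intensity statistics, so the new foreground may be visibly brighter or darker than its surroundings. The function name, the wrap-around placement via np.roll, and the binary-mask convention are illustrative assumptions, not the exact pipeline of any of the cited augmentation methods.

```python
import numpy as np

def copy_paste_lesion(target_img, donor_img, donor_mask, offset):
    """Transplant the lesion voxels of a donor scan into a target scan of the
    same shape, shifted by `offset` (one integer per axis).  Wrap-around
    placement via np.roll is used only to keep the sketch short."""
    aug_img = target_img.copy()
    aug_mask = np.zeros_like(donor_mask)
    axes = tuple(range(donor_mask.ndim))
    shifted_mask = np.roll(donor_mask, offset, axis=axes)
    shifted_img = np.roll(donor_img, offset, axis=axes)
    # paste donor intensities: no attempt is made to match the target's statistics
    aug_img[shifted_mask > 0] = shifted_img[shifted_mask > 0]
    aug_mask[shifted_mask > 0] = 1
    return aug_img, aug_mask
```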
Recently, some image harmonization frameworks <cit.> have been developed to solve the boundary and style discontinuities between the foreground and background for natural images. However, these frameworks have limitations when applied to brain MRI scans, where the smooth transition between the lesion and surrounding tissues is more critical than natural images. In this paper, we tackle the problem of foreground intensity and style mismatch created by data augmentation, so that plausible images can be generated. As we do not have paired real and simulated images, we create simulated images by taking real images and introducing foreground disparities to use for training the image harmonization network (ARHNet). We further present an Adaptive Region Harmonization (ARH) module to align foreground feature maps guided by the background style information. Finally, we train a segmentation model based on the mixture of real and synthetic images produced by ARHNet to demonstrate its effectiveness for improving down-stream segmentation performance. § METHODOLOGY The purpose of ARHNet is to harmonize the foreground in augmented images created by a data augmentation technique such as Copy-Paste <cit.>, to further serve downstream tasks like segmentation. We try to find a function f such that f_θ (Ĩ_a, M_a) ≈ I_a. Here, Ĩ_a is the augmented image, I_a is the corresponding real image, and M_a is the foreground mask of Ĩ_̃ã. θ refers to the parameter vector of f, , ARHNet. However, since the augmented image Ĩ_a does not have a corresponding real image I_a, we perform foreground intensity perturbation using a real brain MRI scan I with stroke lesions and its foreground mask M to create an image Ĩ that simulates Ĩ_a with a disharmonious foreground. We train ARHNet using the pairs (Ĩ, M) → I to learn the parameter vector θ. §.§ Overview of ARHNet Fig. <ref> represents our framework (ARHNet) for foreground harmonization, which comprises four components: a foreground intensity perturbation unit, a boundary extractor, a generator G, and a discriminator D. Given I and M, I is first scaled from 0 to 1. Next, the foreground intensity perturbation unit generates a foreground intensity-perturbed image Ĩ. Intensity perturbation is performed as follows: Ĩ=[ ( 1 + α) · I + λ] ⊙ M + I ⊙( 1 - M), where α∼𝒰(-0.3,0.3), λ∼𝒰(-0.3,0.3). Here α and λ can simulate large intensity variance in augmented images Ĩ_a generated by advanced data augmentation approaches like Copy-Paste <cit.>. “⊙” denotes element-wise multiplication. After the foreground intensity perturbation, the stroke area is either brighter or darker compared to the background tissue, which is a boundary mismatch. Next, Ĩ is passed through G to obtain the intensity difference map. The foreground region of the intensity difference map is then extracted using M and further added by Ĩ to get a harmonized image Î. Inspired by <cit.>, we concatenate Î with Ĩ and M to create the input image pair for D. Here Ĩ and M provide location information of the foreground, which benefits the adversarial training process and ensures Î have high fidelity to the ground truth image. To optimize G and D so that harmonized images Î have realistic texture and harmonized boundary intensities, three loss functions are deployed during model training: reconstruction loss ℒ_rec, boundary-aware total variation loss ℒ_btv, and adversarial loss ℒ_adv. The reconstruction loss implemented in our framework is defined as: ℒ_rec=I-Î_1. 
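To make the construction of training pairs and the first loss term concrete, here is a minimal sketch in PyTorch, assuming I has already been scaled to [0, 1] and M is a binary mask of the same shape (the function and argument names are illustrative):

```python
import torch
import torch.nn.functional as F

def perturb_foreground(I, M, low=-0.3, high=0.3):
    """Foreground intensity perturbation: scale and shift only the lesion voxels
    of a [0, 1]-normalised image I, leaving the background untouched."""
    alpha = torch.empty(1).uniform_(low, high)  # multiplicative factor, alpha ~ U(-0.3, 0.3)
    lam = torch.empty(1).uniform_(low, high)    # additive shift, lambda ~ U(-0.3, 0.3)
    return ((1 + alpha) * I + lam) * M + I * (1 - M)

def reconstruction_loss(I_hat, I):
    """L1 reconstruction loss between the harmonized output and the real image."""
    return F.l1_loss(I_hat, I)
```

Each call to perturb_foreground draws fresh α and λ, so a single real image I yields many disharmonious inputs Ĩ that are paired with I itself as the reconstruction target.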
Reconstruction L1 loss makes the output and ground truth have similar appearances but may cause over-smoothing of images. Therefore, the model tends to output images with low mean square error but with relatively blurred texture. To prevent texture blurring we add a discriminator so that the generator will produce distinct and realistic images. The adversarial loss ℒ_adv is added as additional supervision to the training process. In particular, we use hinge loss <cit.> instead of the cross-entropy loss to stabilize the training process and prevent the gradient from vanishing. The ℒ_adv is formulated as follows: ℒ_adv(D) = 𝔼_Î,Ĩ,M[max(0,1-D(Î,Ĩ,M))]+𝔼_I,Ĩ,M[max(0,1+D(I,Ĩ,M))], ℒ_adv(G) = - 𝔼_Î,Ĩ,M[D(Î,Ĩ,M)]. A loss with only ℒ_rec and ℒ_adv leads to an abrupt boundary between the foreground and background. To encourage the network to give low gradients on the border area of Î and make the transition from background to foreground smoother, we present a boundary-aware total variation loss ℒ_btv. If M̃ is the set of boundary voxels extracted by the boundary extractor, ℒ_btv can be defined as: ℒ_btv=∑_(i, j, k) ∈M̃Î_i+1, j, k-Î_i, j, k_1+Î_i, j+1, k-Î_i, j, k_1+Î_i, j, k+1-Î_i, j, k_1, where i, j and k represent the (i,j,k)^th voxel in M̃. By adding the boundary-aware loss, our framework makes the boundary transition smoother compared to other methods (see Fig. <ref> and <ref>), which makes the harmonized images more like those observed on real MRI. Overall, our total loss function is defined as: ℒ_total = λ_recℒ_rec+λ_btvℒ_btv+λ_advℒ_adv, where λ_rec, λ_btv and λ_adv are weighting factors for each term. §.§ Adaptive Region Harmonization (ARH) Module To better align the foreground and background feature maps obtained from Ĩ, we design a new feature normalization paradigm called Adaptive Region Harmonization (ARH) module. As depicted in Fig. <ref>, the ARH module takes the resized foreground mask M and the feature maps F as input. Here F ∈ℝ^C × H × W × D and M ∈ℝ^1 × H × W × D, where C, H, W, D indicate the number of feature channels, height, width, and depth of F, respectively. We first divide the feature maps into foreground F_f=F ⊙ M and background F_b=F ⊙ (1-M) according to M. Then we normalize F_f and F_b using Instance Normalization (IN) <cit.>, and calculate the channel-wise background mean value μ∈ℝ^C and standard deviation σ∈ℝ^C as follows: μ = 1/sum(1-M)∑_h,w,dF_c,h,w,d⊙(1-M_h,w,d), σ = √(1/sum(1-M)∑_h,w,d[ F_c,h,w,d⊙(1-M_h,w,d) - μ]^2), where sum(·) indicates summing all elements in the map. Different from the RAIN module <cit.> that directly applies μ and σ to F_f to align the foreground to the background, we present a learned scaling parameter strategy, with an attention mechanism so that the network focuses more on task-relevant areas to better learn the consistent feature representation for both foreground and background. Specifically, we calculate an attention map F_a∈ℝ^1 × H × W × D based on the entire feature maps in the ARH module, to let the module adaptively extract style information from important areas. F_a is formulated as: F_a = S(Conv([F_max,F_avg,F_Conv])), where S denotes the sigmoid function and Conv denotes the convolution operation. Additionally, we calculate two channel-wised scaling parameters γ∈ℝ^C × H × W × D and β∈ℝ^C × H × W × D as: γ = Conv(F_a),β = Conv(F_a). γ and β allow element-wise adjustments on σ and μ which represent the global intensity information extracted from the background feature maps. 
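A possible realization of the attention-based scaling parameters just described is sketched below in PyTorch for 3D inputs. The kernel sizes, the use of channel-wise max and mean maps for F_max and F_avg, a 1×1×1 projection for F_Conv, and the simplified handling of the region-wise instance normalization are assumptions made for illustration only; the fusion of γ and β with σ and μ into γ_f and β_f follows in the next step.

```python
import torch
import torch.nn as nn

class ARHAttention(nn.Module):
    """Sketch of the attention-based scaling parameters of the ARH module.
    Layer configurations are assumed; the paper's figure fixes the details."""
    def __init__(self, channels):
        super().__init__()
        self.inst_norm = nn.InstanceNorm3d(channels, affine=False)
        self.proj = nn.Conv3d(channels, 1, kernel_size=1)        # assumed 1x1x1 projection (F_Conv)
        self.att = nn.Conv3d(3, 1, kernel_size=3, padding=1)      # fuses [F_max, F_avg, F_Conv]
        self.to_gamma = nn.Conv3d(1, channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv3d(1, channels, kernel_size=3, padding=1)

    def forward(self, feat, M):
        # feat: (B, C, H, W, D) feature maps, M: (B, 1, H, W, D) resized foreground mask
        F_f, F_b = feat * M, feat * (1 - M)
        # simplified IN of the two regions (a faithful version would restrict the
        # normalization statistics to each region only)
        F_f, F_b = self.inst_norm(F_f), self.inst_norm(F_b)

        # channel-wise background mean and standard deviation
        n_bg = (1 - M).sum(dim=(2, 3, 4), keepdim=True).clamp(min=1)
        mu = (feat * (1 - M)).sum(dim=(2, 3, 4), keepdim=True) / n_bg
        var = (((feat - mu) * (1 - M)) ** 2).sum(dim=(2, 3, 4), keepdim=True) / n_bg
        sigma = var.sqrt()

        # spatial attention map F_a computed from the entire feature maps
        F_max = feat.max(dim=1, keepdim=True).values
        F_avg = feat.mean(dim=1, keepdim=True)
        F_a = torch.sigmoid(self.att(torch.cat([F_max, F_avg, self.proj(feat)], dim=1)))

        gamma, beta = self.to_gamma(F_a), self.to_beta(F_a)
        # gamma/beta are subsequently fused with sigma/mu into gamma_f/beta_f
        return F_f, F_b, mu, sigma, gamma, beta
```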
We fuse γ and β with σ and μ with two convolutional layers to obtain the foreground scaling factors γ_f and β_f, which can be calculated as: γ_f = Conv(γ + σ),β_f = Conv(β + μ). By applying γ_f and β_f to the foreground feature maps F_f, we finally attain the aligned feature maps via F̂ = F_f⊙ (1+γ_f) + β_f + F_b. § EXPERIMENTS §.§ Experiment Settings §.§.§ Dataset We use the ATLAS v2.0 dataset <cit.> to evaluate the performance of ARHNet. ATLAS (short for ATLAS v2.0) is a large stroke dataset, which contains 655 T1-weighted brain MRIs with publicly available voxel-wise annotations. All images were registered to the MNI-152 template with a voxel spacing of 1mm × 1mm × 1mm. According to <cit.>, about half of the images are characterized as small lesion images (foreground voxels ≤ 5,000). In this work we focus on only these images, corresponding to 320 MRIs. We split the dataset into five folds, stratified by lesion size to ensure both training and testing sets have the same data distribution. We randomly select one fold (20%) as the test set and the remaining four folds are the training set. §.§.§ Implementation Details ARHNet is implemented within PyTorch <cit.> and uses TorchIO <cit.> for loading data and creating intensity perturbations. To optimize the generator and discriminator, we use two AdamW optimizers <cit.>. The initial learning rates for G and D are set to 1e-4, and 5e-5, respectively. The batch size is set to 16 and total training epochs are 200 for each model. For input images, we randomly extract a 64 × 64 × 64 patch from the MRI scans corresponding to the region that contains the stroke annotation(s). The loss weight factors λ_rec, λ_btv, and λ_adv are set to 100, 10, 1, respectively. For the down-stream segmentation task that is used to evaluate our framework, we implement a segmentation model based on Attention UNet <cit.> in the MONAI framework <cit.>. The initial learning rate is set to 1e-3, and the batch size is 4. For a fair comparison, we train each setting for 30,000 iterations. §.§.§ Evaluation Metrics We evaluate the performance of ARHNet on the image harmonization task and also a down-stream stroke segmentation task. For the image harmonization task, we use four metrics to measure the fidelity of the output, , mean absolute error (MAE), mean absolute error of the foreground region (fMAE), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio of the foreground region (fPSNR). For the down-stream stroke segmentation task, we use three metrics to evaluate the segmentation performance: the Dice coefficient, 95% Hausdorff Distance (95HD), and average surface distance (ASD). §.§ Experimental Results §.§.§ Comparison of Image Harmonization Results We quantitatively compare the foreground image harmonization results of ARHNet on the ATLAS test set with other non-learning- and learning-based methods. Results are shown in Table <ref> where “Composite” means we do not use any image harmonization method but directly calculating the metrics based on the images with foreground disparities which are inputs for all other methods. It gives the worst results as expected. If we adapt the foreground intensity to be consistent with the background based on Histogram Matching (“HM” in Table <ref>), we can achieve better results, but still worse than all of the learning-based methods evaluated. Four learning-based methods are implemented as comparisons. Here “UNet” refers to the UNet model trained with only the reconstruction loss ℒ_rec. 
“Hinge-GAN” means the UNet model trained with only the adversarial loss ℒ_adv. “UNet-GAN” denotes the UNet model is trained under the supervision of ℒ_rec and ℒ_adv. “RainNet” is a generator that consists of the RAIN module <cit.>, also only ℒ_rec and ℒ_adv are used for backpropagation. From Table <ref>, we can find that our method outperforms other methods on all metrics, proving the efficacy and rationality of ARHNet. Furthermore, compared with RainNet, our method achieve a big improvement of 1.41 dB in PSNR and 1.44 dB in fPSNR. We present qualitative results in Fig. <ref> and <ref>. In Fig. <ref> we can observe that ARHNet can achieve realistic harmonization images no matter if the foreground is brighter or darker than the background (top two rows: darker, bottom two rows: brighter). Also, the boundaries in our results are smoother than other methods. Additionally, we show the image harmonization results on composite brain MRI scans in Fig. <ref>. By zooming in on the boundary area, it is easy to observe that composite images harmonized by ARHNet are more realistic than RainNet, which demonstrates the superiority of our method again. §.§.§ Comparison of Down-Stream Segmentation Performance We report quantitative measures of the down-stream lesion segmentation performance for different training sets in Table <ref>. For each setting, we keep the batch size the same and train for 30,000 iterations for a fair comparison. “-” denotes no additional data is used for model training. “200 real” means 200 images with big lesions (foreground voxels > 5,000) from the original ATLAS v2.0 dataset are utilized as additional training samples. “200 by <cit.>” refers to using CarveMix to generate additional 200 images for model training. “200 by Ours” means we first use Copy-Paste <cit.> strategy to create 200 composite images, then we use ARHNet to adjust the foreground intensity to harmonize the images. As shown in Table <ref>, our method achieves the best segmentation result and brings a large performance gain of 12.57% in Dice compared to not using any additional data. §.§.§ Ablation Study We also investigate the performance gain achieved by our ARH module, results are shown in Table <ref>. We can find that if we keep all other settings unchanged and only replace the ARH module with InstanceNorm or BatchNorm, higher PSNR is reached compared to RainNet (see in Table <ref>). This demonstrates the effectiveness of some of the additional elements we presented in this work, such as boundary-aware total variation loss and the foreground intensity perturbation unit. However, if we replace the ARH module with the RAIN module, the result is the worst among all normalization methods. This is likely because the RAIN module only considers the entire style of the background, and therefore cannot align the foreground feature maps properly. § CONCLUSION In this paper, we propose an Adaptive Region Harmonization Network (ARHNet) that can effectively harmonize a target area and make the style of foreground and background consistent in this region. This framework can be utilized to harmonize synthetic samples generated by other data augmentation methods, and make these images more realistic and natural. Harmonized augmented samples can be further utilized in down-stream segmentation tasks to improve the segmentation model's generalization ability. 
Extensive experimental results demonstrate that our proposed method can generate style-consistent images and is effective for segmenting small stroke lesions on T1-weighted MRI.
http://arxiv.org/abs/2307.01975v1
20230705012435
Strong convergence rates for a full discretization of stochastic wave equation with nonlinear damping
[ "Meng Cai", "David Cohen", "Xiaojie Wang" ]
math.NA
[ "math.NA", "cs.NA", "math.PR" ]
Strong convergence rates for a full discretization of stochastic wave equation with nonlinear damping[1] Meng Cai^a^b, David Cohen^c, Xiaojie Wang^a[2] ^a School of Mathematics and Statistics, HNP-LAMA, Central South University, Changsha, China ^b LSEC, ICMSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China ^c Department of Mathematical Sciences, Chalmers University of Technology & University of Gothenburg, Göteborg, Sweden August 1, 2023 ================================================================================================================ [1] This work was supported by NSF of China (Nos. 11971488, 12071488) and NSF of Hunan Province (No. 2020JJ2040). This work was partially supported by STINT and NSFC Joint China-Sweden Mobility programme (project nr. CH2016-6729). The work of D. Cohen was partially supported by the Swedish Research Council (VR) (projects nr. 2018-04443). [2] [email protected], [email protected], [email protected] The paper establishes the strong convergence rates of a spatio-temporal full discretization of the stochastic wave equation with nonlinear damping in dimension one and two. We discretize the SPDE by applying a spectral Galerkin method in space and a modified implicit exponential Euler scheme in time. The presence of the super-linearly growing damping in the underlying model brings challenges to the error analysis. To address these difficulties, we first establish upper mean-square error bounds, and then obtain mean-square convergence rates of the considered numerical solution. This is done without requiring the moment bounds of the full approximations. The main result shows that, in dimension one, the scheme admits a convergence rate of order 1/2 in space and order 1 in time. In dimension two, the error analysis is more subtle and can be done at the expense of an order reduction due to an infinitesimal factor. Numerical experiments are performed and confirm our theoretical findings. AMS subject classification: 60H35, 60H15, 65C30 Keywords: stochastic wave equation, nonlinear damping, strong approximation, spectral Galerkin method, modified exponential Euler scheme § INTRODUCTION Hyperbolic stochastic partial differential equations (SPDEs) play an essential role in a range of real application areas, see below for examples. In the last decade, these SPDEs have attracted considerable attention from both a theoretical and a numerical point of view. One of the fundamental hyperbolic SPDEs is the stochastic wave equation (SWE). Stochastic wave equations are used for instance to model the motion of a vibrating string <cit.> or the motion of a strand of DNA in a fluid <cit.>. A damping term is often included in the wave equation to model energy dissipation and amplitude reduction, see for instance the references <cit.> on stochastic wave equations with damping. In the present work, we focus on the strong approximation of the solution to the following SWE with nonlinear damping in a domain 𝒟: du(t) = v(t) dt, dv(t) = ( Δ u(t) + F ( v(t) )) dt + dW^Q(t), t ∈ (0, T], u(0) = u_0, v(0) = v_0, with homogeneous Dirichlet boundary conditions. Here, Δ denotes the Laplace operator and 𝒟⊂ℝ^d, with d ∈{ 1, 2}, is a bounded domain with smooth boundary ∂𝒟.
Precise assumptions on F, the noise W^Q on a given probability space (Ω,ℱ,ℙ,{ℱ_t}_t≥ 0) as well as the initial value (u_0,v_0) are provided in the next section. As opposed to the large number of works on the numerical analysis of SPDEs of parabolic type, see for instance the books <cit.> and references therein for the globally Lipschitz setting and <cit.> for the non-globally Lipschitz setting, the literature on the numerical analysis of SPDEs of hyperbolic type is relatively scarce. The numerical analysis of SWEs without damping has been investigated by several authors, see for example <cit.>. For instance, in the work <cit.>, the strong convergence rate of a fully discrete scheme for a stochastic wave equation driven by (a possibly cylindrical) Q-Wiener process is obtained together with an almost trace formula; Walsh in <cit.> used an adaptation of the leapfrog discretization to construct a fully discrete finite difference scheme which achieves a convergence order of 1/2 in both time and space for an SPDE driven by space-time white noise; the authors of <cit.> presented strong approximations of higher order by using linear functionals of the noise process in the time-stepping schemes. For stochastic strongly damped wave equations, Qi and Wang in <cit.> investigated the regularity and strong approximations of a full discretization performed by a standard finite element method in space and a linear implicit Euler–Maruyama scheme in time. These authors also analyzed an accelerated exponential scheme in <cit.>. For SWEs with weak damping, the authors of <cit.> proved existence and uniqueness of an invariant measure. With regard to the weak convergence analysis, we refer to the work <cit.> for spatial spectral Galerkin approximations of a damped-driven stochastic wave equation. We also refer to the recent work <cit.>, where the authors analyzed the approximation of the invariant distribution for stochastic damped wave equations. Most of the aforementioned papers are concerned with SPDEs having globally Lipschitz coefficients. For SWEs with a cubic nonlinearity, we are aware of the following references. The work <cit.> studied a nonstandard partial-implicit midpoint-type difference method to control the energy functional of the SPDE. The recent work <cit.> analyzed an energy-preserving exponentially integrable numerical method. Finally, we recall that the existence and uniqueness of an invariant measure for the underlying stochastic wave equation have been proven in <cit.>. An interesting question could be to investigate numerical approximations of such an invariant measure, which relies on a long-time error analysis. This will be the subject of a future work. In the present paper, we make a further contribution to the numerical analysis of SWEs with non-globally Lipschitz coefficients. Indeed, we prove strong convergence rates of a full discretization of the SPDE (<ref>) under certain assumptions allowing for super-linearly growing coefficients. To do this, we first spatially discretize the SPDE (see the abstract equation (<ref>) below) by a spectral Galerkin method (see equation (<ref>)). We then propose a modified implicit exponential Euler scheme (see equation (<ref>)) applied to the spectral Galerkin approximation. The main convergence results (see Theorems <ref> and <ref> below) show that, under Assumptions <ref> and <ref> and further technical conditions, the proposed fully discrete scheme strongly converges with order 1/2 in space and 1 in time.
More precisely, let X(t) and X_N,m denote the solution of the SWE with nonlinear damping (<ref>) and the full approximation given by (<ref>), respectively. For N, M ∈ℕ and d=1, there exists a constant C > 0, independent of the discretization parameters, such that sup_ 0 ≤ m ≤ M X(t_m) - X_N,m_L^2 (Ω;^̋1) ≤ C λ_N^-1/2 + C τ, where τ = T/M is the time stepsize, λ_N is the N-th eigenvalue of the Laplacian, and the product space ^̋1 is defined in the next section. Furthermore, we derive in Theorem <ref> the strong convergence in the $̋-norm of the spectral Galerkin approximation X(t) - X^N(t) _L^2( Ω;)̋≤ C λ_N^-1, where the convergence rate is twice that in the ^̋1-norm. For the strong convergence in the $̋-norm of the time-stepping scheme, we cannot expect more as the convergence rate in the ^̋1-norm is already of order one. We mention that the proof relies on the exponential integrability properties of v(t) and v^N(t), as well as certain commutativity assumptions on the nonlinear term F. In addition, for d=2, one gets the error estimates sup_ 0 ≤ m ≤ M X(t_m) - X_N,m_L^2 (Ω;^̋1) ≤ C λ_N^-1/2 + ϵ + C τ^1-ϵ. Here, ϵ > 0 is an arbitrarily small parameter. The error analysis for the space dimension d = 3 or higher is non-trivial. In particular, it is limited by the regularities of the mild solution X(t) and the spatial semi-discretization X^N(t). We now illustrate the main steps behind the proofs of our convergence results. For the spatial convergence analysis, we start by introducing an auxiliary process X̅^N(t) given by (<ref>) and then separate the strong error in space into two terms, X(t) - X^N(t) _L^2(Ω;^̋1) ≤ X(t) - X̅^N(t) _L^2(Ω;^̋1) + X̅^N(t) - X^N(t) _L^2(Ω;^̋1) =: Err_1 + Err_2. The bound for the term Err_1 can be obtained by a standard approach. The estimate of the second term Err_2 is not easy and heavily relies on the global monotonicity property of the nonlinearity, Gronwall's lemma, suitable uniform moment bounds for the auxiliary process X̅^N and the numerical approximations, as well as the bounds of Err_1. For the temporal convergence analysis, motivated by the approach from the work <cit.> for finite-dimensional stochastic differential equations, we first show an upper bound of the temporal error in Proposition <ref>. This result then enables us to prove mean-square convergence rates for the considered SPDE without requiring a priori high-order moment estimates of the fully discrete solution. It is worthwhile to mention that the error estimates for dimension two are more involved than for dimension one since the Sobolev embedding inequality Ḣ^1 ⊂ V:=C(𝒟; ℝ) fails to hold in dimension two. In order to overcome this difficulty, we combine Hölder's inequality and the Sobolev embedding theorem to bound the nonlinearity F(v) in a weak sense (see Lemma <ref> below). This causes an order reduction in the rate of convergence due to an infinitesimal factor in the convergence analysis for d=2. The outline of the paper is as follows. We start by collecting some notation and useful results and then show the well-posedness of the considered problem in Section <ref>. Section <ref> and Section <ref> are devoted to the mean-square convergence analysis in space and time, respectively. Finally, numerical experiments are presented in Section <ref> and illustrate the obtained convergence rates. § THE STOCHASTIC WAVE EQUATION WITH NONLINEAR DAMPING In this section, we set notation and show the well-posedness of the stochastic wave equation with nonlinear damping as well as the exponential integrability of its solution.
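Before fixing the abstract setting, a purely illustrative sketch may help orient the reader: the snippet below applies a plain explicit exponential Euler step to a spectral Galerkin truncation of (<ref>) in dimension one, using the sine/cosine semigroup, the example nonlinearity f(v) = v - v^3 and a covariance Q = Λ^-δ with δ = 2 > 1 + d/2 that appear later in this section. It is not the modified implicit exponential Euler scheme whose convergence is analyzed in this paper (that scheme is given by equation (<ref>)); the domain (0, 1), the zero initial data and all numerical values are assumptions made only for illustration.

```python
import numpy as np

# Deliberately simplified illustration: explicit exponential Euler applied to a
# spectral Galerkin truncation of the damped wave equation on (0, 1).
N, M, T, delta = 64, 1000, 1.0, 2.0           # arbitrary illustrative choices
tau = T / M
j = np.arange(1, N + 1)
lam = (j * np.pi) ** 2                         # Dirichlet Laplacian eigenvalues on (0, 1)
sq = np.sqrt(lam)
x = np.linspace(0.0, 1.0, 201)[1:-1]
dx = x[1] - x[0]
phi = np.sqrt(2.0) * np.sin(np.outer(x, j))    # eigenfunctions evaluated on a grid

u_hat = np.zeros(N)                            # spectral coefficients of u (zero initial data)
v_hat = np.zeros(N)                            # spectral coefficients of v
rng = np.random.default_rng(0)

for _ in range(M):
    v_phys = phi @ v_hat                                   # v in physical space
    f_hat = (phi.T @ (v_phys - v_phys ** 3)) * dx           # projection of f(v) onto the basis
    dW_hat = np.sqrt(tau) * lam ** (-delta / 2) * rng.standard_normal(N)
    # add drift and noise, then apply the wave semigroup E(tau) (cosine/sine operators)
    u_mid, v_mid = u_hat, v_hat + tau * f_hat + dW_hat
    u_hat = np.cos(sq * tau) * u_mid + np.sin(sq * tau) / sq * v_mid
    v_hat = -sq * np.sin(sq * tau) * u_mid + np.cos(sq * tau) * v_mid
```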
Consider two separable Hilbert spaces U and H with norms denoted by ·_U and ·_H respectively. We denote by ℒ(U;H) the space of bounded linear operators from U to H with the usual operator norm ·_ℒ(U;H). As an important subspace of ℒ(U;H), we let ℒ_2(U;H) be the set of Hilbert–Schmidt operators with the norm T _ℒ_2(U;H) :=( ∑_k=1^∞ Te_k _H^2 )^1/2, where {e_k}_k=1^∞ is an arbitrary orthonormal basis of U. If H=U, we write ℒ(U):=ℒ(U;U) and ℒ_2 (U):=ℒ_2(U;U) for short. Let Q ∈ℒ(U) be a self-adjoint, positive semidefinite operator. As usual, we introduce the separable Hilbert space U_0 :=Q^1/2(U) with the inner product ⟨ u_0, v_0 ⟩_U_0:= ⟨ Q^-1/2 u_0,Q^-1/2 v_0 ⟩_U. Furthermore, the set ℒ_2^0 denotes the space of Hilbert–Schmidt operators from Q^1/2(U) to U with norm T _ℒ_2^0 = TQ^1/2_ℒ_2 (U). Finally, let (Ω,ℱ,ℙ,{ℱ_t}_t≥ 0) be a filtered probability space and L^p(Ω;U) be the space of U-valued integrable random variables with norm u _L^p(Ω;U) := ( [ u _U^p] )^1/p < ∞ for any p ≥ 2. In the sequel, we take U:=L^2( 𝒟;) with norm · and inner product ⟨·,·⟩. We also set V:=C(𝒟; ) to be the Banach space of all continuous functions endowed with the supremum norm. We let -Λ:=Δ denote the Laplacian with Dom(Λ) = H^2(𝒟) ∩ H^1_0(𝒟). Here, H^m(𝒟) denotes the standard Sobolev spaces of integer order m ≥ 1. For α∈ℝ, we then define the separable Hilbert spaces Ḣ^α=Dom(Λ^α/2) equipped with inner product ⟨ u , v ⟩_α := ⟨Λ^α/2 u , Λ^α/2 v ⟩ =∑_j=1^∞λ_j^α⟨ u , φ_j ⟩⟨ v , φ_j ⟩, u,v ∈Ḣ^α, where {(λ_j,φ_j)}_j=1^∞ are the eigenpair of Λ with {φ_j}_j=1^∞ being orthonormal eigenfunctions. The corresponding norm in the space Ḣ^α is defined by u _α = ⟨ u , u ⟩_α ^1/2. Furthermore, we introduce the product space ^̋α: =Ḣ^α×Ḣ^α-1 with the inner product ⟨ Y , Z ⟩_^̋α := ⟨ y_1 , z_1 ⟩_α + ⟨ y_2 , z_2 ⟩_α - 1 for Y=[y_1,y_2]^T and Z=[z_1,z_2]^T. The induced norm is denoted by X _^̋α := ( x_1 _α^2 + x_2 _α-1^2 )^1/2 for X=[x_1,x_2]^T. For the special case α = 0, we define :̋=^̋0=Ḣ^0×Ḣ^-1 and Ḣ^0 = U =L^2( 𝒟 ; ). To follow the semigroup framework of the book <cit.>, we formally transform the SPDE (<ref>) into the following abstract Cauchy problem: X(t) = A X(t) t + F(X(t)) t + B W^Q(t), X(0) = X_0, where X =X(t)=[ [ u; v ]] , A= [ [ 0 I; -Λ 0 ]] , F(X)= [ [ 0; F(v) ]] , B = [ [ 0; I ]], X_0 = [ [ u_0; v_0 ]]. Here, X_0 is assumed to be an ℱ_0-measurable random variable. The operator A with Dom(A)= ^̋1 = Ḣ^1 ×Ḣ^0 is the generator of a strongly continuous semigroup (E(t))_t ≥ 0 on ^̋1, written as E(t)= e^t A = [ [ C(t) Λ^-1/2 S(t); -Λ^1/2 S(t) C(t) ]], where C(t) := cos( t Λ^1/2) and S(t) := sin( t Λ^1/2) are the so-called cosine and sine operators. These operators are bounded in the sense that C(t) φ≤φ and S(t) φ≤φ hold for any φ∈ U. The trigonometric identity S(t) φ^2 + C(t) φ^2 = φ^2 also holds for any φ∈ U. Additionally, due to the commutative property between C(t), S(t) and Λ^α for α∈, one can check the stability property of the semigroup, that is E(t) X _^̋γ≤ X _^̋γ for t ≥ 0, γ∈ and X ∈^̋γ. Finally, we recall the following lemma which will be used frequently in our convergence analysis. For t ≥ s ≥ 0 and γ∈ [0,1], we have ( S(t) - S(s) ) Λ^-γ/2_ℒ(U)≤C (t-s)^γ, ( C(t) - C(s) ) Λ^-γ/2_ℒ(U)≤C (t-s)^γ for some constant C>0. Moreover, for X∈^̋γ it holds that ( E(t) - E(s) ) X _≤C (t-s)^γX_^̋γ for some constant C>0. We refer to <cit.>, for instance, for a proof of this lemma. To show the well-posedness of the SPDE (<ref>), we make the following assumptions on the nonlinear term and on the noise process, see <cit.>. 
The nonlinear term F is assumed to be the Nemytskij operator associated to a real-valued function f: → given by F (v)(x) =f ( v(x)) x ∈𝒟, where f is assumed to be twice continuous differentiable and satisfies v f(v) ≤ C_0 ( 1 + |v|^2 ), for some constant C_0 > 0, f'(v) ≤ C_1, for some constant C_1 ∈, | f'(v) | ≤ C_2 ( 1 + |v|^γ - 1), for some constant C_2 >0, γ≥ 2, | f”(v) | ≤ C_3 ( 1 + |v|^γ - 2), for some constant C_3 >0,γ≥ 2. A typical example of a nonlinearity satisfying the above assumptions is f(v)=v-v^3. Let { W^Q(t)}_t ∈ [0,T] be a standard Q-Wiener process such that the covariance operator Q= Q^1/2∘ Q^1/2 satisfies Λ^1/2 Q^1/2_ℒ_2(U) < ∞. An example of a covariance operator satisfying the above condition is Q = Λ^- δ, δ > 1 + d/2. The well-posedness of the SPDE (<ref>) and the spatial regularity of the mild solution X(t) are given in the following theorem. Let T>0. Under Assumptions <ref> and <ref>, the stochastic wave equation (<ref>) admits a unique mild solution given by, for each t ∈ [ 0, T], X(t) = E(t) X_0 + ∫_0^t E(t-s) F(X(s)) s + ∫_0^t E(t-s) B W^Q(s), a.s. Additionally, if we assume that the initial value satisfies X_0 _L^p(Ω; ^̋2) < ∞ for some p ≥ 2, then we get the bound sup_t ∈ [0,T] X(t) _L^p(Ω;^̋2) < ∞. The existence and uniqueness of the mild solution (<ref>) can be proven as in the reference <cit.>. We now prove the spatial regularity (<ref>). An application of the Itô formula for X(t) _^̋2^p, Young's inequality, properties of stochastic integrals, the assumptions on the inital value and on the dissipativity of the nonlinear term F, and the assumption (<ref>) on the Q-Wiener process yield that [ X(t) _^̋2^p ] = [ X_0 _^̋2^p ] + p ∫_0^t [ X(s) _^̋2^p-2⟨ X(s) , X(s) ⟩_^̋2] + 1/2∑_j=1^∞∫_0^t [ p X(s) _^̋2^p-2⟨ B Q^1/2φ_j, B Q^1/2φ_j ⟩_^̋2 + p (p-2) X(s) _^̋2^p-4⟨ X(s), B Q^1/2φ_j ⟩_^̋2⟨ X(s), B Q^1/2φ_j ⟩_^̋2] s ≤C + p ∫_0^t [ X(s) _^̋2^p-2⟨ X(s) , A X(s) ⟩_^̋2] s + p ∫_0^t [ X(s) _^̋2^p-2⟨ X(s) , F( X(s)) ⟩_^̋2] s + p ∫_0^t [ X(s) _^̋2^p-2⟨ X(s) , B W^Q(s)⟩_^̋2] +C(p) ∫_0^t [ X(s) _^̋2^p-2Λ^1/2 Q^1/2_ℒ_2^2 ] s ≤C(p,T) + p ∫_0^t [ X(s) _^̋2^p-2⟨∇ v(s) , F'( v(s) )∇ v(s) ⟩] s +C(p,T) ∫_0^t [ X(s) _^̋2^p] s ≤C(p,T) +C(p,T) ∫_0^t [ X(s) _^̋2^p] s. Here, we used the facts that ⟨ X(s), A X(s) ⟩_^̋2 =0, ⟨ v(s), F( v(s) )⟩_Ḣ^1 =⟨∇ v(s), F'( v(s) ) ∇ v(s) ⟩ and X(s) _^̋2^p-2≤ X(s) _^̋2^p +1 in the second inequality. At last, an application of Gronwall's lemma finishes the proof. Next, we show an exponential integrability property of the mild solution X(t). Under the setting of the above theorem and assuming that [ exp( X_0 _^̋2^p ) ] < ∞ for some p ≥ 2, then there exists a constant α≥ p + p(p-1)2Λ^1/2 Q^1/2_ℒ_2^2 such that [ exp( X(t) _^̋2^p · e^ - α t ) ] ≤[ exp( X_0 _^̋2^p ) ] · exp( t p(p-1)2 Λ^1/2 Q^1/2 t _ℒ_2^2 ) for t∈[0,T]. By Itô's formula, we have, for all t∈[0,T], that exp( X(t) _^̋2^p · e^ - α t ) = exp ( X_0 _^̋2^p ) + ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) · p · X(s) _^̋2^p-2⟨ X(s) , A X(s) ⟩_^̋2 s + ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) · p · X(s) _^̋2^p-2⟨ X(s) , F(X(s)) ⟩_^̋2 s + ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) · p · X(s) _^̋2^p-2⟨ X(s) , B W(s) ⟩_^̋2 + p/2∑_j=1^∞∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) X(s) _^̋2^p-2⟨ B Q^1/2φ_j, B Q^1/2φ_j ⟩_^̋2 s + p(p-2)/2∑_j=1^∞∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) X(s) _^̋2^p-4| ⟨ X(s), B Q^1/2φ_j ⟩_^̋2|^2 s - ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) ·α· X(s) _^̋2^p s. 
Taking expectation on both sides leads to [ exp( X(t) _^̋2^p · e^ - α t ) ] ≤[ exp ( X_0 _^̋2^p ) ] + [ ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) · p · X(s) _^̋2^p s] + p/2[ ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) X(s) _^̋2^p-2Λ^1/2 Q^1/2_ℒ_2^2 s] + p(p-2)/2[∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) X(s) _^̋2^p-2Λ^1/2 Q^1/2_ℒ_2^2 s] - [ ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) ·α· X(s) _^̋2^p s] ≤[ exp ( X_0 _^̋2^p ) ] + [ ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) · p · X(s) _^̋2^p s] + p(p-1)/2[ ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) X(s) _^̋2^pΛ^1/2 Q^1/2_ℒ_2^2 s] + p(p-1)/2[ ∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) Λ^1/2 Q^1/2_ℒ_2^2 s] - [∫_0^t exp( X(s) _^̋2^p · e^ - α s - α s ) ·α· X(s) _^̋2^p s] ≤[ exp ( X_0 _^̋2^p ) ] + p(p-1)/2[ ∫_0^t exp( X(s) _^̋2^p · e^ - α s ) Λ^1/2 Q^1/2_ℒ_2^2 s], where the fact that X(s) _^̋2^p-2≤ X(s) _^̋2^p + 1 was used in the second inequality and the condition α≥ p + p(p-1)2Λ^1/2 Q^1/2_ℒ_2^2 together with exp(- α s) <1 were used in the last inequality. Finally, invoking Gronwall's lemma finishes the proof. Under the same assumptions as in Lemma <ref>, for any c > 0 and p ≥ 2 it holds that [ exp( ∫_0^T c v(s)_1^p s ) ] < ∞. Using Jensen's inequality and the elementary inequality x+y ≥ 2 √(xy), for x,y>0, we obtain [ exp( ∫_0^T c v(s)_1^p s ) ] = [ exp( 1/T∫_0^T c T v(s)_1^p s ) ] ≤[ 1/T∫_0^T exp( c T v(s)_1^p ) s ] ≤sup_s ∈ [0,T][ exp( c T v(s)_1^p ) ] ≤sup_s ∈ [0,T][ exp( v(s)_1^2p/e^α s) · exp( c^2 T^2 e^α s/4) ] < ∞, where α comes from Lemma <ref>. Applying this later lemma finishes the proof. We conclude this section by introducing some basic inequalities especially useful when considering the SPDE (<ref>) in dimension d=2. Recall first the following Sobolev embedding inequality, (e.g., <cit.> and <cit.>), for sufficiently small ϵ > 0 and θ∈ (0,1), Ḣ^ 2 ϵ⊂ L^2/ 1 - 2 ϵ, Ḣ^ 1 - θ⊂ L^2/θ. Then, for x∈ L^2/1 + 2 ϵ, one has Λ^-ϵ x = sup_ y = 1 | ⟨ x , Λ^-ϵ y ⟩| ≤sup_ y = 1 x _L^2/1 + 2 ϵΛ^-ϵ y _L^2/1 - 2 ϵ ≤Csup_ y = 1 x _L^2/1 + 2 ϵΛ^-ϵ y _2ϵ≤C x _L^2/1 + 2 ϵ. Concerning the nonlinear term F, we have the following useful lemmas. Under Assumption <ref>, we have, for d =1, that F (v) _1≤C( 1 + v_1^γ), v∈Ḣ^1 and for d = 2, θ∈ (0,1), that F (v) _θ≤C( 1 + v_1^γ), v∈Ḣ^1. In dimension d=1, using properties of the space Ḣ^θ, the assumption on F and the Sobolev embedding Ḣ^1⊂ V, one has F (v) _1= ∇ ( F(v) ) = F'(v) ∇ v≤C( 1 + v _V^γ-1) v _1 ≤C( 1 + v_1^γ). In dimension d=2, the Sobolev embedding inequality (<ref>) and Hölder's inequality, yield x _θ = Λ^θ/2 x =sup_ y = 1 | ⟨Λ^θ/2 x , y ⟩| = sup_ y = 1 |⟨Λ^1/2 x , Λ^θ-1/2 y ⟩| ≤Csup_ y = 1 Λ^1/2 x _L^2/2 - θΛ^θ-1/2 y _L^2/θ≤CΛ^1/2 x _L^2/2 - θ. As a consequence, one obtains F (v) _θ ≤C F'(v) ∇ v _L^2/2 - θ≤C F'(v) _L^2/1 - θ∇ v ≤C( 1 + v _L^2(γ-1)/1-θ^γ-1) v _1 ≤C( 1 + v_1^γ). This concludes the proof of the lemma. Under Assumption <ref>, we have for d=1, that F'(ϕ) ψ_-1≤C ( 1 + ϕ_1^γ - 1 ) ψ_-1 for ϕ∈Ḣ^1 and ψ∈ H. Let us start the proof with a preliminary step. Using the assumption on the nonlinearity and applying the Sobolev embedding inequality Ḣ^1 ⊂ V yield ∇( F' (ϕ) φ)^2 = ∫_𝒟| ∂∂ x( f'(ϕ(x)) φ(x) ) |^2 x ≤ 2 ∫_𝒟( | f”(ϕ(x)) ϕ'(x) φ(x) |^2 + | f'(ϕ(x)) φ'(x) |^2 ) x ≤C ( 1 + ϕ_V^γ-2)^2 φ_V^2 ϕ_1^2 +C ( 1 + ϕ_V^γ-1)^2 φ_1^2 ≤C ( 1 + ϕ_1^γ-1)^2 φ_1^2 for ϕ∈Ḣ^1 and φ∈Ḣ^1. 
To get equation (<ref>), we note that Λ^-1/2 F'( ϕ ) ψ = sup_ξ≤ 1| ⟨Λ^-1/2 F'( ϕ ) ψ, ξ⟩| = sup_ξ≤ 1| ⟨Λ^-1/2ψ, Λ^1/2 F'( ϕ ) Λ^-1/2ξ⟩| ≤ψ_-1·sup_ξ≤ 1 F' ( ϕ ) Λ^-1/2ξ_1 ≤C ( 1 + ϕ_1^γ-1) ψ_-1, where the Cauchy–Schwarz inequality and the self-adjointness of F' ( ϕ ) and Λ^-1/2 were used. § STRONG CONVERGENCE RATES OF THE SPATIAL DISCRETIZATION In this section, we analyze the strong approximation of the spatial discretization of the SPDE (<ref>) by a spectral Galerkin method. It should be noted that essential difficulties exist for analyzing a finite element method for the considered SPDE. Indeed, the dissipative property of the nonlinear mapping P_h F, where P_h is the orthogonal projection, does not hold in the ℍ^2-norm. This problem does not arise with the spectral Galerkin approximation that we now consider. To begin with, we consider a positive integer N and define the finite dimensional subspace U_N of U=L^2 ( 𝒟 ; ) by U_N := span{ e_1, e_2, ⋯, e_N }, where we recall that (e_j)_j=1^N are the first eigenfunctions of Λ. We next define the projection operator P_N: Ḣ^α→ U_N by P_N ψ = ∑_i=1^N ⟨ψ, e_i ⟩ e_i, ∀ψ∈Ḣ^α, α≥ -1. Moreover, the projection operator on ^̋β, still denoted by P_N for convenience, is given by P_N X = [P_N x_1, P_N x_2]^T for X=[x_1,x_2]^T ∈^̋β,β≥ 0. Then, one can immediately verify that P_N ψ≤ψ, (I - P_N) ψ≤λ_N ^-κ/2ψ_κ, for ψ∈Ḣ^κ, κ≥ 0. The discrete Laplacian Λ_N: U_N → U_N is defined by Λ_N ψ = Λ P_N ψ = P_N Λψ = ∑_i=1^N λ_i ⟨ψ, e_i ⟩ e_i, ∀ψ∈ U_N. Therefore, applying the spectral Galerkin method to the SPDE (<ref>) yields the finite dimensional problem X^N(t) = A_N X^N(t) t + F_N (X^N(t)) t + B_N W^Q(t) with the initial value X^N(0)=[ [ P_N u_0; P_N v_0 ]]. Here, we denote F_N := P_N F and X^N =X^N(t) = [ [ u^N; v^N ]] , A_N = [ [ 0 I; -Λ_N 0 ]] , F_N(X^N) = [ [ 0; F_N (v^N) ]] , B_N = [ [ 0; P_N ]]. Analogously to the continuous setting, the operator A_N generates a strongly continuous semigroup E_N(t) for t ≥ 0 on U_N × U_N, given by E_N(t)= e^t A_N = [ [ C_N(t) Λ_N^-1/2 S_N(t); -Λ_N^1/2 S_N(t) C_N(t) ]]. Obviously, the discrete cosine and sine operators C_N(t):= cos( t Λ_N^1/2) and S_N(t):= sin( t Λ_N^1/2) satisfy C_N(t) P_N ψ = C(t) P_N ψ = P_N C(t) ψ, S_N(t) P_N ψ = S(t) P_N ψ = P_N S(t) ψ, ψ∈Ḣ^α,α≥ -1. Similarly to the continuous setting, the mild solution of the semi-discrete problem (<ref>) reads X^N(t) = E_N(t) X^N(0) + ∫_0^t E_N(t-s) F_N(X^N(s)) s + ∫_0^t E_N(t-s) B_N W^Q(s). The following lemma, concerning the spatio-temporal regularity of X^N(t), is crucial in the presented strong convergence analysis. Under Assumptions <ref> and <ref> and assuming that X_0 _L^p(Ω; ^̋2) < ∞ for some p ≥ 2, it holds that sup_t ∈ [0,T] X^N(t) _L^p(Ω;^̋2) < ∞. In addition, for 0 ≤ s ≤ t ≤ T and η∈ [1,2), one has the temporal regularity X^N(t) - X^N(s) _L^p(Ω;^̋η)≤C( t - s )^min{2-η,1/2}. The proof for the spatial regularity of the process X^N(t) can be shown similarly to the proof of Theorem <ref>. This is thus omitted. To prove the temporal Hölder regularity, we use the mild form of the semi-discrete solution (<ref>) and obtain X^N(t) - X^N(s) = ( E_N(t-s) - I ) X^N(s) + ∫_s^t E_N(t-r) F_N (X^N(r)) r + ∫_s^t E_N(t-r) B_N W^Q(r). Thus, by the triangle inequality, we get X^N(t) - X^N(s) _L^p(Ω; ^̋η) ≤ ( E_N(t-s) - I ) X^N(s) _L^p(Ω; ^̋η) + ∫_s^t E_N(t-r) F_N(X^N(r)) r _L^p(Ω; ^̋η) + ∫_s^t E_N(t-r) B_N W^Q(r) _L^p(Ω; ^̋η). We now treat each of the above three terms separately. 
The first term can be directly estimated by (<ref>) and (<ref>) in order to get the estimates ( E_N(t-s) - I ) X^N(s) _L^p(Ω; ^̋η)≤C( t - s )^ 2- η X^N(s) _L^p(Ω; ^̋2)≤C( t - s )^ 2- η. For the second term, it follows from the stability of the semigroup, Assumption <ref>, the bound (<ref>) with θ = η -1, and the relation (<ref>) that ∫_s^t E_N(t-r) F_N (X^N(r)) r _L^p(Ω; ^̋η) ≤C∫_s^t F(v^N(r)) _L^p(Ω; Ḣ^η-1) r ≤C∫_s^t ( 1 + v^N(r) _L^γ p(Ω; Ḣ^1)^γ) r ≤C( t - s ). Finally, using the Burkholder–Davis–Gundy inequality, the stability of the sine and cosine operators as well as of the projection operator, and the assumption (<ref>) on the noise, we obtain ∫_s^t E_N(t-r) B_N W^Q(r) _L^p(Ω; ^̋η) ≤( ∫_s^t S(t-r) Λ^η-1/2_ℒ_2^0^2 + C(t-r) Λ^η-1/2_ℒ_2^0^2 r )^1/2 ≤C ( ∫_s^t Λ^η-2/2Λ^1/2 Q^1/2_ℒ_2^2 r )^1/2 ≤C( t - s )^1/2. It remains to collect all the above estimates to conclude the proof. The following theorem gives the strong error, in the ^̋1-norm, of the spatial approximation of the stochastic wave equation (<ref>) by the spectral Galerkin method. Let X(t), resp. X^N(t), denote the mild solution (<ref>) of the considered stochastic wave equation with nonlinear damping, resp. the mild solution (<ref>) of its spectral Galerkin approximation. Under Assumptions <ref> and <ref> and assuming that X_0_L^p(Ω;^̋2) < ∞ for some p ≥ 2, we have for dimension d=1, the error estimate X(t) - X^N(t) _L^2( Ω;^̋1 )≤Cλ_N^-1/2. For dimension d=2, for any sufficiently small parameter ϵ > 0, the error estimate reads X(t) - X^N(t) _L^2( Ω;^̋1 )≤Cλ_N^-1/2+ϵ. To carry out the convergence analysis of the spectral Galerkin method, we first define an auxiliary process X^N(t) satisfying X^N(t) = A_N X^N(t) t + F_N (X(t)) t + B_N W^Q(t), X^N(0)=X^N(0), which can be regarded as a linear SPDE and its solution is given by X^N(t) = E_N(t) X^N(0) + ∫_0^t E_N(t-s) F(X(s)) s + ∫_0^t E_N(t-s) B_N W^Q(s). We can then split the error of the spatial approximation into two terms: X(t) - X^N(t) _L^2(Ω;^̋1)≤ X(t) - X^N(t) _L^2(Ω;^̋1) + X^N(t) - X^N(t) _L^2(Ω;^̋1). For the first error term, we further divide it into three terms using the triangle inequality and the fact that F_N=P_N F. For any p≥2, we then obtain X(t) -X^N(t) _L^p(Ω;^̋1) ≤ E(t) X(0) - E_N(t) X^N(0)_L^p(Ω;^̋1) + ∫_0^t ( E(t-s) - E_N(t-s) ) F(X(s)) s _L^p(Ω;^̋1) + ∫_0^t ( E(t-s) B - E_N(t-s) B_N ) W^Q(s) _L^p(Ω;^̋1) =: I_1 + I_2 + I_3. We now estimate the above three terms. For the first one, owing to the properties of the projection operator, see equation (<ref>), and using that X(0)_L^p(Ω;^̋2) < ∞, one gets the estimate I_1 ≤C λ_N^-1/2X(0)_L^p(Ω;^̋2)≤C λ_N^-1/2. For the term I_2, we use again (<ref>), the moment bound for the mild solution given in (<ref>), and the assumption (<ref>) on the noise to show I_2 = ( I - P_N ) ∫_0^t E(t-s) F (X(s)) s _L^p(Ω;^̋1) = ( I - P_N ) ( X(t) - E(t) X(0) - ∫_0^t E(t-s) B W^Q(s) ) _L^p(Ω;^̋1) ≤C λ_N^-1/2( X(t) _L^p(Ω;^̋2) + X(0) _L^p(Ω;^̋2) + Λ^1/2 Q^1/2_ℒ_2) ≤C λ_N^-1/2. For the term I_3, the Itô isometry, stability properties of the operators C(t), S(t), (<ref>) and (<ref>) provide us with the estimate I_3 ≤( ∫_0^t ( S(t-s) - S_N(t-s) _ℒ_2^0^2 + C(t-s) - C_N(t-s) _ℒ_2^0^2) s )^1/2 ≤( ∫_0^t (I - P_N) Λ^-1/2Λ^1/2 Q^1/2_ℒ_2^2 s )^1/2 ≤C λ_N^-1/2. Thus, combining the above, we obtain the bound X(t) -X^N(t) _L^p(Ω;^̋1)≤Cλ_N^-1/2. To deal with the second error term X^N(t) - X^N(t) _L^2(Ω;^̋1), we need the regularity of X^N(t) that we now show. 
By definition of the auxiliary process and the use of our assumptions as well as the regularity of the mild solution of the stochastic wave equation, we obtain the bound X^N(t) _L^p(Ω;^̋2) ≤ E_N (t) X^N(0) _L^p(Ω;^̋2) + ∫_0^t E_N(t-s) F ( X(s) ) s _L^p(Ω;^̋2) + ∫_0^t E_N(t-s) B_N W^Q(s) _L^p(Ω;^̋2) ≤ X(0) _L^p(Ω;^̋2) + ∫_0^t E(t-s) F(X(s)) s _L^p(Ω;^̋2) + Λ^1/2 Q^1/2_ℒ_2 ≤C < ∞. Next, applying the dissipative condition of the nonlinearity F gives ( X^N(t) - X^N(t) _^̋1^2 ) t = 2 ⟨X^N(t) - X^N(t), ( X^N(t) - X^N(t) ) t⟩_^̋1 = 2 ⟨X^N(t) - X^N(t), A_N ( X^N(t) - X^N(t) ) ⟩_^̋1 + 2 ⟨X^N(t) - X^N(t), F_N ( X(t) ) - F_N (X^N(t) ) ⟩_^̋1 = 2 ⟨X^N(t) - X^N(t), F_N (X^N(t) ) - F_N (X^N(t) ) ⟩_^̋1 + 2 ⟨X^N(t) - X^N(t), F_N (X(t)) - F_N (X^N(t) ) ⟩_^̋1 ≤ 2 X^N(t) - X^N(t) _^̋1^2 + 2 ⟨X^N(t) - X^N(t), F_N (X(t)) - F_N ( X^N(t) ) ⟩_^̋1. Therefore, in dimension d=1, integrating both sides of the above relation over [0,t] and using the Sobolev embedding inequality Ḣ^1 ⊂ V:=C(𝒟;ℝ), the regularity of the auxiliary and mild solutions, one gets the estimate X^N (t) - X^N(t) _^̋1^2 ≤C∫_0^t X^N(s) - X^N(s) _^̋1^2 s +C∫_0^t F_N ( v(s) ) - F_N (v^N(s)) ^2 s ≤C∫_0^t X^N(s) - X^N(s) _^̋1^2 s + C∫_0^t ( 1 + v^N(s)_V^2(γ-1) + v(s)_V^2(γ-1)) v(s) - v^N(s) ^2 s ≤C∫_0^t X^N(s) - X^N(s) _^̋1^2 s + C∫_0^t ( 1 + v^N(s)_1^2(γ-1) + v(s)_1^2(γ-1)) X(s) - X^N(s) _^̋1^2 s. Unfortunately, the embedding Ḣ^1 ⊂ V is not valid in dimension d=2. As a result, the error analysis can only be done at the expense of an order reduction due to an infinitesimal factor. Indeed, in this case and for ϵ>0, we obtain the estimate X^N (t) - X^N(t) _^̋1^2 ≤C∫_0^t X^N(s) - X^N(s) _^̋1^2 s +C∫_0^t Λ^ϵ P_N Λ^-ϵ ( F ( v(s) ) - F (v^N(s)) ) ^2 s ≤C∫_0^t X^N(s) - X^N(s) _^̋1^2 s +C λ_N^2ϵ ∫_0^t ( F ( v(s) ) - F (v^N(s)) ) _L^2/1+2ϵ^2 s ≤C∫_0^t X^N(s) - X^N(s) _^̋1^2 s + C λ_N^2ϵ ∫_0^t ( 1 + v^N(s)_L^(γ-1)/ϵ^2(γ-1) + v(s)_L^(γ-1)/ϵ^2(γ-1)) v(s) - v^N(s) ^2 s ≤C∫_0^t X^N(s) - X^N(s) _^̋1^2 s + C λ_N^2ϵ ∫_0^t ( 1 + v^N(s)_1^2(γ-1) + v(s)_1^2(γ-1)) X(s) - X^N(s) _^̋1^2 s, where the inequality (<ref>) and Hölder's inequality ab_L^2/1+2ϵ≤a_L^1/ϵb_L^2 were used. Finally, taking expectation in the above error estimates and using Gronwall's lemma, Hölder's inequality, the regularity of X(s), X^N(s) and equation (<ref>), we obtain for dimension d=1, the error bound for the second error term X^N(t) - X^N(t) _L^2(Ω;^̋1)≤C λ_N^-1/2. In dimension d=2, for a sufficiently small ϵ > 0, we obtain the error bound for the second error term X^N(t) - X^N(t) _L^2(Ω;^̋1)≤C λ_N^-1/2+ϵ. This, together with the bound (<ref>) for the first error term, finishes the proof. Following the same approaches as in the proofs of Lemma <ref> and Corollary <ref>, one can show the exponential integrability property of the spatial approximation v^N(s). Under the setting of Lemma <ref>, for any c > 0 and p ≥ 2, it holds that [ exp( ∫_0^T c v^N(s)_1^p s ) ] < ∞. The forthcoming theorem concerns the strong convergence in the $̋-norm, which provides estimates for the approximation of the first component of the problem in theL^2-norm. Its proof heavily relies on the exponential integrability properties ofv(t)andv^N(t), as well as certain commutative conditions on the nonlinear termF. Let X(t), resp. X^N(t), denote the mild solution (<ref>) of the considered stochastic wave equation with nonlinear damping, resp. the mild solution (<ref>) of its spectral Galerkin approximation. 
Under Assumptions <ref> and <ref> and assuming that X(0)_L^p(Ω;^̋2) < ∞ for some p ≥ 2, we have, for dimension d=1, the strong error estimate X(t) - X^N(t) _L^2( Ω;)̋≤Cλ_N^-1. The strong error of Galerkin's method can be divided into two parts using the triangle inequality X(t) - X^N(t) _L^2( Ω;)̋≤ ( I - P_N ) X(t) _L^2( Ω;)̋ + P_N X(t) - X^N(t) _L^2( Ω;)̋. According to the spatial regularity of X(t) and the bound (<ref>) on the mild solution, we deduce ( I - P_N ) X(t) _L^2( Ω;)̋ = ( [ ( I - P_N ) u(t) ^2 + ( I - P_N ) v(t) _-1^2 ])^1/2 ≤C λ_N^-1 X(t) _L^2( Ω;^̋2 )≤C λ_N^-1. It remains to bound the term P_N X(t) - X^N(t) _L^2( Ω;)̋. By the Newton–Leibniz formula and Lemma <ref>, we obtain t P_N X(t) - X^N(t) _^2 = 2 ⟨ P_N X(t) - X^N(t), t ( P_N X(t) - X^N(t)) ⟩_ = 2 ⟨ P_N X(t) - X^N(t), A_N ( P_N X(t) - X^N(t) ) + F_N( X(t) ) - F_N( X^N(t) ) ⟩_ = 2 ⟨ P_N v(t) - v^N(t), F ( P_N v(t) ) - F ( v^N(t) ) ⟩_-1 + 2 ⟨ P_N v(t) - v^N(t), F ( v(t) ) - F ( P_N v(t) ) ⟩_-1 ≤C( 1 + v(t) _1^γ-1 + v^N(t) _1^γ-1) P_N X(t) - X^N(t) _^2 +C( 1 + v(t) _1^2(γ-1) + v^N(t) _1^2(γ-1)) X(t) - P_N X(t) _^2. The above then implies the bound P_N X(t) - X^N(t) _^2 ≤C λ_N^-2 exp( ∫_0^T ( 1 + v(s) _1^γ-1 + v^N(s) _1^γ-1 ) s ) ×∫_0^t ( 1 + v(s) _1^2(γ-1) + v^N(s) _1^2(γ-1)) X(s) _^̋2^2 s, where we used Gronwall's inequality. Taking expectation and using Corollary <ref> and Lemma <ref> concludes the proof. Observe that in dimension d=2, the Sobolev embedding inequality Ḣ^1 ⊂ V:=C(𝒟;ℝ) does not hold and thus one cannot use Lemma <ref> to arrive at (<ref>) in the case d = 2. § STRONG CONVERGENCE RATES OF THE FULL DISCRETIZATION In this section, we analyze the strong convergence rate of the temporal discretization of the semi-discrete problem (<ref>) coming from a spectral Galerkin approximation of the stochastic wave equation (<ref>). Let M ∈ℕ and consider a uniform mesh { t_0, t_1, ⋯, t_M } of the interval [0, T] satisfying t_m = m τ with τ = T/M being the time stepsize. To motivate the proposed time integrator, we observe that the mild solution of the semi-discrete problem can be approximated as follows X^N(t_1) = E_N( t_1-t_0 ) X^N(t_0) + ∫_t_0^t_1 E_N( t_1-s ) F_N ( X^N (s) ) s + ∫_t_0^t_1 E_N( t_1-s ) B_N W^Q(s) ≈ E_N( τ ) X^N(t_0) + τ F_N ( X^N (t_1) ) + E_N(τ) B_N ( W^Q(t_1)-W^Q(t_0) ). Hence, we define the fully discrete numerical solution by X_N,m+1 = E_N(τ) X_N,m + τ F_N(X_N,m+1) + E_N(τ) B_N Δ W_m, Δ W_m := W^Q(t_m+1) - W^Q(t_m). By noting that the nonlinear mapping F_N satisfies the monotonicity condition in ^̋1: ⟨ F_N (X) - F_N (Y), X - Y ⟩_^̋1 ≤ C_1 X - Y _^̋1^2 , X, Y ∈ U_N × U_N and thanks to the uniform monotonicity theorem in <cit.>, the implicit scheme (<ref>) is well-defined and a.s. uniquely solvable in ^̋1 for small enough time stepsize τ. To analyze the error of the above numerical scheme, we begin by rewriting the mild solution of the spectral Galerkin method (<ref>) as follows X^N(t_m+1) = E_N(τ) X^N(t_m) + ∫_t_m^t_m+1 E_N(t_m+1 - s) F_N(X^N(s)) s + 𝒪_N,t_m,t_m+1 = E_N(τ) X^N(t_m) + τ F_N(X^N(t_m+1)) + E_N(τ) B_N Δ W_m + R_m+1, where we define 𝒪_N,t_m,t_m+1 := ∫_t_m^t_m+1 E_N (t_m+1-s) B_N W^Q(s) and R_m+1 := ∫_t_m^t_m+1( E_N(t_m+1 - s) F_N(X^N(s)) - F_N(X^N(t_m+1)) ) s + 𝒪_N,t_m,t_m+1 - E_N(τ) B_N Δ W_m. 
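Before proceeding with the error analysis, we remark that the scheme (<ref>) is straightforward to implement, since E_N(τ) acts mode by mode in the eigenbasis of Λ. The following Python sketch illustrates one possible realization in dimension d=1 with f(v)=v-v^3 and Q=Λ^{-δ}; it is provided for illustration only: the midpoint quadrature used for the Nemytskij projection P_N f(v), the fixed-point solver for the implicit equation in the velocity component, and all numerical parameter values are our own choices and are not taken from the analysis above.

```python
# Minimal illustrative sketch (not from the paper) of the fully discrete scheme:
# spectral Galerkin in space and the modified implicit exponential Euler step
#   X_{N,m+1} = E_N(tau) X_{N,m} + tau * F_N(X_{N,m+1}) + E_N(tau) B_N dW_m
# for the damped stochastic wave equation on D = (0,1) (d = 1), homogeneous
# Dirichlet boundary conditions, f(v) = v - v^3 and Q = Lambda^{-delta}.
import numpy as np

N, M, T = 100, 2 ** 9, 1.0           # Galerkin modes, time steps, final time
tau     = T / M
delta   = 1.505                      # Q = Lambda^{-delta}; requires delta > 1 + d/2
rng     = np.random.default_rng(0)

k     = np.arange(1, N + 1)
lam   = (k * np.pi) ** 2             # eigenvalues of the Dirichlet Laplacian on (0,1)
sqlam = np.sqrt(lam)

# midpoint grid used to evaluate the Nemytskij operator f(v) = v - v^3
G   = 400
x   = (np.arange(G) + 0.5) / G
phi = np.sqrt(2.0) * np.sin(np.outer(x, k * np.pi))   # phi[i, j] = phi_j(x_i)

def f(v):
    return v - v ** 3

def proj_f(v_hat):
    """Coefficients of P_N f(v), where v is given by its spectral coefficients."""
    v_phys = phi @ v_hat                  # synthesize v on the grid
    return phi.T @ f(v_phys) / G          # midpoint quadrature of <f(v), phi_j>

def apply_E(u_hat, v_hat, t):
    """Apply the semigroup E_N(t) (cosine/sine operators) mode by mode."""
    c, s = np.cos(t * sqlam), np.sin(t * sqlam)
    return c * u_hat + (s / sqlam) * v_hat, -sqlam * s * u_hat + c * v_hat

# initial data in the spirit of the numerical experiments below: random 0/1
# coefficients divided by the eigenvalues for u_0, and v_0 = 0
u_hat = rng.integers(0, 2, N) / lam
v_hat = np.zeros(N)

for m in range(M):
    # Q-Wiener increment in the eigenbasis: independent N(0, tau * lam_j^{-delta})
    dW = np.sqrt(tau) * lam ** (-delta / 2.0) * rng.standard_normal(N)
    # explicit part E_N(tau)(X_m + B_N dW_m); B_N dW_m only enters the v-component
    bu, bv = apply_E(u_hat, v_hat + dW, tau)
    # implicit equation v_{m+1} = bv + tau * P_N f(v_{m+1}); since F_N has zero
    # u-component, u_{m+1} = bu.  A plain fixed-point iteration is used here,
    # which suffices for small tau; a Newton solver could be used instead.
    v_new = bv.copy()
    for _ in range(100):
        v_next = bv + tau * proj_f(v_new)
        if np.max(np.abs(v_next - v_new)) < 1e-12:
            v_new = v_next
            break
        v_new = v_next
    u_hat, v_hat = bu, v_new
```

In a convergence experiment such as the one reported in Section <ref>, this time loop would be repeated over many independent noise samples and over a range of stepsizes τ = 2^{-j}, and the mean-square errors against a reference solution computed with a finer stepsize would be estimated by Monte Carlo averaging.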
Subtracting the fully discrete solution (<ref>) from the semi-discrete solution (<ref>) yields the following recursion for the temporal error e_N,m+1 = E_N(τ) e_N,m + τ( F_N ( X^N(t_m+1) ) - F_N ( X_N,m+1 ) ) + R_m+1, where we define e_N,m+1 := X^N(t_m+1) - X_N,m+1 . We are now fully prepared to prove an upper mean-square error bound for the temporal discretization of the semi-discrete problem (<ref>). This result plays a fundamental role in proving the strong convergence rate of the fully discrete solution. Suppose that Assumptions <ref> and <ref> are fulfilled and let the time stepsize τ≤ 16  max{0,C_1}. Then, for any m ∈{0,1,⋯,M-1}, M ∈, there exists a constant C > 0, independent of m and N, such that the error in the time discretization satisfies [ e_N,m+1_^̋1^2 ] ≤C ( ∑_i=1^m+1[ R_i _^̋1^2 ] + τ^-1∑_i=0^m[ [ R_i+1|ℱ_t_i ] _^̋1^2 ] ). By the definition of the error term e_N,m+1 in (<ref>), we know that, for m ∈{ 1, 2, ⋯, M-1 }, e_N,m+1 - τ( F_N(X^N(t_m+1)) - F_N(X_N,m+1) ) _^̋1^2 = E_N(τ) e_N,m + R_m+1_^̋1^2 = E_N(τ) e_N,m_^̋1^2 + R_m+1_^̋1^2 + 2 ⟨ E_N(τ) e_N,m , R_m+1⟩_^̋1. Taking expectation on both sides of the above relation and using Young's inequality, one deduces [ e_N,m+1 - τ( F_N ( X^N(t_m+1) ) - F_N ( X_N,m+1 ) ) _^̋1^2 ] = [ E_N(τ) e_N,m_^̋1^2 ] + [ R_m+1_^̋1^2 ] + 2 [ ⟨ E_N(τ) e_N,m , R_m+1⟩_^̋1] ≤[ e_N,m_^̋1^2 ] + [ R_m+1_^̋1^2 ] + 2 [ ⟨ E_N(τ) e_N,m , [ R_m+1|ℱ_t_m] ⟩_^̋1] ≤[ e_N,m_^̋1^2 ] + [ R_m+1_^̋1^2 ] + τ[ e_N,m_^̋1^2 ] + τ^-1[ [ R_m+1|ℱ_t_m] _^̋1^2 ], where the stability of E_N(τ) and properties of conditional expectations were used. Owing to the monotonicity of the nonlinearity F, one can then infer that [ e_N,m+1 - τ( F_N ( X^N(t_m+1) ) - F_N ( X_N,m+1 ) ) _^̋1^2 ] = [ e_N,m+1_^̋1^2 ] + τ^2 [ F_N ( X^N(t_m+1) ) - F_N ( X_N,m+1 ) _^̋1^2 ] - 2 τ[ ⟨ e_N,m+1, F_N ( X^N(t_m+1) ) - F_N ( X_N,m+1 ) ⟩_^̋1] ≥ ( 1 - 2 C_1 τ ) [ e_N,m+1_^̋1^2 ]. Therefore, by iterating and noting that e_N,0=0, we get [ e_N,m+1_^̋1^2 ] ≤ ( 1 - 2 C_1 τ )^ - ( m + 1 )[ e_N,0_^̋1^2 ] + ∑_i=1^m+1 ( 1 - 2 C_1 τ )^ - ( m + 2 -i )[ R_i _^̋1^2 ] + τ∑_i=0^m ( 1 - 2 C_1 τ )^ - ( m + 1 -i )[ e_N,i_^̋1^2 ] + τ^-1∑_i=0^m ( 1 - 2 C_1 τ )^ - ( m + 1 -i )[ [ R_i+1|ℱ_t_i] _^̋1^2 ] ≤C τ∑_i=0^m[ e_N,i_^̋1^2 ] +C∑_i=1^m+1[ R_i _^̋1^2 ] +C τ^-1∑_i=0^m[ [ R_i+1|ℱ_t_i] _^̋1^2 ], where we used (1 - 2 C_1 τ)^-m≤ 1 for C_1 < 0 and (1 - 2 C_1 τ)^-m≤ (1+3C_1τ)^m ≤ e^3C_1 T for C_1 > 0, τ≤ 16C_1. Finally, the use of Gronwall's inequality results in the desired assertion. Equipped with the above preparations, we now prove the mean-square convergence rates of the temporal discretization. Let Assumptions <ref> and <ref> be fulfilled and assume that X(0)_L^p(Ω;^̋2) < ∞ for some p≥2. Let X^N(t) denote the mild solution of the semi-discrete problem (<ref>) resulting from a spectral Galerkin discretization. Let X_N,m denote the numerical approximation in time by the modified implicit exponential Euler scheme (<ref>) with time stepsize τ = TMsatisfying τ≤ 16  max{0,C_1}. Then, there exists a positive constant C, independent of N,M ∈, such that sup_0 ≤ m ≤ M X^N(t_m) - X_N,m_L^2(Ω;^̋1)≤C τ, for dimension d=1, and for sufficiently small ϵ > 0, sup_0 ≤ m ≤ M X^N(t_m) - X_N,m_L^2(Ω;^̋1)≤C τ^1-ϵ, for dimension d=2. We start by estimating the two terms [ R_i+1_^̋1^2 ] and [ [ R_i+1|ℱ_t_i] _^̋1^2 ] on the right-hand side of the upper error bound given in Proposition <ref>. 
We first divide the term R_i+1_ L^2 ( Ω; ^̋1 ) into three parts as follows, R_i+1_ L^2 ( Ω; ^̋1 ) ≤∫_t_i^t_i+1( E_N(t_i+1 - s) F_N(X^N(s)) - F_N ( X^N(t_i+1) ) ) s _L^2(Ω; ^̋1) + ∫_t_i^t_i+1 E_N( t_i+1 - s) B_N W^Q(s) - E_N (τ) B_N Δ W_i _ L^2 ( Ω; ^̋1 ) ≤∫_t_i^t_i+1( E_N (t_i+1 - s) - I ) F_N ( X^N (s) ) s _ L^2 ( Ω; ^̋1 ) + ∫_t_i^t_i+1( F_N ( X^N(s) ) - F_N ( X^N(t_i+1) ) ) s _ L^2 ( Ω; ^̋1 ) + ∫_t_i^t_i+1[ E_N( t_i+1 - s) - E_N( t_i+1 - t_i) ] B_N W^Q(s) _ L^2 ( Ω; ^̋1 ) =: J_1 + J_2 + J_3. In dimension d=1. Together with (<ref>) in Lemma <ref>, the assumptions on F, Lemma <ref> and (<ref>), we arrive at J_1 = ∫_t_i^t_i+1( E_N(t_i+1 - s) -I ) F_N ( X^N (s) ) s _ L^2 ( Ω; ^̋1 ) ≤∫_t_i^t_i+1 ( t_i+1 - s) F ( v^N (s) ) _ L^2 ( Ω; Ḣ^1 ) s ≤Cτ^2. In dimension d=2, using (<ref>) with θ = 1-ϵ and Lemma <ref>, we arrive at the estimate J_1 = ∫_t_i^t_i+1( E_N(t_i+1 - s) -I ) F_N ( X^N (s) ) s _ L^2 ( Ω; ^̋1 ) ≤∫_t_i^t_i+1 ( t_i+1 - s)^ 1 - ϵ F ( v^N (s) ) _ L^2 ( Ω; Ḣ^1-ϵ ) s ≤C τ^ 1 - ϵ∫_t_i^t_i+1( 1 + v^N (s) _L^2γ( Ω; Ḣ^1 )^γ) s ≤C τ^ 2 - ϵ. For the second term J_2, in dimension d=1, we obtain J_2 = ∫_t_i^t_i+1( F_N (X^N(t_i+1)) - F_N (X^N(s)) ) s _ L^2 ( Ω; ^̋1 ) ≤∫_t_i^t_i+1F_N ( v^N(t_i+1)) - F_N (v^N(s)) _ L^2 ( Ω;U) s ≤C∫_t_i^t_i+1( 1+ v^N(s) _L^4(γ-1)(Ω;V)^γ-1 + v^N(t_i+1) _L^4(γ-1)(Ω;V)^γ-1) × v^N (t_i+1) - v^N(s) _ L^4 ( Ω; U ) s ≤Cτ^3/2. For the term J_2 in dimension d=2, we do a Taylor expansion of the nonlinearity, denote ξ(λ) := v^N(s) + λ (v^N(t_i+1) - v^N(s)), and apply Hölder's inequality, the Sobolev embedding inequality Ḣ^2ϵ⊂ L^2/1-2ϵ, for a sufficiently small ϵ, Assumption <ref> and (<ref>) to acquire the bound J_2 ≤∫_t_i^t_i+1 F ( v^N(t_i+1)) - F (v^N(s)) _ L^2 ( Ω; U ) s = ∫_t_i^t_i+1∫_0^1 F'(ξ(λ)) ( v^N(t_i+1) - v^N(s) ) λ_ L^2 ( Ω; U ) s ≤C∫_t_i^t_i+1∫_0^1 F'(ξ(λ)) _L^1/ϵ· v^N(t_i+1) - v^N(s) _L^2/1-2ϵ_ L^2 ( Ω; )λ s ≤C∫_t_i^t_i+1( 1 + sup_s∈[0,T] v^N(s)_L^4(γ-1)(Ω;Ḣ^1)^γ-1) · v^N (t_i+1) - v^N(s) _ L^4 ( Ω; Ḣ^2ϵ ) s ≤Cτ^3/2. For the last term J_3, we apply Itô's isometry, (<ref>) in Lemma <ref> (using the stability of the projection) and the assumption on the noise (<ref>) to obtain the bound J_3 = ∫_t_i^t_i+1[ E_N( t_i+1 - t_i) - E_N( t_i+1 - s) ] B_N W^Q(s) _ L^2 ( Ω; ^̋1 ) ≤C( ∫_t_i^t_i+1( ( S_N ( t_i+1 - t_i) - S_N ( t_i+1 - s) ) Λ^-1/2Λ^1/2 Q^1/2_ℒ_2^2. +. ( C_N ( t_i+1 - t_i) - C_N ( t_i+1 - s) ) Λ^-1/2Λ^1/2 Q^1/2_ℒ_2^2) s )^1/2 ≤C( ∫_t_i^t_i+1 ( s - t_i )^2 s )^1/2 ≤Cτ^3/2. Collecting all the above estimates, we get the bound [ R_i+1_^̋1^2 ] ≤C τ^3. It remains to estimate the term [ [ R_i+1|ℱ_t_i] _^̋1^2 ]. First, observe that the stochastic integral vanishes under the conditional expectation. Next, proceeding as in the proof of the estimates (<ref>)-(<ref>), for dimension d=1, we obtain the bound [ [ R_i+1 | ℱ_t_i] _^̋1 ^2 ] = [ [ ∫_t_i^t_i+1[ E_N(t_i+1 - s) F_N(X^N(s)) - F_N(X^N(t_i+1)) ] s | ℱ_t_i] _^̋1 ^2 ] ≤ [ ∫_t_i^t_i+1( E_N(t_i+1 - s) - I ) F_N( X^N (s) ) s _^̋1 ^2 ] + [ [ ∫_t_i^t_i+1( F_N ( X^N(t_i+1) ) - F_N ( X^N(s) ) ) s | ℱ_t_i] _^̋1 ^2 ] ≤C τ^4+ [ [ ∫_t_i^t_i+1( F_N ( v^N(t_i+1) ) - F_N ( v^N(s) ) ) s | ℱ_t_i] ^2 ] . For dimension d=2, we obtain the bound [ [ R_i+1 | ℱ_t_i] _^̋1 ^2 ] ≤C τ^4-2ϵ+ [ [ ∫_t_i^t_i+1( F_N ( v^N(t_i+1) ) - F_N ( v^N(s) ) ) s | ℱ_t_i] ^2 ]. Applying a Taylor expansion to the nonlinearity F_N, we get, for s ∈ [ t_i , t_i+1 ], F_N ( v^N (t_i+1) ) - F_N ( v^N(s) ) = F_N'( v^N(s) ) (v^N(t_i+1) - v^N(s)) + ∫_0^1 F_N”(χ(λ)) ( v^N(t_i+1) - v^N(s), v^N(t_i+1) - v^N(s) ) ( 1 - λ ) λ, where χ(λ) := v^N(s) + λ ( v^N(t_i+1) - v^N(s) ). 
The goal is now to estimate each terms in this Taylor expansion. First, one uses the definition of the mild solution (<ref>) of the semi-discrete problem and get the relation v^N ( t_i+1 ) - v^N ( s ) = - Λ_N^1/2 S_N(t_i+1 - s) u^N ( s ) + ( C_N(t_i+1 - s) - I ) v^N ( s ) + ∫_s^t_i+1 C_N(t_i+1 - r) F_N ( v^N(r) ) r + ∫_s^t_i+1 C_N (t_i+1 - r) W^Q(r) which can then be inserted in the above Taylor expansion. Next, owing to the property of stochastic integration and the conditional expectation, we get the relation [ ∫_t_i^t_i+1 F_N'( v^N (s) ) ∫_s^t_i+1 C_N( t_i+1 - r ) W^Q(r) s | ℱ_t_i] = ∫_t_i^t_i+1[ ∫_s^t_i+1 F_N'( v^N (s) ) C_N( t_i+1 - r ) W^Q(r) | ℱ_t_i] s = ∫_t_i^t_i+1[ [ ∫_s^t_i+1 F_N'( v^N (s) ) C_N( t_i+1 - r ) W^Q(r) | ℱ_s ] | ℱ_t_i] s = 0. Next, by virtue of the triangle inequality and the property of the conditional expectation, we have the decomposition [ [ ∫_t_i^t_i+1( F_N (v^N(t_i+1)) - F_N (v^N(s)) ) s | ℱ_t_i] ^2 ] ≤∫_t_i^t_i+1 F_N'(v^N(s)) ( - Λ_N^1/2 S_N(t_i+1 - s) u^N ( s ) ) s _L^2(Ω;U)^2 + ∫_t_i^t_i+1 F_N'(v^N(s)) ( C_N(t_i+1 - s) - I ) v^N ( s ) s _L^2(Ω;U)^2 + ∫_t_i^t_i+1 F_N'(v^N(s)) ∫_s^t_i+1 C_N(t_i+1-r) F_N(v^N(r)) r s _L^2(Ω;U)^2 + ∫_t_i^t_i+1∫_0^1 F_N”(χ(λ)) ( v^N(t_i+1) - v^N(s), v^N(t_i+1) - v^N(s) ) ( 1 - λ ) λ s _L^2(Ω;U)^2 =: K_1 + K_2 + K_3 + K_4. Our final task is to bound these four terms. For the first term, K_1, one uses Hölder's inequality, (<ref>) in Lemma <ref> and the spatial regularity of X^N(t) to show that, in dimension d=1, K_1 = ∫_t_i^t_i+1 F_N'(v^N(s)) ( S_N(t_i+1-s) - S_N(0) ) Λ_N^1/2 u^N(s) s _L^2(Ω;U)^2 ≤( ∫_t_i^t_i+1 F_N'(v^N(s)) ( S_N(t_i+1-s) - S_N(0) ) Λ_N^1/2 u^N(s) _L^2(Ω;U) s )^2 ≤C ( ∫_t_i^t_i+1( 1+v^N(s)_L^4(γ-1)(Ω;V)^γ-1) × ( S_N(t_i+1-s) - S_N(0) ) Λ_N^-1/2Λ_N u^N(s) _L^4(Ω;U) s )^2 ≤C ( ∫_t_i^t_i+1( 1+v^N(s)_L^4(γ-1)(Ω;V)^γ-1) × ( S(t_i+1-s) - S(0) ) Λ^-1/2Λ u^N(s) _L^4(Ω;U) s )^2 ≤C τ^2 ( ∫_t_i^t_i+1( 1+v^N(s)_L^4(γ-1)(Ω;V)^γ-1) u^N(s) _L^4(Ω;Ḣ^2) s )^2 ≤C τ^4. For dimension d=2, using Hölder's inequality and (<ref>), it follows that, for sufficiently small ϵ>0, K_1 ≤( ∫_t_i^t_i+1 F_N'(v^N(s)) ( S_N(t_i+1-s) - S_N(0) ) Λ_N^1/2 u^N(s) _L^2(Ω;U) s )^2 ≤C ( ∫_t_i^t_i+1 F'(v^N(s)) _L^1/ϵ· ( S(t_i+1-s) - S(0) ) Λ^-1/2Λ u^N(s) _L^2/1-2ϵ_ L^2 ( Ω; ) s )^2 ≤C τ^2-4ϵ( ∫_t_i^t_i+1( 1+v^N(s)_L^4(γ-1)(Ω;Ḣ^1)^γ-1) u^N(s) _L^4(Ω;Ḣ^2) s )^2 ≤C τ^4-4ϵ. Similarly to the above, for the second term in dimension d=1, one gets the bound K_2 ≤C τ^4. And for dimension d=2, one obtains the bound K_2 ≤C τ^4-4ϵ. The estimate for the term K_3 follows similarly to the one for the term K_1, but using the boundedness of the operator C_N(t) and Lemma <ref> instead. We obtain the bound K_3 = ∫_t_i^t_i+1 F_N'(v^N(s)) ∫_s^t_i+1 C_N(t_i+1-r) F_N(v^N(r)) r s _L^2(Ω;U)^2 ≤( ∫_t_i^t_i+1 F_N'(v^N(s)) ∫_s^t_i+1 C_N(t_i+1-r) F_N(v^N(r)) r _L^2(Ω;U) s )^2 ≤C ( ∫_t_i^t_i+1∫_s^t_i+1 F'(v^N(s)) _L^1/ϵ· C_N(t_i+1-r) F_N(v^N(r)) _L^2/1-2ϵ_ L^2 ( Ω; ) r s )^2 ≤C( ∫_t_i^t_i+1∫_s^t_i+1( 1+v^N(s)_L^4(γ-1)(Ω;Ḣ^1)^γ-1) F(v^N(r)) _L^4(Ω;Ḣ^2ϵ) r s )^2 ≤C τ^4. 
For the last term K_4, first in dimension d=1, it follows from the Sobolev embedding inequality Ḣ^1/2⊂ L^4, the fact that Ḣ^1⊂ V in dimension d=1, and the spatio-temporal regularity of the numerical solution v^N(t), that K_4 = ∫_t_i^t_i+1∫_0^1 F_N”(χ(λ)) ( v^N(t_i+1) - v^N(s), v^N(t_i+1) - v^N(s) ) ( 1 - λ ) λ s _L^2(Ω;U)^2 ≤( ∫_t_i^t_i+1∫_0^1 F_N”(χ(λ)) ( v^N(t_i+1) - v^N(s), v^N(t_i+1) - v^N(s) ) _L^2(Ω;U) λ s )^2 ≤C( ∫_t_i^t_i+1∫_0^1 ( 1+ v^N(s)+λ(v^N(t_i+1) - v^N(s)) _L^4(γ-2)(Ω;V)^γ-2) × v^N(t_i+1) - v^N(s) _L^8(Ω;Ḣ^1/2)^2 λ s )^2 ≤C( 1 + sup_s∈[0,T]v^N(s)_L^4(γ-2)(Ω;Ḣ^1)^γ-2)^2 ·( ∫_t_i^t_i+1 ( t_i+1 - s ) s )^2 ≤C τ^4. For dimension d=2, by the Sobolev embedding inequality Ḣ^1+ϵ/2+ϵ⊂ L^4+2ϵ, one gets K_4 ≤( ∫_t_i^t_i+1∫_0^1 F_N”(χ(λ)) ( v^N(t_i+1) - v^N(s), v^N(t_i+1) - v^N(s) ) _L^2(Ω;U) λ s )^2 ≤C( ∫_t_i^t_i+1∫_0^1 F_N”(χ(λ))_L^4/ϵ+ 2· v^N(t_i+1) - v^N(s)_L^4+2ϵ^2 _L^2(Ω;) λ s )^2 ≤C( 1 + sup_s∈[0,T]v^N(s)_L^4(γ-2)(Ω;Ḣ^1)^γ-2)^2 ·( ∫_t_i^t_i+1 v^N(t_i+1) - v^N(s) _L^8(Ω;Ḣ^1+ϵ/2+ϵ) ^2 s )^2 ≤C ( ∫_t_i^t_i+1 ( t_i+1 - s )^2/2+ϵ s )^2 ≤C τ^ 4 - 2ϵ/2+ϵ. Collecting all the above estimates, for dimension d=1, one arrives at the bound [ [ R_i+1|ℱ_t_i ] _^̋1^2 ] ≤C τ^4. For dimension d=2, one gets the bound [ [ R_i+1|ℱ_t_i ] _^̋1^2 ] ≤C τ^4-4ϵ. The above estimates, in conjunction with the bound (<ref>) and Proposition <ref>, finish the proof of the theorem. § NUMERICAL EXPERIMENTS We conclude this paper by illustrating the above theoretical findings with numerical experiments. Let us consider the stochastic wave equation with nonlinear damping {[ u = v t,; v = Δ u t + (v - v^3) t + W, in 𝒟× (0, 1] ]. with𝒟 = (0,1)^d, ford=1,2, equipped with homogeneous Dirichlet boundary condition. The Fourier coefficients of the initial positions are randomly set to0or1and the obtained vector is then divided by the eigenvalues of the Laplacian. The initial velocity is set to bev(0)=0. The covariance operators of the infinite-dimensional Wiener processW(t), fort ∈[0,1], are chosen asQ=Λ^-1.005-d/2. In what follows, we use the fully discrete scheme (<ref>) to approximate solution to the SPDE (<ref>). The strong error bounds are measured in the mean-square sense and the expectations are approximated by computing averages over1000samples. We have checked empirically that the Monte–Carlo errors are negligable. We now fixN = 100ford=1andN = 30ford=2and investigate the strong convergence rates in time by using the stepsizesτ= 2^-j,j=4,5,…,9. The reference solution is computed numerically by using the proposed time integrator with the reference stepsizeτ= 2^-10. In the loglog plots from Figure <ref>, one can observe that the approximation errors of the implicit exponential Euler scheme decrease with order1. This is consistent with our theoretical findings. To visually illustrate the error in space ford=1, we compute a reference solution by using the proposed numerical scheme withτ=2^-5andN=2^10. The resulting errors of six different mesh parametersN = 2^i, i = 4,5,…,9are plotted in Figure <ref> on a log–log scale. One can observe that the expected convergence rates agree with those indicated in Theorem <ref> and Theorem <ref>. 10adams1975sobolev R. A. Adams. Sobolev Spaces, Pure and Applied Mathematics, Volume 65, Elsevier, Amsterdam, 1975. anton2016full R. Anton, D. Cohen, S. Larsson and X. Wang. Full discretisation of semi-linear stochastic wave equations driven by multiplicative noise. SIAM J. Numer. Anal., 54(2):1093–1119, 2016. MR1944756 V. Barbu, G. Da Prato. The stochastic nonlinear damped wave equation. 
Appl. Math. Optim., 46(2-3):125–141, 2002. barbu2007stochastic V. Barbu, G. Da Prato and L. Tubaro. Stochastic wave equations with dissipative damping. Stochastic Process. Appl., 117(8):1001–1013, 2007. becker2017strong S. Becker, B. Gess, A. Jentzen and P. E. Kloeden. Strong convergence rates for explicit space-time discrete numerical approximations of stochastic Allen–Cahn equations. Stoch. Partial Differ. Equ. Anal. Comput., 11(1):211–268, 2023. Arnulf2013Galerkin D. Blomker and A. Jentzen. Galerkin approximations for the stochastic Burgers equation. SIAM J. Numer. Anal., 51(1):694–715, 2013. brehier2019strong C.-E. Bréhier, J. Cui and J. Hong. Strong convergence rates of semi-discrete splitting approximations for stochastic Allen–Cahn equation. IMA J. Numer. Anal., 39(4):2096–2134, 2019. brehier2018weak C.-E. Bréhier, M. Hairer and A. M. Stuart. Weak error estimates for trajectories of SPDEs under spectral Galerkin discretization. J. Comput. Math., 36(2):159–182, 2018. Cabana1972on E. M. Cabaña. On barrier problems for the vibrating string. Probab. Theory Related Fields, 22:13–24, 1972. cai2021weak M. Cai, S. Gan and X. Wang. Weak convergence rates for an explicit full-discretization of stochastic Allen–Cahn equation with additive noise. J. Sci. Comput., 86:34, 2021. campbell2018adaptive S. Campbell and G. J. Lord. Adaptive time-stepping for stochastic partial differential equations with non-Lipschitz drift. arXiv preprint arXiv:1812.09036, 2018. Cao2007spectral Y. Cao and L. Yin. Spectral Galerkin method for stochastic wave equations driven by space-time white noise. Commun. Pure Appl. Anal., 6(3):607–617, 2007. MR2244432 P.-L. Chow. Asymptotics of solutions to semilinear stochastic wave equations. Ann. Appl. Probab., 16(2):757–789, 2006. cohen2022numerical D. Cohen and A. Lang. Numerical approximation and simulation of the stochastic wave equation on the sphere. Calcolo, 59:32, 2022. cohen2013trigonometric D. Cohen, S. Larsson and M. Sigg. A trigonometric method for the linear stochastic wave equation. SIAM J. Numer. Anal., 51(1):204–222, 2013. cohen2016fully D. Cohen and L. Quer-Sardanyons. A fully discrete approximation of the one-dimensional stochastic wave equation. IMA J. Numer. Anal., 36(1):400–420, 2016. cui2019strong J. Cui and J. Hong. Strong and weak convergence rates of finite element method for stochastic partial differential equation with non-globally Lipschitz coefficients. SIAM J. Numer. Anal., 57(4):1815–1841, 2019. cui2019energy J. Cui, J. Hong, L. Ji and L. Sun. Energy-preserving exponential integrable numerical method for stochastic cubic wave equation with additive noise. arXiv preprint arXiv:1909.00575, 2019. cui2012semi-implicit J. Cui, J. Hong and L. Sun. Semi-implicit energy-preserving numerical schemes for stochastic wave equation via SAV approach. arXiv preprint arXiv:2208.13394, 2022. dalang2009minicourse R. C. Dalang, D. Khoshnevisan, C. Mueller, D. Nualart and Y. Xiao. A Minicourse on Stochastic Partial Differential Equations. Volume 152. Lecture Notes in Math. 1962, Springer-Verlag, Berlin, 2009. da2014stochastic G. Da Prato and J. Zabczyk. Stochastic Equations in Infinite Dimensions. Volume 152. Cambridge university press, 2014. feng2017finite X. Feng, Y. Li and Y. Zhang. Finite element methods for the stochastic Allen–Cahn equation with gradient-type multiplicative noise. SIAM J. Numer. Anal., 55(1):194–216, 2017. MR3078824 H. Gao, F. Liang, and B. Guo. Stochastic wave equations with nonlinear damping and source terms. Infin. Dimens. Anal. 
Quantum Probab. Relat. Top., 16(2):1350013-1–1350013-29, 2013. gyongy2016convergence I. Gyöngy, S. Sabanis and D. Šiška. Convergence of tamed Euler schemes for a class of stochastic evolution equations. Stoch. Partial Differ. Equ. Anal. Comput., 4(2):225–245, 2016. MR2426123 J. U. Kim. On the stochastic wave equation with nonlinear damping. Appl. Math. Optim., 58(1):29–67, 2008. Klioba2023pathwise K. Klioba and M. Veraar. Pathwise uniform convergence of time discretisation schemes for SPDEs. arXiv preprint arXiv:2303.00411, 2023. kovacs2020weak M. Kovács, A. Lang and A. Petersson. Weak convergence of fully discrete finite element approximations of semilinear hyperbolic SPDE with additive noise. ESAIM Math. Model. Numer. Anal., 54(6):2199–2227, 2020. MR3123856 M. Kovács, S. Larsson and F. Lindgren. Weak convergence of finite element approximations of linear stochastic evolution equations with additive noise II. Fully discrete schemes. BIT, 53(2):497–525, 2013. kovacs2015backward M. Kovács, S. Larsson and F. Lindgren. On the backward Euler approximation of the stochastic Allen–Cahn equation. J. Appl. Probab., 52(2):323–338, 2015. kovacs2010finite M. Kovács, S. Larsson and F. Saedpanah. Finite element approximation of the linear stochastic wave equation with additive noise. SIAM J. Numer. Anal., 48(2):408–427, 2010. kruse2014strong R. Kruse. Strong and Weak Approximation of Semilinear Stochastic Evolution Equations. Springer, 2014. lei2023numerical Z. Lei, C.-E. Bréhier and S. Gan. Numerical approximation of the invariant distribution for a class of stochastic damped wave equations. arXiv preprint arXiv:2306.13998, 2023. li2022finite Y. Li, S. Wu and Y. Xing. Finite element approximations of a class of nonlinear stochastic wave equations with multiplicative noise. J. Sci. Comput., 91:53, 2022. liu2021strong Z. Liu and Z. Qiao. Strong approximation of monotone stochastic partial differential equations driven by multiplicative noise. Stoch. Partial Differ. Equ. Anal. Comput., 9(3):559–602, 2021. lord2014introduction G. J. Lord, C. E. Powell and T. Shardlow. An Introduction to Computational Stochastic PDEs. Number 50. Cambridge University Press, 2014. qi2017accelerated R. Qi and X. Wang. An accelerated exponential time integrator for semi-linear stochastic strongly damped wave equation with additive noise. J. Math. Anal. Appl., 447(2):988–1008, 2017. qi2019error R. Qi and X. Wang. Error estimates of finite element method for semi-linear stochastic strongly damped wave equation. IMA J. Numer. Anal., 39(3):1594–1626, 2019. qi2019optimal R. Qi. and X. Wang. Optimal error estimates of Galerkin finite element methods for stochastic Allen–Cahn equation with additive noise. J. Sci. Comput., 80(2):1171–1194, 2019. quer2006space L. Quer-Sardanyons and M. Sanz-Solé. Space semi-discretisations for a stochastic wave equation. Potential Anal., 24(4):303–332, 2006. schurz2008analysis H. Schurz. Analysis and discretization of semi-linear stochastic wave equations with cubic nonlinearity and additive space-time noise. Discrete Contin. Dyn. Syst. Ser. S, 1(2):353–363, 2008. stuart1996dynamical A. Stuart and A. Humphries. Dynamical Systems and Numerical Analysis. Cambridge University Press, Cambridge, 1996. thomee2006galerkin V. Thomée. Galerkin Finite Element Methods for Parabolic Problems. Springer, Berlin, 2006. walsh2006numerical J. B. Walsh. On numerical solutions of the stochastic wave equation. Illinois J. Math., 50(1-4):991–1018, 2006. wang2015exponential X. Wang. 
An exponential integrator scheme for time discretization of nonlinear stochastic wave equation. J. Sci. Comput., 64(1):234–263, 2015. wang2020efficient X. Wang. An efficient explicit full-discrete scheme for strong approximation of stochastic Allen–Cahn equation. Stochastic Process. Appl., 130(10):6271–6299, 2020. wang2014higher X. Wang, S. Gan and J. Tang. Higher order strong approximations of semilinear stochastic wave equation with additive space-time white noise. SIAM J. Sci. Comput., 36(6):A2611–A2632, 2014. wang2020meanEuler X. Wang, J. Wu and B. Dong. Mean-square convergence rates of stochastic theta methods for SDEs under a coupled monotonicity condition. BIT, 60(3):759–790, 2020.
http://arxiv.org/abs/2307.03183v1
20230706175828
Whisper-AT: Noise-Robust Automatic Speech Recognizers are Also Strong General Audio Event Taggers
[ "Yuan Gong", "Sameer Khurana", "Leonid Karlinsky", "James Glass" ]
cs.SD
[ "cs.SD", "eess.AS" ]
In this paper, we focus on Whisper <cit.>, a recent automatic speech recognition model trained with a massive 680k-hour labeled speech corpus recorded in diverse conditions. We first show an interesting finding that while Whisper is very robust against real-world background sounds (e.g., music), its audio representation is actually not noise-invariant, but is instead highly correlated to non-speech sounds, indicating that Whisper recognizes speech conditioned on the noise type. With this finding, we build a unified audio tagging and speech recognition model Whisper-AT by freezing the backbone of Whisper, and training a lightweight audio tagging model on top of it. With <1% extra computational cost, Whisper-AT can recognize audio events, in addition to spoken text, in a single forward pass. § INTRODUCTION In recent years, significant progress has been made in advancing automatic speech recognition (ASR) performance. Specifically, self-supervised learning schemes such as wav2vec2.0 <cit.> and Hubert <cit.> have achieved great success, requiring minimal labeled training data. However, since the public model checkpoints are trained with clean speech data (e.g., Librispeech <cit.> or Libri-light <cit.>), their robustness in real-world environments is limited. To improve noise robustness, the Whisper <cit.> model uses 680K hours of labeled speech collected from the Internet with diverse environments and recording setups as the training data, and reports better robustness over existing ASR models. In this paper, we first show a counter-intuitive finding that while Whisper is robust against background sounds (noise for ASR), its audio representation is actually not noise-invariant, but instead encodes rich information about non-speech background sounds (shown in Figure <ref> and discussed in detail in Section <ref>), indicating that the Whisper model does not learn a noise-invariant representation, but encodes the noise type and then recognizes speech conditioned on the noise type. One exciting application of the above finding is that we can build a unified model for ASR and Audio Tagging (i.e., recognizing general audio events) based on Whisper since it 1) is robust to noise, and 2) encodes rich general audio event information. Currently, ASR and audio tagging (AT) models are typically run independently. In many applications such as video transcribing, voice assistants, and hearing aid systems, we desire to get both spoken text and an acoustic scene analysis from the audio, but running two systems is computationally expensive. In this work, we show that with <1% extra computational cost, we can make Whisper recognize audio events together with spoken text in a single forward pass. Our model achieves an mAP of 41.5 on AudioSet, which is slightly worse than standalone AT models, but is nevertheless over 40× faster. Related Work: To the best of our knowledge, we are the first to report that a robust ASR actually learns a noise-variant representation; most previous work focuses on noise-invariant representations <cit.>. For ASR and AT model unification, the closest works are <cit.>. 
In <cit.>, a unified keyword spotting and audio tagging model is proposed, however, keyword spotting only considers up to 35 words and is a much simpler task than the large‐vocabulary continuous speech recognition task we are targeting. In <cit.>, joint ASR and audio tagging/captioning training frameworks are proposed, but in this work, we show that Whisper already encodes rich general audio information even without any explicit audio tagging training. In <cit.>, ASR representations are tested for the audio tagging task, but the overall performance is unsatisfactory. § WHISPER ROBUST ASR MODEL Whisper <cit.> is a recently proposed robust ASR model that features a standard Transformer <cit.>-based encoder-decoder architecture. The main novelty of Whisper is not its architecture, but its training data and training scheme. Specifically, the 680K-hour non-public training set contains audio-transcript pairs collected from the Internet with a very broad distribution of audio from many different environments, recording setups, speakers, and languages. Significant effort was made to filter out low-quality data. Compared with the most commonly used Librispeech (960 hours) and Libri-light (60K hours) data that are collected from audiobooks, the Whisper training data is much larger and more diverse, but also has noisy labels. We identify this as the main factor that differentiates Whisper from existing ASR models. During Whisper training, only text transcripts are used as supervision signals, no audio event labels are given. In this paper, we use the Whisper-Large model unless otherwise stated. Since Whisper is an encoder-decoder model, we only use the audio encoder part of Whisper for audio tagging, which consists of 32 Transformer layers with a dimension of 1280. § NOISE-ROBUST ASR LEARNS NOISE-VARIANT REPRESENTATIONS Thanks to the diverse 680K-hour training data, Whisper has been shown to be more robust under white and pub noise than its counterparts <cit.>. We confirmed this point by evaluating Whisper and other state-of-the-art ASR models on Librispeech clean speech data that were contaminated with ESC-50 <cit.> environmental sounds with various signal-to-noise ratios (SNRs). As shown in Figure <ref> (upper), Whisper has superior performance. What is the noise-robust mechanism of Whisper? It is commonly believed that the representation of a robust ASR model should be noise-invariant, and researchers often set noise-invariance as an explicit inductive bias for robust ASR (e.g., in <cit.>). However, we, perhaps surprisingly, found that Whisper's representation is actually noise-variant and encodes rich non-speech background sound information. Specifically, we froze the entire Whisper model and input audio samples from the ESC-50 environment sound dataset <cit.>. We then extracted the intermediate representation from every layer of Whisper and trained a linear layer on top of it to classify the sound class from 50 possible classes. If Whisper did not encode background sound information, or its representations were invariant to background sounds, the sound classification result would be low, and vice versa. As shown in Figure <ref> (lower), the Whisper representations had the best ESC-50 sound classification accuracy compared to other SOTA ASR models, indicating that its representation encodes most background sound information. 
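For concreteness, the probing protocol just described can be sketched in a few lines of Python on top of the open-source whisper package. The snippet below is purely illustrative and is not the implementation behind the reported results: the ESC-50 data handling, the training hyper-parameters, and the exact attribute names (which may differ across whisper package versions) are assumptions on our part.

```python
# Illustrative sketch (not the authors' code) of the layer-wise linear probing
# described above: freeze Whisper, collect the output of every encoder block,
# mean-pool over time, and fit one linear ESC-50 classifier per layer.
import torch
import whisper

model = whisper.load_model("large", device="cpu")   # frozen; CPU keeps weights in fp32
model.eval()

features = []        # filled by the hooks with one tensor per encoder block
hooks = [
    blk.register_forward_hook(lambda _mod, _inp, out: features.append(out.detach()))
    for blk in model.encoder.blocks
]

@torch.no_grad()
def layer_features(wav_path):
    """Return one (1, d) mean-pooled representation per encoder layer."""
    features.clear()
    audio = whisper.pad_or_trim(whisper.load_audio(wav_path))        # 30 s window
    mel = whisper.log_mel_spectrogram(audio, n_mels=model.dims.n_mels).unsqueeze(0)
    model.encoder(mel.to(model.device))   # forward pass; hooks populate `features`
    return [f.mean(dim=1) for f in features]                         # temporal mean pooling

def train_probes(train_items, num_classes=50, epochs=20, lr=1e-3):
    """train_items: list of (wav_path, esc50_label) pairs; returns one probe per layer."""
    dim, n_layers = model.dims.n_audio_state, len(model.encoder.blocks)
    probes = [torch.nn.Linear(dim, num_classes) for _ in range(n_layers)]
    opts = [torch.optim.Adam(p.parameters(), lr=lr) for p in probes]
    for _ in range(epochs):
        for wav_path, label in train_items:
            feats = layer_features(wav_path)
            target = torch.tensor([label])
            for probe, opt, f in zip(probes, opts, feats):
                loss = torch.nn.functional.cross_entropy(probe(f), target)
                opt.zero_grad(); loss.backward(); opt.step()
    return probes
```

Comparing the held-out ESC-50 accuracy of these probes layer by layer then yields the kind of layer-wise comparison summarized in Figure <ref> (lower).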
In addition, for all other ASR models, representations from deeper layers led to lower sound classification accuracies, showing that the models are learning to encode speech information, and ignore background sound information. Whisper does not have this behavior, since representations from deeper layers also encode background sound information. The fact that Whisper is noise-robust while its representation encodes rich background sound information reveals that the robustness mechanism of Whisper is different from other ASR models (including wav2vec2-robust <cit.>). Instead of learning a noise-invariant representation, it first encodes the background sound and then transcribes text conditioned on the type of noise. We confirmed this point by further checking the class-wise relationship between Whisper's robustness against a specific background sound class, and its potential ability to recognize the sound class in Figure <ref>. We found there is indeed a positive correlation between them. Compared to noise-aware training <cit.> that requires manually inputting noise type to the model, Whisper learns it directly from its massive 680K hour training set. Note that the discussion in this section is mostly based on Whisper, and our experiments do not indicate that noise-invariance does not help noise-robust ASR, nor that a noise-robust ASR's representation should be noise-variant. In fact, we believe encouraging noise-invariant representations <cit.> is a practical solution in self-supervised learning or small data cases. Whisper training requires industry-level computational resources and is expensive. What we hope to convey is that a noise-robust ASR model does not have to learn a noise-invariant representation, and that there exist other ways to be noise-robust - a noise-conditioned model like Whisper can, and indeed does, work very well. § UNIFYING ASR AND AUDIO TAGGING MODEL One exciting application of the finding in Section <ref> is that we are able to build a unified model for ASR and Audio Tagging based on Whisper to recognize spoken text and background sounds (e.g., music, horn, etc) simultaneously, which is highly desirable in applications such as video transcribing, voice assistants, and hearing aid systems. Whisper is ideal as a backbone for such a unified model because 1) it is robust to background sounds, and 2) its intermediate representations encode rich general audio event information, which serves as a solid base for audio tagging. Nonetheless, the original Whisper does not output sound labels, so we need to train a model on top of Whisper intermediate representations to enable it to predict a sound class. Note that we intentionally do not modify the original weights of the Whisper model, but instead add new audio tagging layers on top of it so that the Whisper ASR ability is not changed and text and audio labels can be generated in a single forward pass. We call this unified ASR and Audio Tagging model Whisper-AT. In previous sections, we applied a basic linear layer on the representation of a single layer for probing purposes. In this section, we discuss more advanced methods that lead to better audio tagging performance. * : The most basic method, we first apply a temporal mean pooling over the last layer representation of Whisper and then apply a linear layer to map it to the prediction. * : As shown in Figure <ref>, we find the last layer is not optimal for all sound classes. 
Thus we weighted average (WA) the representations from all layers and set the weight to be learnable before temporal mean pooling and linear layer, so this approach leverages representations from all layers. * : Temporal mean pooling removes all temporal details, and a single linear layer may be too simple for audio tagging. Therefore, we replace the linear layer of with a single-head temporal Transformer layer for this model. * : Time and layer-wise Transformer (our main method, shown in Figure <ref>). Though weighted averaging leverage representation of all layers, all sound classes use a fixed set of weights. In Figure <ref>, we show that different sound classes achieve their best performance using different representation layers. Therefore, ideally, each class should have its own set of weights. This motivates us to build an attention mechanism over the layers. Specifically, we apply another layer-wise Transformer to the output of the temporal Transformer. Efficient Design: As the original goal of Whisper-AT is being more computationally efficient than two independent ASR and AT models, we aim to minimize the extra cost for audio tagging. Introducing a new Transformer layer in and is relatively expensive. Consider the complexity of Transformer is O(d^2n+dn^2), where d is the dimension and n is the input length of the Transformer, for each 10-second input audio, the representations of each Whisper layer is in the shape of (n=500, d=1280). If the temporal and layer Transformer have the same n and d as Whisper, their computational cost is not negligible. Therefore, as illustrated in Figure <ref>, we propose the following efficient design: 1) We add a mean pooling layer to each Whisper representation to lower the time sequence length n from 500 to 25; 2) We add an optional linear projection layer to lower d from 1280 to 512 before audio tagging Transformers (denoted by ); and 3) For , we first conduct weighted averaging and then apply a temporal Transformer, for , we use a single temporal Transformer for all layers. Thus both and only need one temporal Transformer. § EXPERIMENTS As mentioned in Section <ref>, we intentionally freeze the weights of the original Whisper model, so the ASR performance of Whisper-AT is exactly the same as the original Whisper <cit.>. Thus we only conduct experiments on the audio tagging task. §.§ Experiment Settings Dataset: We use AudioSet and ESC-50 datasets following standard evaluation protocols. AudioSet <cit.> is a collection of over 2 million 10-second audio clips excised from YouTube videos and labeled with the sounds that the clip contains from a set of 527 labels. We train our model with both the balanced training set (AS-20K) and full training set (AS-2M) and report mAP on the evaluation set. ESC-50 <cit.> consists of 2,000 5-second environmental audio recordings organized into 50 classes; we evaluate our model using the official 5-fold cross-validation protocol. Hyper-Parameters: We use the standard training pipeline in prior AT work <cit.>. For all experiments, we use a batch size of 48 and an Adam optimizer <cit.>. For the proposed model, we use an initial learning rate of 2e-4, 1e-4, and 5e-4, and train the model for 30, 5, and 30 epochs for AS-20K, AS-2M, and ESC-50, respectively. For baseline methods, we search the learning rate to ensure a fair comparison. §.§ Experiment Results We show the main results in Table <ref>. 
The key conclusions are: First, Whisper-AT is significantly stronger than Hubert X-Large <cit.> and wav2vec2-Large-Robust <cit.> on audio tagging, demonstrating that Whisper is not only the most robust ASR model but also the strongest audio tagging backbone. Second, comparing the four Whisper-AT models, the proposed model leads to the best performance with higher computational overhead. However, by projecting the Transformer dimension from 1280 to 512, _ strikes a balance between performance and efficiency, as its FLOPs are less than 1% of the Whisper ASR FLOPs yet it performs almost the same as _. In Table <ref>, we further study the relationship between the audio tagging performance and Transformer dimension d for . Even _ provides reasonably good audio tagging performance, while its computational cost is almost free (<0.1% FLOPs of the Whisper ASR FLOPs). Third, Whisper-AT is slightly worse than SOTA standalone audio tagging models but is much more efficient. The proposed _512 achieves 32.8 mAP, 41.5 mAP, and 91.7 accuracy on AS-20K, AS-2M, and ESC-50, respectively, and is 42 times faster and 11 times smaller than AST <cit.>. Note that we target the cases that the user is already running an ASR and want to get additional audio labels, so we only compare the additional cost for AT and do not include the cost of ASR in this comparison. Fourth, how does Whisper perform in the end-to-end finetuning setting, and how does it compare to SOTA audio tagging models? We add a new Transformer layer on top of the Whisper encoder and train the entire model end-to-end (new layer uses a 10-100 larger learning rate). For a fair comparison, we also test Whisper-Small which is of similar size to SOTA audio tagging models. We find Whisper-Small performs similarly with previous self-supervised pretrained models such as SSAST <cit.> and MAE-AST <cit.> after fine-tuning. Finally, we test the audio tagging performance of smaller Whisper models. As shown in Figure <ref>, smaller models have weaker audio tagging performance but the difference between Whisper-Small, Medium, and Large is minor. We also test the ASR noise-robustness of these models on speech contaminated by ESC50 background sounds; larger models are more robust. We again observe a positive correlation between ASR noise robustness and AT performance. In addition, Whisper-Base (74M parameters) is already more robust in ASR and stronger in audio tagging than Hubert-X-Large (964M parameters). § CONCLUSION The Whisper ASR model revives the supervised learning scheme by using a massive and diverse training corpus. In this paper, we report an intriguing property of Whisper that while being very robust, the audio representation of Whisper is actually noise-variant and encodes rich background sound information. Based on this finding, we propose a unified audio tagging and ASR model called Whisper-AT. With only less than 1% additional cost, Whisper-AT can recognize the background sound in addition to spoken text in a single forward pass. Acknowledgments: This research is supported by the MIT-IBM Watson AI Lab. IEEEtran
http://arxiv.org/abs/2307.01359v1 (submitted 2023-07-03)
Toward an accurate equation of state and B1-B2 phase boundary for magnesium oxide to TPa pressures and eV temperatures
Shuai Zhang, Reetam Paul, S. X. Hu, Miguel A. Morales
Primary category: cond-mat.mtrl-sci; cross-listed: astro-ph.EP, physics.chem-ph, physics.comp-ph, physics.geo-ph
[email protected] Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623, USA Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623, USA Lawrence Livermore National Laboratory, Livermore, California 94550, USA Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623, USA [email protected] Lawrence Livermore National Laboratory, Livermore, California 94550, USA Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA By applying auxiliary-field quantum Monte Carlo, we calculate the equation of state (EOS) and B1-B2 phase transition of magnesium oxide (MgO) up to 1 TPa. The results agree with available experimental data at low pressures and are used to benchmark the performance of various exchange-correlation functionals in density functional theory calculations. We determine PBEsol is an optimal choice for the exchange-correlation functional and perform extensive phonon and quantum molecular-dynamics calculations to obtain the thermal EOS. Our results provide a preliminary reference for the EOS and B1-B2 phase boundary of MgO from zero up to 10,500 K. Toward an accurate equation of state and B1-B2 phase boundary for magnesium oxide to TPa pressures and eV temperatures Miguel A. Morales August 1, 2023 ====================================================================================================================== § INTRODUCTION Materials structures and behaviors under very high pressure (∼100 GPa to 1 TPa) are an important topic in high-energy-density sciences and earth and planetary sciences. At such conditions, materials are strongly compressed, which can lead to transitions into phases with different structures (by lowering the thermodynamic energies) and chemistry (by modifying the bonding). The past two decades have seen advances in computing and compression technologies that have added important knowledge to this subject by unveiling new structures (e.g., MgSiO_3 post-perovskite <cit.>) or chemical stoichiometry (such as H_4O <cit.> and Xe-FeO_2 <cit.>) with notable changes to properties of chemical systems (particularly the insulator-metal transition <cit.> and high-temperature superconductivity <cit.>). However, accurate determination of phase transitions at such extreme conditions remains challenging. Experimentally, static compression experiments based on diamond-anvil cells (DACs) <cit.> are limited by sample sizes and diagnostics, while dynamic compression experiments are limited by the time scale and regime of the thermodynamic paths that can be achieved <cit.>. Theoretically, state-of-art investigations often rely on calculations based on Kohn–Sham density functional theory (DFT) <cit.>. Despite the tremendous success of DFT in predicting many structures and properties to moderately high pressures, errors associated with the single-particle approximation and exchange-correlation (xc) functionals render DFT predictions dubious where precise experimental constraints do not exist. Recent studies have shown quantum Monte Carlo (QMC) methods to be able to benchmark solid-state equation of state (EOS) and phase transitions <cit.> by directly solving the many-electron Schrödinger equation. Auxiliary-field quantum Monte Carlo (AFQMC) is one such QMC method that has shown great promise with flexibility and scalability for simulating both real and model many-body systems with high accuracy <cit.>. 
In this work we apply the phaseless AFQMC <cit.> method, in combination with optimized periodic Gaussian basis sets <cit.>, to investigate high-pressure EOS and phase transition in solid-state materials by using magnesium oxide (MgO) as an example. This provides theoretically accurate cold-curve results for MgO, which we then use to benchmark against various predictions by DFT calculations. We then use DFT-based lattice dynamics and molecular dynamics with one of the best xc functionals to calculate the thermal contributions to the EOS. Finally, we combined the thermal with the cold-curve results to determine the finite-temperature EOS and B1-B2 phase boundary for MgO to eV temperatures. MgO is a prototype rock-forming mineral in planets, a pressure calibrator in DAC experiments, and a window material in shock experiments. From ambient pressure up to about 500 GPa, MgO is stabilized in the sodium chloride (NaCl, or B1) structure. Beyond that, it transforms into the cesium chloride (CsCl, or B2) structure, which is characterized by smaller coordination and lower viscosity that may be associated with mantle convection and layering in super-Earths different from those in the Earth. A benchmark of the EOS and phase transition of MgO would be important for modeling the interior dynamics and evolution of super-Earths, testing the degree of validity of various theoretical EOS or models at extreme conditions, as well as elucidating materials physics at extreme conditions by answering such questions as: is thermodynamic equilibrium reached in the experiments, or to what degree are the phase transformations subject to kinetic effects, chemistry/composition changes, or a combination of them, leading to various observations in experiments? This problem of the B1-B2 transition in MgO has been studied for over 40 years but remains uncertain in experiments <cit.> and there is a discrepancy of ∼20% between state-of-the-art DFT calculations <cit.>. In addition to the debates over phase relations near the triple point near 500 GPa <cit.>, recent double-shock experiments also suggest an inconsistency exists between theoretical predictions and experiments of the melting curve at TPa pressures <cit.>. The main goal of this work is to provide an accurate EOS and phase diagram for MgO to TPa pressures and eV temperatures by jointly combining an accurate many-body electronic structure approach (AFQMC) and finite-temperature quantum molecular dynamics (QMD) based on DFT to fully address various details of physics (electronic correlation, anharmonic vibration, EOS models, finite sizes of the simulation cell, and Born effective charge) that can affect the thermal EOS results. This paper is organized as follows: Section <ref> outlines the methodologies used in this study, including those for zero-K and finite-temperature calculations; Sec. <ref> presents the cold curve, thermal EOS, and phase boundary results for MgO, and discusses the errors and their sources; finally, Sec. <ref> concludes the paper. § METHODS In the following, we present descriptions and settings of the computational approaches used in this study, including AFQMC, Hartree–Fock (HF), and DFT for the zero-K internal-energy calculations, and quantum molecular dynamics (QMD) and thermodynamic integration (TDI) for the thermodynamic free energies at nonzero temperatures. 
§.§ Zero-K static lattice calculations For the internal energy-volume E(V) relations at 0 K (often called the “cold curve”), we perform static lattice calculations for MgO in the B1 and B2 structures at a series of volumes by using a combination of AFQMC, HF, and DFT with various xc functionals. AFQMC is a zero-temperature quantum Monte Carlo approach. It is based on the stochastic propagation of wavefunctions in imaginary time using an ensemble of walkers in the space of non-orthogonal Slater determinants. It uses the Hubbard–Stratonovich transformation <cit.> to rewrite the expensive two-body part of the propagator into an integral over auxiliary fields coupled to one-body propagators, which are then integrated with Monte Carlo techniques. Like other QMC methods, AFQMC also faces an obstacle for fermionic systems, namely the “phase (or sign) problem” which arises because the fields led by Coulomb interaction are complex. Control of the sign problem can be achieved using constraints based on trial wavefunctions, like the fixed-node approximation in diffusion MC (DMC) <cit.> or, in the case of AFQMC, the constrained-path <cit.> and phaseless approximation <cit.>. When combined with appropriate trial wavefunctions, these methods have been shown to provide benchmark-quality results across a range of electronic structure problems including atoms, molecules, solids, and correlated model Hamiltonians, including cases with strong electronic correlations <cit.>, known to be challenging to alternative approaches like Kohn–Sham DFT. Recent advances in the development of accurate and flexible trial wavefunctions include the use of multi-determinant expansions <cit.> and generalized HF <cit.>. In this work, we use the phaseless AFQMC (ph-AFQMC) method <cit.> to calculate the ground state properties of bulk MgO. In our ph-AFQMC calculations, the trial wavefunction is constructed from a Slater determinant of HF orbitals (the HF solution for each MgO structure at every density), which were found to yield accurate energy results. We use QUANTUM ESPRESSO (QE) <cit.> for the calculation of the trial wavefunction and for the generation of the one- and two-electron integrals. The modified Cholesky decomposition <cit.> is used to avoid the 𝒪(M^4) cost of storing the two-electron repulsion integrals. All QE simulations were performed using optimized norm-conserving Vanderbilt (ONCV) pseudopotentials <cit.>, constructed with the Perdew–Burke–Ernzerhof (PBE) <cit.> xc functional. We used the recently developed optimized Gaussian basis sets <cit.> in all AFQMC calculations. The calculations were based on primitive unit cells and performed using Γ-centered 2×2×2, 3×3×3, and 4×4×4 k grids to extrapolate to the thermodynamic limit at each density. Results from multiple basis sets were used, in combination with corrections based on periodic second-order Møller–Plesset perturbation theory (MP2) calculations, to obtain results extrapolated to the complete basis set (CBS) limit (see Appendix <ref> for more details). This was shown to be a successful approach to removing basis and finite size errors in previous studies, to which we refer readers for additional details <cit.>. All AFQMC calculations were performed using the open-source QMCPACK software package <cit.>. We used ∼1000 walkers and a time step of 0.005 Ha^-1, which we found sufficient to control any potential population and finite time-step biases, respectively. 
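To make the role of the modified Cholesky decomposition concrete: it replaces the full two-electron-integral matrix by a truncated factorization V ≈ Σ_γ L^γ (L^γ)^T, keeping only as many vectors as the accuracy threshold requires. The snippet below is a minimal, self-contained sketch of a pivoted (modified) Cholesky factorization applied to a random positive-semidefinite matrix; the matrix, its size, and the threshold are illustrative placeholders, not the actual MgO integrals or the QMCPACK implementation.

```python
import numpy as np

def modified_cholesky(M, tol=1e-6, max_vectors=None):
    """Pivoted (modified) Cholesky factorization M ~ L @ L.T of a symmetric
    positive-semidefinite matrix M, truncated once the largest remaining
    diagonal element drops below `tol`."""
    n = M.shape[0]
    if max_vectors is None:
        max_vectors = n
    residual = M.astype(float).copy()
    diag = np.diag(residual).copy()
    vectors = []
    for _ in range(max_vectors):
        p = np.argmax(diag)                 # pivot: largest remaining diagonal
        if diag[p] < tol:
            break
        L = residual[:, p] / np.sqrt(diag[p])
        vectors.append(L)
        residual -= np.outer(L, L)          # deflate the residual
        diag = np.diag(residual).copy()
    return np.array(vectors).T              # shape (n, n_chol)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 8))
    M = A @ A.T                              # rank-8 PSD "ERI-like" matrix
    L = modified_cholesky(M, tol=1e-8)
    print("Cholesky vectors kept:", L.shape[1])
    print("max reconstruction error:", np.abs(M - L @ L.T).max())
```

In a real calculation the number of retained vectors, and hence the memory footprint, grows only linearly with the basis size for a fixed threshold, which is the point of the decomposition.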
Kohn–Sham DFT <cit.> follows the Hohenberg–Kohn theorem <cit.> and simplifies the many-body problem into a single-particle mean-field equation that can be solved self-consistently via iteration over the electron density. The real complicated electron-electron interactions are simplified into the xc functional term. Since the accurate QMC solution for the uniform electron gas <cit.>, there have been developments of many forms of xc functionals for various applications, which form a “Jacob's ladder” with different rungs [local density approximation (LDA), generalized gradient approximation (GGA), meta-GGA, etc.] that lead to chemical accuracy at the expense of increasing computational cost. Our DFT calculations of the cold curve are performed with Vienna Ab initio Simulation Package (VASP) <cit.>. In our VASP simulations, we use a two-atom unit cell, Γ-centered 16×16×16 Monkhorst–Pack k mesh, a plane-wave basis with cutoff of 1200 eV, and convergence criteria of 10^-7 eV/cell for the self-consistent iteration. The simulations use the projector augmented wave (PAW) <cit.> method with pseudopotentials labeled with “sv_GW” and “h”, 1.75 and 1.1-Bohr core radii, and treating the outermost 10 and 6 electrons as valence for Mg and O, respectively. We consider five different xc functionals: LDA <cit.>, PBE <cit.>, PBEsol <cit.>, strongly constrained and appropriately normed meta-GGA (SCAN) <cit.>, and the Heyd–Scuseria–Ernzerhof-type HF/DFT hybrid functional (HSE06) <cit.>. The DFT calculations also produce pressures that are not directly available from our AFQMC calculations because of the difficulties in QMC to calculate forces. For consistency in data comparison between different approaches and determination of the B1-B2 transition, we fitted the E(V) data to EOS models that are widely used in high-pressure studies. It has long been known that high-order elastic moduli may be required to parameterize materials EOS under extreme (e.g., near 2-fold) compression <cit.>. Therefore, we have considered multiple EOS models and cross-checked them with a numerical (spline fitting) approach to ensure the accuracy of the EOS and phase-transition results. We have considered two different analytical EOS models: one is the Vinet model <cit.>, which follows E(V)=E(V_0)+∫_V^V_0P(V)dV, with P(V)=3B_01-x/x^2e^1.5(B_0'-1)(1-x), where x=(V/V_0)^1/3 and V_0 and B_0' are, respectively, the volume and first-order pressure derivative of the bulk modulus at zero pressure; the other is the Birch–Murnaghan model <cit.> to the fourth order, which follows E(V) = E_0 + 9B_0V_0(f^2/2 + a_1f^3/3 + a_2 f^4/4), where f = [(V_0/V)^2/3-1]/2 is the Eulerian finite strain, a_1 = 1.5(B_0' - 4), and a_2 is another parameter. We have also tested the third-order Birch–Murnaghan model, which does not include the a_2 term in Eq. <ref>, for comparison with the other models in selected cases (see Appendix <ref>). §.§ Finite-temperature thermodynamic calculations Thermodynamic calculations at nonzero temperatures are performed in two different ways: one is from lattice dynamics by using the quasiharmonic approximation (QHA), and the other is based on QMD. Within QHA, lattice vibrations are considered to be dependent on volume but independent of temperature. In practice, one can use the small-displacement approach or density functional perturbation theory (DFPT) to calculate phonons at 0 K and then compute the thermodynamic energies analytically from quantum statistics. 
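As a concrete illustration of how the EOS models introduced above are used, the following sketch fits synthetic E(V) points to the fourth-order Birch–Murnaghan form with scipy; the data, noise level, and initial guesses are placeholders rather than our computed MgO energies.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan_4(V, E0, V0, B0, B0p, a2):
    """Fourth-order Birch-Murnaghan energy-volume relation:
    E = E0 + 9*B0*V0*(f^2/2 + a1*f^3/3 + a2*f^4/4),
    with f = [(V0/V)^(2/3) - 1]/2 and a1 = 1.5*(B0p - 4)."""
    f = 0.5 * ((V0 / V) ** (2.0 / 3.0) - 1.0)
    a1 = 1.5 * (B0p - 4.0)
    return E0 + 9.0 * B0 * V0 * (f**2 / 2.0 + a1 * f**3 / 3.0 + a2 * f**4 / 4.0)

if __name__ == "__main__":
    # Synthetic data (eV and Angstrom^3 per formula unit); placeholder values only.
    V = np.linspace(7.0, 12.0, 12)
    true_params = (-10.0, 9.2, 1.0, 4.2, 1.0)     # E0, V0, B0 (eV/A^3), B0', a2
    rng = np.random.default_rng(1)
    E = birch_murnaghan_4(V, *true_params) + rng.normal(0.0, 1e-3, V.size)

    p0 = (-10.0, 9.0, 1.0, 4.0, 0.0)
    popt, pcov = curve_fit(birch_murnaghan_4, V, E, p0=p0)
    perr = np.sqrt(np.diag(pcov))
    GPA = 160.21766208                             # eV/A^3 -> GPa
    print(f"V0 = {popt[1]:.3f} A^3, B0 = {popt[2]*GPA:.1f} +/- {perr[2]*GPA:.1f} GPa")
```

The Vinet form can be fitted in exactly the same way by swapping the model function, which is how the cross-checks between EOS models described above are carried out in practice.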
Despite its wide usage and success in giving improved thermodynamic properties over the fully harmonic approximation for materials at relatively low temperatures, the applicability of QHA is questionable at high temperatures and for systems with light elements at low temperatures. In comparison, QMD simulations significantly improve the description of lattice dynamics by naturally including all anharmonic vibrations. By employing a TDI approach, the free energies can also be accurately calculated, which makes it possible to chart the phase-transition boundaries at finite temperatures. We use the PHONOPY program <cit.> and VASP to calculate the phonons of MgO at 0 K with DFPT and under QHA. We have tested the effects of including the Born effective charge (which is necessary to correctly account for the splitting between longitudinal and transverse optical modes) and different xc functionals on the phonon band structures and vibrational energies (see Appendices <ref> and <ref>). The calculation is performed at a series of volumes V. This allows estimation of the ion thermal contributions F_i-th(V, T), at any temperature T, to be added to the free energies F(V,T) via F_QHA(V,T) = E_QHA(V,T)-T S_QHA(V,T) = k_B T ∑_q, sln[2 sinh(ħω_q, s / 2 k_B T)], where E_QHA(V,T) = ∑_q, s( ñ+1/2) ħω_q, s is the vibrational internal energy, ñ =1/(e^ħω_q, s / k_B T-1) is the effective number of the phonon mode with frequency q and index s, and S_QHA (V,T)= k_B∑_q, s[ ( ñ+1) ln( ñ+1) - ñlnñ] is the vibrational entropy. Each calculation employed a 54-atom supercell and was performed using a Γ-centered 4×4×4 k mesh (for both B1 and B2 phases). In QMD calculations, we use the Mermin–Kohn–Sham DFT approach <cit.> with the PBEsol xc functional. Ion temperatures are controlled by using the Nosé–Hoover thermostat <cit.>, while electron temperatures are defined by the Fermi–Dirac distribution via a smearing approach. NVT ensembles are generated that consist of 4000 to 10,000 MD steps with time step of 0.5 fs. Mg_sv_GW and O_h potentials are used, the same as the DFT calculations at 0 K. The energy cutoff is 1000 eV, which defines the size of the plane-wave basis set. It requires large enough cells in combination with proper/fine k meshes to ensure the accuracy of the DFT calculations (see Appendix <ref>). In our simulations, we use cubic cells with 64 and 54 atoms that are sampled by a special k point (1/4,1/4,1/4) and Γ-centered 2×2×2 k mesh for B1 and B2 phases, respectively, in order to obtain results reasonably close to the converged setting while computational cost is relatively low. Structure snapshots have been uniformly sampled from each QMD trajectory and recalculated with denser k meshes of 2×2×2 (for B1) and 3×3×3 (for B2) to improve the accuracy of the thermal EOS and their volume dependence and reduce the error in the calculation of the phase transition. The QMD calculations are performed at temperatures between 500 and 12,000 K, in steps of 500 to 1500 K, with more calculations at low to intermediate temperatures to improve the robustness of the TDI for anharmonic free-energy calculations. Large numbers of 360 and 320 electronic bands are considered, respectively, for B1 and B2 simulations to ensure the highest-energy states remain unoccupied. The EOS obtained from the QMD or QHA calculations produces E(V,T) and P(V,T) data that allow the calculation of the Hugoniot. The analysis of the QMD EOS data follows the procedure that was introduced in detail in our recent paper on liquid SiO_2 <cit.>. 
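The quasiharmonic expressions above reduce to simple sums over the phonon frequencies once ω_{q,s} are available (e.g., from PHONOPY). Below is a minimal sketch of evaluating F_QHA, E_QHA, and S_QHA for a toy frequency set; the frequencies are placeholders, not our computed MgO spectrum.

```python
import numpy as np

KB = 8.617333262e-5      # Boltzmann constant, eV/K
HBAR = 6.582119569e-16   # reduced Planck constant, eV*s

def qha_thermodynamics(omega, T):
    """Quasiharmonic free energy, internal energy, and entropy (per cell)
    from phonon angular frequencies `omega` (rad/s) at temperature T (K)."""
    x = HBAR * omega / (2.0 * KB * T)        # hbar*omega / (2 kB T)
    F = KB * T * np.sum(np.log(2.0 * np.sinh(x)))
    n = 1.0 / (np.exp(2.0 * x) - 1.0)        # Bose occupation number
    E = np.sum(HBAR * omega * (n + 0.5))     # includes zero-point energy
    S = (E - F) / T
    return F, E, S

if __name__ == "__main__":
    # Toy spectrum: six modes between 5 and 20 THz (placeholder values).
    nu_THz = np.array([5.0, 8.0, 10.0, 12.0, 16.0, 20.0])
    omega = 2.0 * np.pi * nu_THz * 1e12
    for T in (300.0, 1000.0, 3000.0):
        F, E, S = qha_thermodynamics(omega, T)
        print(f"T = {T:6.0f} K   F = {F:8.4f} eV   E = {E:8.4f} eV   S = {S*1e3:7.3f} meV/K")
```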
The Hugoniot is calculated by solving the Rankine–Hugoniot equation using the numerically interpolated EOS. The different theoretical predictions are then compared to the experimentally measured Hugoniot to benchmark the performance of the computational approaches and the xc functionals in the corresponding thermodynamic regime. With the assistance of QHA and QMD, the entire ion thermal contribution to the free energy can be calculated by F_i-th(V, T)=F_QHA(V, T)+F_anharm(V, T). In Eq. <ref>, F_anharm(V, T)=-T ∫_T_ref^TE_anharm(V, 𝒯) /𝒯^2d𝒯 denotes the anharmonic term as calculated by TDI, where E_anharm = E_QMD-E_cold+QHA, E_QMD is the internal energy from QMD simulations, and T_ref is a reference temperature. We note that QMD misses the quantum zero-point motion of ions while QHA does not. This leads to increased discrepancy between QMD and QHA internal energies as temperature drops near zero, associated with decreasing heat capacity C_V of the real system (from ∼3k_B/atom to zero, as captured by QHA, whereas QMD gives C_V=3k_B) since fewer lattice vibration modes can be excited. In order to eliminate the resultant artificial exaggeration of the integrand, we have replaced E_cold+QHA with E_cold+3k_BT in our calculations of Eq. <ref> (see Appendix <ref>). This effectively treats the ions classically in the evaluation of F_anharm at temperatures higher than T_ref, which we believe is a reasonable approximation for phonon interactions (the anharmonic term). Our calculated results for E_anharm as a function of temperature are then fitted to high-order polynomials (to the sixth-order for B1 and eighth-order for B2) to compute the numerical integration in Eq. <ref>. The functionality of TDI also requires choosing the proper reference point T_ref. In this work, we consider T_ref to be low by following the idea that QHA is valid and other anharmonic contributions (beyond the volume-dependence vibration changes as have been included in QHA) are zero for MgO at low temperatures. For consistency among different isochores, we make the choice of T_ref such that the heat capacity is 10% of 3k_B. The corresponding T_ref is 100 to 200 K. We have also tested other choices of T_ref and examined their effects on F_anharm and the B1-B2 phase boundary. The results are summarized in Appendix <ref>. We note that when analyzing the QMD trajectory to calculate the EOS, we disregarded the beginning part (20%) of each MD trajectory to ensure the reported EOS represents that under thermodynamic equilibrium. Ion kinetic contributions to the EOS are manually included by following an ideal gas formula (i.e., internal energy E_ion kin.=3Nk_BT/2 and pressure P_ion kin.=Nk_BT/V, where N is the total number of atoms in the simulation cell and k_B is the Boltzmann constant). Although MgO is an insulating solid with a wide electronic band gap in all conditions considered in this study, we have still carefully considered the effect of electron thermal effects in the free-energy calculations (see Fig. <ref>(b)). By following the idea of Vinet <cit.>, we consider the EOS at each temperature and fit the Helmholtz free energy-volume data F(V) to various EOS models, including the Vinet and fourth-order Birch–Murnaghan model as introduced in the previous Sec. <ref>, as well as a numerical approach using cubic splines. The B1-B2 transition pressures and volumes of the two phases upon transition can then be determined by the common tangents of F(V) of the two phases (see Appendix <ref>). 
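Numerically, the thermodynamic integration above amounts to fitting E_anharm(T) on each isochore and evaluating -T ∫_{T_ref}^{T} E_anharm(T')/T'^2 dT'. A minimal sketch with synthetic data follows; the isochore, reference temperature, and polynomial order are placeholders, not our QMD results.

```python
import numpy as np
from numpy.polynomial import Polynomial
from scipy.integrate import quad

def anharmonic_free_energy(T_grid, E_anh, T_ref, T_eval, order=6):
    """F_anharm(T) = -T * integral_{T_ref}^{T} E_anharm(T')/T'^2 dT',
    with E_anharm(T) represented by a polynomial fit of the sampled data."""
    e_fit = Polynomial.fit(T_grid, E_anh, deg=order)   # fitted on a scaled domain
    out = []
    for T in np.atleast_1d(T_eval):
        integral, _ = quad(lambda t: e_fit(t) / t**2, T_ref, T)
        out.append(-T * integral)
    return np.array(out)

if __name__ == "__main__":
    # Synthetic isochore: E_anharm grows smoothly with T (placeholders, eV/atom).
    T_grid = np.arange(500.0, 12001.0, 500.0)
    E_anh = 1.0e-9 * T_grid**2 + 2.0e-6 * T_grid
    T_eval = (3000.0, 6000.0, 9000.0)
    F_anh = anharmonic_free_energy(T_grid, E_anh, T_ref=150.0, T_eval=T_eval)
    for T, F in zip(T_eval, F_anh):
        print(f"T = {T:6.0f} K   F_anharm = {F:9.5f} eV/atom")
```

The sensitivity of F_anharm to the choice of T_ref and to the fitting function, discussed in the appendices, can be probed by simply varying those two inputs in such a script.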
§ RESULTS §.§ Cold-curve equation of state The cold-curve EOS of B1 and B2 MgO based on static-lattice HF and AFQMC calculations are listed in Table <ref>. The data for each phase at every volume are based on calculations using basis sets and simulation cells of finite size, which have then been extrapolated to the thermodynamic and CBS limits. The results show that, for both the B1 and B2 phases, the energy minimum lies at 17.0 to 18.7 Å^3 when the calculation takes into account only exchange interactions of the electrons (E_HF), the correlation energy is about -0.60 Ha (1 Ha=27.211386 eV) at volumes above 10.5 Å^3 and decreases to -0.63 Ha as the cell volume shrinks to ∼7 Å^3, and the standard errors of the AFQMC data are small (∼0.1 mHa). The energy-volume curves E(V) are obtained by fitting the AFQMC static-lattice data to EOS models, which yields the equilibrium volume V_0 and bulk modulus B_0 of each phase. The results are summarized in Table <ref> and compared to those from HF and DFT simulations in order to investigate the importance of the xc functionals. We then calculated the B1-B2 transition pressure P_tr and the volumes of the two phases upon transition V_tr from the common tangent of the E(V) curves. This is equivalent to another common approach for determining the transition pressure using the enthalpy-pressure relation (see Appendix <ref>). Our results show that DFT predictions vary by up to ∼7% in V_0, ∼15% in B_0, ∼7% in P_tr, and ∼10% in volume change upon the B1-B2 transition, due to the use of different xc functionals. To directly compare theoretical EOS with DAC experiments, corrections to the static-lattice results are needed to account for the differences due to lattice vibration and thermal contributions. We have added ion thermal contributions to the cold-curve EOS via lattice-vibration calculations under QHA, which is generally considered a good approximation for MgO at room temperature. The cold-curve EOS is re-evaluated by fitting the corrected 300-K data for each phase to the EOS models. The equilibrium volume and the pressure-volume results from theoretical calculations (AFQMC, DFT, and HF) are shown in Fig. <ref> and compared to experimental results. The figure shows remarkable agreement between AFQMC and experiment for both the equilibrium volume and the compression curve. In contrast, the HF and DFT results scatter around the experimental values and vary significantly. Specifically, the DFT results exhibit a strong dependence on the choice of the xc functional, with HSE06, SCAN, and PBEsol performing better than PBE and LDA when compared with the experimental data. Figure <ref> compares the AFQMC cold curve of MgO at higher densities (near the B1-B2 transition) with those calculated using HF and DFT with various xc functionals. It shows that HF, LDA, and PBE exhibit large deviations (approximately ±0.5 to 1 eV/MgO in energy and 0 to 40 GPa in pressure, depending on the phase and the density) for the cold curves, while PBEsol, SCAN, and HSE06 show significantly improved agreement with the AFQMC results. These findings are overall consistent with expectations based on Jacob's ladder (precision relation: hybrid>meta-GGA>GGA/LDA>HF). Figure <ref> summarizes the B1-B2 phase-transition pressures (red) and volume changes upon the phase transition (black) of MgO calculated using HF and DFT with various xc functionals in comparison to AFQMC. 
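A minimal sketch of the transition-pressure construction just described, written in the enthalpy-crossover form that is equivalent to the common tangent, is shown below on toy parabolic E(V) curves; the curves, units, and bracketing interval are placeholders, not our AFQMC or DFT data.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def enthalpy_of_pressure(V, E):
    """Build H(P) for one phase from sampled E(V): P = -dE/dV from a cubic
    spline, H = E + P*V, returned as an interpolating function of P."""
    spline = CubicSpline(V, E)
    P = -spline(V, 1)                        # pressure at the sampled volumes
    H = E + P * V
    order = np.argsort(P)
    return CubicSpline(P[order], H[order])

if __name__ == "__main__":
    # Toy E(V) curves for two phases (eV and A^3 per formula unit; placeholders).
    V = np.linspace(6.5, 11.0, 40)
    E_b1 = 0.55 * (V - 9.2) ** 2 - 10.0      # "B1-like" phase
    E_b2 = 0.60 * (V - 8.7) ** 2 - 9.4       # "B2-like" phase
    H1 = enthalpy_of_pressure(V, E_b1)
    H2 = enthalpy_of_pressure(V, E_b2)
    # Transition pressure from the enthalpy crossover (== common tangent of E(V)).
    P_tr = brentq(lambda p: H1(p) - H2(p), 0.1, 2.5)   # eV/A^3
    GPA = 160.21766208                        # eV/A^3 -> GPa
    print(f"P_tr ~ {P_tr * GPA:.0f} GPa")
```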
Due to the reconciliation of the EOS errors for the B1 and B2 phases in the calculation of the B1-B2 transition, the proximity of HF and DFT to AFQMC results no longer follows expectations of Jacob's ladder. The AFQMC predicted transition pressure is lower and volumes upon transition are larger than all other methods. PBEsol prediction of the transition pressure is closer to AFQMC than HF or other DFT xc functionals, with a difference of 20 GPa. §.§ High-temperature EOS High-temperature EOS of solid-state MgO is obtained from QMD and QHA calculations. The QHA results are based on a combination of phonon and cold-curve EOS, where the cold curves are obtained by static DFT calculations using four different xc functionals (LDA, PBE, PBEsol, and SCAN), while phonon calculations are performed by using the DFPT approach, PBEsol xc functional, Mg_sv_GW and O_h pseudopotentials, and including the Born effective charge. Tests show negligible differences in vibrational energies if the phonon calculations are done by using other xc functionals or ignoring the splitting between longitudinal and transverse optical modes (see Appendices <ref> and <ref>). We first used the EOS to calculate the principal Hugoniot and compared them with experiments. Figure <ref> shows comparisons of the Hugoniots in pressure–density and temperature–pressure spaces. Similar to previous QMD calculations that used the Armiento–Mattsson (AM05) xc functional <cit.>, our present QMD results based on the PBEsol functional show excellent agreement with experimental Hugoniots in stability regimes of both B1 and B2 in the pressure-density relation, as well as for B1 in the temperature profile. In comparison, QHA results show consistency with experiments at low pressures but give increasingly higher density at high pressures along the Hugoniot, more so for the B2 than the B1 phase. The breakdown of QHA as shown in the pressure-density results can be attributed to the anharmonic vibration effect that is naturally included in QMD but missing in QHA and becomes more significant at higher temperatures. By comparing the thermal EOS along an isotherm, we found similar energies but higher pressures given by QMD than by QHA; according to the Rankine–Hugoniot equation, this must be reconciled by less shrinking in volume, which explains the Hugoniot density relations between QMD and QHA as shown in Fig. <ref>(a). In the temperature-pressure space, QMD and QHA results of the Hugoniot are less distinct from each other than in the pressure-density space. QHA results based on LDA xc functional clearly lie below the range of the experimental data for the B1 phase, PBE significantly improves the agreement with experiments, while PBEsol and SCAN functionals and AFQMC data fall between LDA and PBE and near the lower bound of the experimental data. QMD predictions of the temperature are higher and improve over that by QHA using PBEsol. In addition, QMD predicts smaller differences between the Hugoniot of the B1 and B2 phases than QHA; AFQMC predictions of the B2 Hugoniot show good agreement with SCAN and LDA under QHA, following the trend of experimental Hugoniot after the turnover, while the QHA-PBEsol predictions are slightly higher. Our QMD results of the Hugoniot are overall consistent with previous calculations and align with experiments. More discussions will be given in the following and in Appendix <ref> regarding the B1–B2 phase boundary and comparison between our prediction and the experiments. 
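To illustrate the Hugoniot construction used above, the sketch below solves the Rankine–Hugoniot condition E - E_0 + (P + P_0)(V - V_0)/2 = 0 for the temperature at each compressed volume on a toy Mie–Grüneisen-like EOS; all model constants and the initial state are placeholders, and the resulting numbers are not MgO predictions.

```python
import numpy as np
from scipy.optimize import brentq

# Toy Mie-Grueneisen-like EOS per formula unit (all constants are placeholders).
KB = 8.617333262e-5                          # eV/K
A, V0, CV, GAMMA = 0.55, 9.2, 6.0 * KB, 1.5  # eV/A^6, A^3, eV/K, dimensionless

def energy(V, T):                            # internal energy, eV
    return A * (V - V0) ** 2 + CV * T

def pressure(V, T):                          # pressure, eV/A^3
    return -2.0 * A * (V - V0) + GAMMA * CV * T / V

def hugoniot_temperature(V, V_init, T_init):
    """Solve the Rankine-Hugoniot condition
    E(V,T) - E0 + 0.5*(P(V,T) + P0)*(V - V_init) = 0 for T."""
    E0, P0 = energy(V_init, T_init), pressure(V_init, T_init)
    f = lambda T: energy(V, T) - E0 + 0.5 * (pressure(V, T) + P0) * (V - V_init)
    return brentq(f, T_init, 1.0e6)

if __name__ == "__main__":
    GPA = 160.21766208
    V_init, T_init = 9.2, 300.0
    for V in (8.0, 7.5, 7.0, 6.5):
        T = hugoniot_temperature(V, V_init, T_init)
        print(f"V = {V:4.1f} A^3   P = {pressure(V, T)*GPA:7.1f} GPa   T = {T:8.0f} K")
```

In the actual workflow the analytic toy model is replaced by the numerically interpolated QMD EOS table, but the root-finding step is the same.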
The agreement with experiments in both the thermal (along the Hugoniot) and the cold-curve EOS (as shown in the previous Sec. <ref>) validates PBEsol as an optimal choice for the xc functional for calculations of MgO at both the ground state and finite temperatures. In the following, we have added the QHA and QMD-derived (using the TDI approach) thermal free energies based on DFT-PBEsol calculations to various cold curves (by AFQMC and DFT-PBEsol/SCAN) to estimate the total free energies of MgO in both B1 and B2 phases [We encounter imaginary-mode problems when using SCAN for phonon calculations at some of the densities. Therefore, we only use PBEsol for the QHA and finite-temperature QMD calculations.]. Based on these results, we charted the B1-B2 transition and calculate the volumes of the two phases upon transition. The results provide a preliminary reference for the B1-B2 phase boundary and its uncertainty based on state-of-art theoretical computations. Figure <ref> shows the volume of MgO collapses by ∼4.75(±0.25)% at 0 K [from ∼9.2 Å^3/MgO for B1 to ∼8.7 Å^3/MgO for B2 (error associated with using different methods AFQMC, PBEsol, and SCAN: ±0.1 Å^3/MgO)] and 3.7(±0.2)% at 10,500 K [from ∼9.8 Å^3/MgO for B1 to ∼9.4 Å^3/MgO for B2 (error: ±0.2 Å^3/MgO)] for the B1→B2 transition, and the transition pressure decreases from ∼515(±25) GPa to ∼490(±25) GPa as temperature increases from 0 to 10,500 K. We found the V_tr-T curves are similar between the three sets of predictions based on AFQMC and DFT-PBEsol/SCAN cold curves, with the AFQMC predicted volumes and volume collapses larger (and transition pressures lower) than the DFT predictions. The dT/dP Clapeyron slope of the B1-B2 phase boundary predicted by the AFQMC data set is similar to DFT-SCAN, both being steeper than that by DFT-PBEsol (see Table <ref> that summarizes values of the Clapeyron slope). Figure <ref> also shows QHA predicts a much less steep boundary for the B1-B2 transition than QMD, reflecting the importance of anharmonic vibrational effects, similar to the report by previous studies <cit.>. Our results clearly show the amounts of changes in volume and volume difference between the two phases of MgO upon transition, as well as the important role of electronic interactions (many-body in nature versus single-particle approximation under different xc functionals) in affecting the results. The much less negative value in Clapeyron slope (dP_tr/dT) and slightly larger value in volume collapse of the B1–B2 transition predicted by QMD may cause less significant topography of discontinuity and lateral variations in deep-mantle mineralogy of super-Earths than previously expected based on the QHA results, changing expectations on the style of convection in these planets (see discussions in, e.g.,  <cit.>). Moreover, by predicting a steeper B1–B2 boundary than latest theoretical studies <cit.>, our AFQMC (and PBEsol) results show excellent consistency with both experiments by McWilliams et al <cit.> and Bolis et al <cit.> (see Appendix <ref>). We note that Bolis et al <cit.> interpreted the turnover in their experiments as the melting start of shocked MgO, largely based on comparisons with theoretical studies by then that underestimated P_tr along the Hugoniot. Our new results suggest that the turnovers in the experiments are associated with the B1–B2 transition. It is beyond the scope of this study, however, to decipher the nature of the subtle differences between experiments by Bolis et al. <cit.> and McWilliams et al. 
<cit.>, as it requires accurate knowledges of the triple point, thermodynamic free energies of the liquid phase, as well as considerations of the kinetics of the transition to fully understand the observations. We have performed additional tests and found the error in transition pressure (associated with the choices of different T_ref and fitting methods in TDI) increases to ∼50 GPa at T≈10^4 K (see Appendix <ref>, corresponding changes in the Clapeyron slope are tabulated in Table <ref>), while the errors due to other sources (EOS models, data error bars, and the data grid) are relatively small (e.g., the statistical error of the AFQMC and QMD energies only leads to a difference in P_tr of 1.5 GPa at 6000 K). § CONCLUSIONS This work exemplifies the first application of the AFQMC approach to benchmark the cold curve and phase transition in solid-state materials to very high pressures. Our AFQMC results reproduce the experimental cold curve (equilibrium volume and compressibility at room temperature) and provide a preliminary reference for the equations of state of MgO at up to 1 TPa. In comparison, DFT predictions vary by up to 7% to 15% for the equilibrium properties (V_0 and B_0) and B1-B2 transition (P_tr and volume collapse upon the transition), depending on the xc functionals, and the largest differences are observed between the cold curves by PBE and LDA. The HSE06, SCAN, PBEsol functionals perform better than PBE, LDA, and HF in reproducing the E(V) cold curves by AFQMC. The cold-curve differences for B1 offset those for B2, leading to the sensitivity of the predicted transition pressure and volume change to the choice of the xc functional. Our Hugoniot results based on QMD calculations of the thermal EOS using PBEsol show excellent agreement with experiments for B1 and B2 in the pressure-density relation, as well as for B1 in the temperature-pressure profile. In comparison, QHA results of the pressure-density Hugoniot show consistency with experiments at low pressures but increasing discrepancy at high pressures, because larger anharmonic effects are expected at higher temperatures. The good performance of PBEsol in reproducing both the thermal (along the Hugoniot) and the cold-curve EOS of MgO has motivated us to further calculate the anharmonic free energies and add them to the cold curves by AFQMC and DFT-PBEsol or SCAN to calculate the total free energies and evaluate the B1-B2 transition at various temperatures. Our results show temperature lowers the transition pressure and expands the volumes upon the B1-B2 transition. Anharmonic vibration increases the transition pressure P_tr and hinders the transition volumes V_tr from expansion, relative to QHA. AFQMC predicts a steeper dT/dP phase boundary and a larger volume collapse upon the B1→B2 transition than DFT-PBEsol, similar to the effect of anharmonicity with respect to QHA. In addition to providing a preliminary reference for the B1-B2 phase boundary and its uncertainty based on state-of-art theoretical computations, our results will be useful for building an accurate multiphase EOS table for MgO for planetary sciences and high energy density sciences applications, as well as for elucidating the mechanism of phase transformations (e.g., kinetics effects) in different experimental settings (e.g., compression rates). More work is desired to clarify the triple point and the melting curve at high temperatures and pressures to multi-TPa pressures. 
Looking ahead, finite-temperature AFQMC <cit.>, by better accounting of the electron thermal effects, and back-propagation for force and stress estimators in AFQMC <cit.> can offer additional yet more-accurate options to benchmark the EOS and phase transitions of solid-state materials at high temperatures and pressures. § ACKNOWLEDGEMENTS This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856, the University of Rochester, and the New York State Energy Research and Development Authority. The Flatiron Institute is a division of the Simons Foundation. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract number DE-AC52-07NA27344. Part of the funding support was from the U.S. DOE, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program and Center for Predictive Simulation of Functional Materials (CPSFM). Computer time was provided by the Oak Ridge Leadership Computing Facility, Livermore Computing Facilities, and UR/LLE HPC. S.Z. thanks R. S. McWilliams for sharing experimental data and J. Wu, R. Jeanloz, F. Soubiran, B. Militzer, T. Duffy, and K. Driver for discussions. This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof. § FINITE-SIZE AND BASIS SETS CORRECTIONS TO AFQMC ENERGIES Our AFQMC calculations were performed for both phases of MgO at all volumes with various cell sizes and optimized basis sets. These include: (i) 2×2×2 k points (8 MgO units) with pVDZ, pVTZ, pVQZ, and pV5Z; (ii) 3×3×3 k points (27 MgO units) with pVTZ and pVQZ; and (iii) 4×4×4 k points (64 MgO units) with pVTZ. We have then followed three steps to extrapolate the AFQMC results to the thermodynamic and the complete basis set (CBS) limits: 1. Use all the pVTZ results to calculate finite size corrections for the 3×3×3 calculations; 2. Use all the 3×3×3 calculations to calculate the basis set corrections, combining AFQMC calculations with MP2 calculations and the “scaled” correction described in Ref. Morales2020; 3. Use the 2×2×2 calculations to check reliability of the basis set corrections in step 2 and to ensure the basis set corrections were robust. Our extrapolation procedure is demonstrated in Figure <ref>. The remarkable consistency between the pVTZ and pV5Z corrected values (to approximately 1–2 mHa/MgO from calculations with only 2×2×2 k points) suggests our corrections are reliable and robust. 
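One plausible way to write the three-step correction as bookkeeping is a composite scheme: the production value (here the 3×3×3, pVTZ AFQMC energy) is shifted by a finite-size correction estimated at the pVTZ level and a basis-set correction estimated at the 3×3×3 level, with an MP2-based tail toward the CBS limit. The sketch below only illustrates this bookkeeping under those assumptions; all numbers are invented placeholders, not our computed data.

```python
def composite_energy(e_afqmc_333_tz, e_afqmc_444_tz, e_afqmc_333_qz,
                     e_mp2_333_qz, e_mp2_333_cbs):
    """Composite estimate of the thermodynamic-limit, complete-basis-set energy:
    production value + finite-size correction (pVTZ level)
                     + basis-set correction (3x3x3 level, MP2-assisted CBS tail)."""
    finite_size = e_afqmc_444_tz - e_afqmc_333_tz            # k-mesh correction
    basis = (e_afqmc_333_qz - e_afqmc_333_tz) + (e_mp2_333_cbs - e_mp2_333_qz)
    return e_afqmc_333_tz + finite_size + basis

if __name__ == "__main__":
    # Placeholder energies in Ha per MgO (not actual data).
    e = composite_energy(
        e_afqmc_333_tz=-75.210,
        e_afqmc_444_tz=-75.214,
        e_afqmc_333_qz=-75.226,
        e_mp2_333_qz=-75.150,
        e_mp2_333_cbs=-75.158,
    )
    print(f"corrected energy ~ {e:.3f} Ha/MgO")
```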
§ EOS FIT AND TRANSITION PRESSURE DETERMINATION Figure <ref> compares the two different ways of calculating the transition pressure: using internal energies E(V) and their common tangent (left) or using enthalpies H(P) and their crossover point [This example is for T=0 K. At finite temperatures, we consider isotherms and use Helmholtz free energies F(V) for the common-tangent approach or Gibbs free energies G(P) for the crossover-point approach.]. The two approaches are thermodynamically equivalent, as confirmed by the nearly identical transition pressures obtained (the 1-GPa difference is due to the numerical fitting of the data). We adopt the common-tangent approach in this study because the internal energy (or Helmholtz free energy for T≠ 0 K) is readily calculated, whereas the pressure is not, except for the 0-K DFT cases. The E(V) data are fitted to EOS models to determine the equilibrium volume V_0 and bulk modulus B_0. Typical errors of these parameters can be calculated using the Monte Carlo approach and are shown in Fig. <ref>. Table <ref> summarizes the equilibrium volume V_0 and bulk modulus B_0 obtained using different EOS models for the PBEsol data. We found that the third-order Eulerian EOS (Birch–Murnaghan) works surprisingly well for MgO up to TPa pressures as long as data at high-enough pressures are included. § EFFECT OF LO-TO SPLITTING It is well known that the frequencies of the optical modes parallel and perpendicular to the electric field split (“LO-TO splitting”) in ionic materials such as MgO <cit.>. This mode splitting is missed in regular phonon calculations but can be correctly captured when Born effective charges, piezoelectric constants, and the ionic contribution to the dielectric tensor are considered (by switching on LEPSILON in VASP). The effects on the phonon dispersion relations of B1- and B2-MgO are shown in Fig. <ref>. Figure <ref> compares the resultant differences in vibrational energy and entropy of MgO in different phases and at different densities. The results show that LO-TO splitting only makes a small difference (<0.7%) at T<500 K and then quickly drops to zero at higher T; the effect on the differences between B1 and B2 is also small and negligible. § EFFECTS OF XC FUNCTIONAL ON PHONON RESULTS Figure <ref> shows that different xc functionals produce the same phonon band structure and vibrational free energies within QHA. § CONVERGENCE TEST Figure <ref> shows that sufficiently large cell sizes in combination with fine k-point meshes are needed to ensure convergence of the EOS. For example, a 250-atom cell with a single k point is not enough for B2 at 6.0 g/cm^3. In our QMD simulations, we use a 64-atom cell with the “Brec” special k point and a 54-atom cell with a Γ-centered 2×2×2 k mesh, respectively, for the B1 and B2 calculations. Our additional tests for the phonon calculations show that 8- and 16-atom cells, respectively, are needed for B1 and B2 to obtain converged F_vib(T) results, rather than the primitive 2-atom cells. In this study, we choose 54-atom cells (with a 4×4×4 k mesh) for both B1 and B2 phonon calculations for better accuracy. § CALCULATION OF ANHARMONIC ENERGIES AND COMPARISON BETWEEN DIFFERENT TERMS Figure <ref> shows the finite-temperature internal energies of MgO estimated from cold calculations under QHA (E_cold+QHA) in comparison with values from direct QMD simulations (E_QMD). 
Overall, E_cold+QHA and E_QMD are similar to each other, with noticeable differences near zero K, because of the nuclear quantum effects, or at high temperatures, due to increased anharmonic vibration and electron excitation effects. The differences are more evident when the ion thermal energies (E-E_cold) are plotted with respect to the classical crystal value of 3k_BT. The mismatch between QMD and QHA near zero K and the proximity of E_QHA to 3k_BT at high temperatures have motivated us to define E_anharm=E_QMD-E_cold+QHA≈ E_QMD-E_cold-3k_BT in the TDI Eq. <ref> to calculate the anharmonic free energy F_anharm. Under this approximation, the total free energy of the system F(V, T)=E_cold(V)+F_i, QHA(V, T)+F_i-th, anharm(V, T) +F_e-th(V, T) ≈ E_cold(V)+F_ind.ph.^quantum.(V, T)+F_int.ph.^class.(V, T) +F_e-th(V, T) where the subindices “ind.” and “int.” denote independent and interacting, “ph.” denotes phonon, “class.” and “quantum.” represents the nature of the ions as being classical and quantum, respectively, and “e-th” denotes the electron thermal term. The only difference from an entirely accurate (“quantum”) description lies in the approximation in the anharmonic term by using classical ions (as in QMD simulations and the classical-crystal reference for TDI), whose effect, we believe, is negligible for the purpose of this paper. We have performed extensive tests and found E_anharm(V,T) can be isochorically fitted well using sixth- and eighth-order polynomials for the B1 and B2 phases, respectively. We also note that the different choices of T_ref or fitting E_QMD by using cubic splines can affect the value of F_anharm (see Fig. <ref>), while lower-order polynomials or exponential fits <cit.>, although they were found to work for certain materials at ambient densities or relatively low temperatures, break down for MgO at high densities and temperatures. In practice, QMD is less efficient and inappropriate for simulating near-zero temperatures. Therefore we have to choose a finite value for T_ref in TDI and assume QHA is valid for any temperature below T_ref. This would technically limit the accuracy of the anharmonic free energies, as shown in Fig. <ref>(b) by the different values of F_anharm when choosing different T_ref and fitting approaches. Figure <ref> also shows that the contributions by electron thermal excitation become increasingly significant when the temperature exceeds 8000 K, more at lower densities. The anharmonic vibration and electron thermal terms are relatively small in comparison to the lattice vibration as accounted under QHA. However, because of the similarities between energies of the B1 and B2 phases, the effects of anharmonic vibration can significantly affect the B1-B2 transition boundary, as shown in Fig. <ref>. Figure <ref> summarizes the B1-B2 transition pressure based on free energies calculated using different approaches. Despite the distinctions between predictions by AFQMC and DFT-PBEsol or SCAN at zero K, all methods give similar trends of decreasing P_tr (by ∼20 to 40 GPa at 9000 K, relative to the corresponding values at 0 K) and enlarging uncertainty (by ∼40 to 50 GPa at 9000 K) as temperature increases. The relations in P_tr between the different approaches AFQMC, DFT-PBEsol, and DFT-SCAN at high temperatures remain similar to those under QHA, whereas the anharmonic effects clearly steepen the dT/dP slope and push P_tr to higher values than QHA predictions. 
With polynomial fits of E_anharm, the differences between the phase boundaries based on QHA and anharmonic calculations are smaller if the values of T_ref are higher (light long dashed line-squares); in comparison, cubic spline fits of E_QMD using the same T_ref tend to produce larger differences than the polynomial fits of E_anharm (light short dashed line-squares). We have quantified the phase-boundary differences by calculating the Clapeyron slope. The results are summarized in Table <ref>. Furthermore, we have tested by employing two different versions of the TDI/temperature-integration approach to cross-check our above results, including (1) a more direct approach <cit.>: F(V, T)/T-F(V, T_ref)/T_ref= ∫_1/T_ref^1/T E(V, 𝒯) d1/𝒯 and (2) an indirect approach [by taking the difference of Eq. <ref> with respect to a reference system (e.g., the system under QHA) that also satisfies Eq. <ref>]: F(V, T)-F_ref(V, T)= T∫_1/T_ref^1/T [E(V, 𝒯)-E_ref(V, 𝒯) ] d1/𝒯. These tests were performed at T=3000, 6000, and 9000 K, with T_ref fixed to 500 K for simplicity. In approach (1), the free energy at T_ref is approximated by the corresponding values under QHA; in approach (2), the QHA system is taken as the reference, which defines F_ref and E_ref. In both approaches, an additional term E_QC(V, 𝒯)=E_QHA(V, 𝒯)-3k_B𝒯 has been introduced as quantum correction of the internal energy from QMD, similar to that in Ref. Jung2023. We note that the quantum correction is crucial to obtain accurate free energies within the temperature integration approach, which starts from a cold reference state where the important nuclear quantum effects are included by QHA but missed in QMD. We also note that, with the quantum correction and with E_cold deducted from all energy terms, Eq. <ref> is equivalent to our method introduced in detail above and in Sec. <ref> (Eq. <ref>). Based on our PBEsol data (cold curve, QHA, and QMD), the free energies calculated using these two approaches are similar, both producing similar B1–B2 transition pressures: 486 GPa at 3000 K, 462 GPa at 6000 K, and 439 GPa at 9000 K. The excellent consistency of the results from the tests with those shown in Fig. <ref> (blue shaded area) reconfirms the methodology and findings of this study. § COMPARISON WITH EXPERIMENTS Figure <ref> compares our AFQMC results of the B1–B2 transition to shock experiments <cit.> and recent theoretical calculations <cit.>. The previous calculations were based on LDA/PBE, and their predicted P_tr at 0 K is larger than our AFQMC prediction, consistent with our findings shown in Fig. <ref>. The differences get smaller at higher temperatures until approximately 8000 K, above which the previous calculations show a lower P_tr and thus a less steep Clapeyron slope than ours. Our estimation of the B1–B2 phase boundary, with uncertainty, agrees with the wiggled regions in both experiments by McWilliams et al. <cit.> and Bolis et al. <cit.>. This suggests the turnovers in both experiments can be associated with the B1–B2 transition. To fully unveil the origins of the subtle differences between the measurements, however, still requires improved experimental diagnostics and theoretical constraints on the structure, kinetics, and thermodynamic conditions of the samples under shock compression. 
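The direct temperature-integration form quoted above can be checked numerically on an analytically solvable toy model, since F(T)/T - F(T_ref)/T_ref = ∫ E d(1/T) holds exactly, e.g., for a classical harmonic system. The following is a minimal sketch of that consistency check; the mode frequencies and reference temperature are placeholders, not our MgO free energies.

```python
import numpy as np
from scipy.integrate import quad

KB = 8.617333262e-5      # eV/K
HBAR = 6.582119569e-16   # eV*s
OMEGA = 2.0 * np.pi * np.array([5e12, 10e12, 15e12])   # toy mode frequencies, rad/s

def f_exact(T):
    """Classical harmonic free energy, F = kB*T*sum(ln(hbar*omega/(kB*T)))."""
    return KB * T * np.sum(np.log(HBAR * OMEGA / (KB * T)))

def e_exact(T):
    """Classical harmonic internal energy, E = n_modes * kB * T."""
    return OMEGA.size * KB * T

def f_integrated(T, T_ref):
    """Temperature integration: F/T - F_ref/T_ref = int_{1/T_ref}^{1/T} E d(1/T)."""
    integral, _ = quad(lambda inv_t: e_exact(1.0 / inv_t), 1.0 / T_ref, 1.0 / T)
    return T * (f_exact(T_ref) / T_ref + integral)

if __name__ == "__main__":
    for T in (3000.0, 6000.0, 9000.0):
        print(f"T = {T:6.0f} K   exact {f_exact(T):+.6f} eV   "
              f"integrated {f_integrated(T, T_ref=500.0):+.6f} eV")
```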
http://arxiv.org/abs/2307.02470v1
20230705174452
Statistical Physics Perspective on Economic Inequality
[ "Victor M. Yakovenko" ]
econ.GN
[ "econ.GN", "physics.soc-ph", "q-fin.EC", "stat.AP" ]
JQI, Department of Physics, University of Maryland, College Park, Maryland 20742, USA Professor of Physics; Ph.D. in Physics from the Landau Institute for Theoretical Physics in 1987 ORCID <https://orcid.org/0000-0003-3754-1794> Google Scholar <https://scholar.google.com/citations?user=pEnxwCMAAAAJ> Web page <https://physics.umd.edu/~yakovenk/econophysics/> E-mail <[email protected]>
This article is a supplement to my main contribution to the Routledge Handbook of Complexity Economics (2023). On the basis of three recent papers, it presents an unconventional perspective on economic inequality from a statistical physics point of view. One section demonstrates empirical evidence for the exponential distribution of income in 67 countries around the world. The exponential distribution was not familiar to mainstream economists until it was introduced by physicists by analogy with the Boltzmann-Gibbs distribution of energy and subsequently confirmed in empirical data for many countries. Another section reviews the two-class structure of income distribution in the USA. While the exponential law describes the majority of the population (the lower class), the top tail of income distribution (the upper class) is characterized by the Pareto power law, and there is no clearly defined middle class in between. As a result, the whole distribution can be very well fitted by using only three parameters. Historical evolution of these parameters and inequality trends are analyzed from 1983 to 2018. Finally, global inequality in energy consumption and CO_2 emissions per capita is studied using the empirical data from 1980 to 2017. Global inequality, as measured by the Gini coefficient G, had been decreasing until around 2010, but then saturated at the level G=0.5. The saturation at this level was theoretically predicted on the basis of the maximal entropy principle, well before the slowdown of the global inequality decrease became visible in the data. This effect is attributed to accelerated mixing of the world economy due to globalization, which brings it to the state of maximal entropy and thus results in global economic stagnation. This observation has profound consequences for social and geopolitical stability and the efforts to deal with climate change.
Statistical Physics Perspective on Economic Inequality Victor M. Yakovenko 2 July 2023 ======================================================
§ INTRODUCTION This article is a concise follow-up to my paper <cit.> reproduced in this Routledge Handbook of Complexity Economics (2023). The update is based on three papers <cit.> published after <cit.> and summarized in three sections below. All papers are available at <https://physics.umd.edu/~yakovenk/econophysics/>. By analogy with the Boltzmann-Gibbs distribution[<https://en.wikipedia.org/wiki/Boltzmann_distribution>] of energy in statistical physics, Dragulescu-2000 proposed that the stationary probability distribution of money in an ensemble of interacting economic agents follows an exponential law. They performed agent-based modeling[<https://physics.umd.edu/~yakovenk/econophysics/animation.html>] to illustrate how the exponential Boltzmann-Gibbs distribution of money develops from an initially equal distribution due to monetary transactions between the agents, which are analogous to collisions between molecules in a gas.
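A minimal sketch of such a money-exchange simulation is given below. The specific trading rule (a fixed amount dm passed from a randomly chosen payer to a randomly chosen receiver, with transactions rejected if they would drive the payer's balance negative) and all numerical parameters are illustrative assumptions rather than the exact setup of the cited paper.

```python
import numpy as np

# Minimal sketch of a conserved-money exchange model, in the spirit of the
# agent-based simulations described above. The trading rule and parameter
# values are illustrative assumptions, not the cited paper's exact setup.

rng = np.random.default_rng(0)
N = 1_000                  # number of agents
M = 20_000.0               # total money, conserved by pairwise transfers
dm = 1.0                   # fixed amount transferred per transaction
money = np.full(N, M / N)  # initially everyone holds the same amount

for _ in range(1_000_000):
    i, j = rng.integers(0, N, size=2)
    if i != j and money[i] >= dm:   # reject transfers that would make the payer's balance negative
        money[i] -= dm              # agent i pays agent j
        money[j] += dm

# The stationary histogram is expected to approach the Boltzmann-Gibbs exponential
# law P(m) ~ exp(-m/T), with the "money temperature" T equal to the average money per agent.
T = money.mean()
hist, edges = np.histogram(money, bins=40, density=True)
print("money temperature T = M/N =", T)
```

Tracking the binned entropy S = -∑_k p_k ln p_k during such a run illustrates the behavior described next.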
The same computer simulations also show that the entropy of the system increases and saturates at its maximal value in statistical equilibrium, in agreement with the second law of thermodynamics. Rosser-2016 highlighted the distinction between two flavors of entropy: “ontological” and “metaphorical.” The former is the thermodynamic entropy introduced by Carnot, Clausius, and Boltzmann to physics in the 19th century. It governs the flow of energy in physical systems via the second law of thermodynamics. The latter is the informational entropy introduced by Shannon and von Neumann in the 20th century as a measure of combinatorial complexity in any statistical ensemble, not limited to a physical ensemble. The statistical physics perspective presented below is guided by application of the second, generalized concept of entropy to an ensemble of economic agents, either at the national or global scale. This perspective is the opposite of the representative-agent approach of traditional economics, which, by construction, reduces an ensemble to a single agent and thus ignores statistical and heterogeneous aspects of the economy, such as entropy and inequality <cit.>.
§ EXPONENTIAL DISTRIBUTION OF INCOME IN 67 COUNTRIES In their next paper, Dragulescu-2001a found empirical evidence for the exponential distribution of income in the USA by analyzing the data from the U.S. Census Bureau. Further analysis of the data from the Internal Revenue Service (IRS, the U.S. tax agency) confirmed the exponential shape for the lower part of income distribution, where the overwhelming majority of the population belongs, whereas the tail of income distribution for the top few percent of the population was found to follow the Pareto power law <cit.>. The observation that income distribution for the majority of the population is described by the exponential law was virtually unknown to economists and statisticians, but was eventually recognized in a special issue, The Science of Inequality, of the Science magazine in a one-page article by Cho-2014. Over time, the exponential distribution of income was found in numerous papers for different countries, e.g., by Banerjee-2006 for Australia. A collaborative team with Chinese economists and data scientists <cit.> studied income distribution for 67 countries around the world (where it was possible to obtain reasonably reliable data). The results are shown in Fig. <ref> for the European Union (EU) and its neighboring countries in the left panel and for select non-EU countries in the right panel. The graphs are plotted in log-linear scale, where normalized income on the horizontal axis is in linear scale, whereas cumulative probability on the vertical axis is in logarithmic scale. Data points following straight lines in log-linear scale confirm that the middle part of income distribution for these countries is, indeed, exponential. The upper tail deviates from the exponential shape because of the Pareto law, but is not well sampled in the survey data, so it is truncated. Income distribution at very low income may also deviate from the exponential shape because of welfare policies varying from country to country. These low-end deviations are also truncated in Fig. <ref>. Overall, Fig. <ref> demonstrates overwhelming empirical evidence that the low and middle part of income distribution follows a universal exponential law for many countries. For some countries, the exponential regime extends over a particularly wide range, e.g., for China (CHN) in the right panel of Fig. <ref>.
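The construction of such plots can be illustrated with a short sketch using synthetic incomes rather than survey data: for an exponential law the cumulative probability of incomes above r is C(r) = exp(-r/T), so ln C(r) versus r is a straight line with slope -1/T. The cutoff excluding the top few percent below is an arbitrary choice made only for this example.

```python
import numpy as np

# Sketch: testing the exponential law for the bulk of an income distribution.
# The incomes here are synthetic; with real survey data one would exclude the
# Pareto tail and the lowest incomes before fitting, as discussed in the text.

rng = np.random.default_rng(1)
T_true = 30_000.0                                # assumed "income temperature"
income = rng.exponential(T_true, size=100_000)

r = np.sort(income)
ccdf = 1.0 - np.arange(1, r.size + 1) / r.size   # empirical cumulative probability C(r)

bulk = ccdf > 0.03                               # drop the top ~3% (Pareto regime) before fitting
slope, intercept = np.polyfit(r[bulk], np.log(ccdf[bulk]), 1)
print("fitted T:", -1.0 / slope, " sample mean:", income.mean())
```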
Besides the empirical analysis, Tao-2019 also reformulated the arguments in favor of the exponential distribution from statistical physics to the language of economics using the standard Arrow-Debreu general equilibrium model, combined with Rawls’ fairness principle for free competition.
§ TWO-CLASS STRUCTURE OF INCOME DISTRIBUTION The papers reviewed in Sec. <ref> considered income distribution for a given year. Silva-2005 extended this analysis to the multiyear period of 1983–2001 for the USA. In subsequent papers <cit.> the range of years was expanded even further. The latest paper <cit.> studied the two-class income distribution in the USA for 36 years from 1983 to 2018. The cumulative distribution function versus rescaled income is shown for these years in the left panel of Fig. <ref>. It clearly shows a two-class structure of income distribution. The inset shows that the lower part of the distribution (about 96% of the population in 2018) is well described by an exponential function, as indicated by the straight lines in log-linear scale. On the other hand, the main panel shows that the upper tail (about 4% of the population in 2018) is well described by a power law, as indicated by the straight lines in log-log scale. Importantly, the data reveal only two classes: upper and lower, whereas a middle class does not exist. In fact, there is no commonly accepted definition of the “middle class” in the literature, and each author who writes about it invents their own definition. Mathematically, the two-class distribution can be derived as a (quasi)stationary solution of the Fokker-Planck equation[<https://en.wikipedia.org/wiki/Fokker–Planck_equation>] (also known as the forward Kolmogorov equation in mathematics) for stochastic dynamics of income with coexisting additive and multiplicative components <cit.>. The stationary distribution interpolates between an exponential law at the low end and a power law at the high end <cit.> and is known as the Pearson Type IV distribution <cit.>. Using the two-class decomposition, income distribution can be very well fitted by using only three parameters: the mean income T of the exponential bulk (which is analogous to temperature in physics), the exponent α of the power-law tail, and the crossover income r_* separating the lower and upper classes. Historical evolution of the first two parameters is shown in the right panel of Fig. <ref>. Panel (b) shows that the Pareto exponent α has been decreasing since 1983 (albeit flattening after 2000), which indicates “fattening” of the upper tail, i.e., rich getting richer. Panel (c) of Fig. <ref> shows the income temperature T in comparison with the mean income ⟨ r⟩ of the whole distribution. The normalized difference between these two parameters represents the share of income f=(⟨ r⟩-T)/⟨ r⟩ going to the upper class. The spikes on the red curve for f in Panel (c) indicate sharp increases in inequality due to the enhanced share of income going to the upper class. The first spike coincides with the .com bubble in the stock market, the second spike with the housing bubble, and the third spike with the Quantitative Easing (QE) pursued by the Federal Reserve. Clearly, inequality peaks during speculative bubbles in the financial markets. The third spike gives direct evidence that the bailout of the financial system by the Fed resulted in an increase of inequality. The parameter f has increased even further in subsequent years <cit.>. Panel (a) of Fig.
<ref> shows the Gini coefficient G in comparison with the theoretical formula G=(1+f)/2 derived from the two-class decomposition by Silva-2005. A very good agreement after 1995 indicates that the historical dynamics of the Gini coefficient is completely determined by the share of income f going to the upper “superthermal” class, whereas relative inequality within the lower “thermal” class remains essentially constant over the span of 36 years <cit.>. Furthermore, Shaikh-2023 proposed a metric for the ranking of countries, called the Vast Majority Income, on the basis of this formula for the Gini coefficient. Ludwig-2022 also studied the shares of tax revenue coming from the lower and upper classes, according to the IRS data. By 2018, the income share of the top 1% of the population has increased to 21%, which is almost twice the total income share of 12% going to the bottom half (50%) of the population. At the same time, the tax share paid by the bottom 50% of the population has decreased to 3% and has become almost negligible, because their total income share is so low. In contrast, the tax share of the top 1% of the population has increased to 40%. Thus, the majority of tax revenue now comes from the top few percent of the population, where most of the income is concentrated. Not only does the upper-class income share increase, but the fraction of the population belonging to the upper class increases too <cit.>. In a relative sense, the upper-class population expands (while still remaining a small fraction at 4%), while the lower-class population shrinks (while still remaining the overwhelming majority). Ludwig-2022 speculated that this is due to the digitization of the economy in the last 40 years. There was a rapid proliferation of personal computers during the 1980s, followed by the spread of the Internet and the World Wide Web in the 1990s, and then by ubiquitous personal mobile devices and a shift to cloud computing at the present time. This transformation enabled the creation and relatively easy scaling-up of digital platforms for highly non-local business operations. In the past, many businesses were local: taxi companies, book stores, video rental stores, etc. Now they are largely displaced by a small number of national and global network platforms, such as Uber and Lyft for riding, Amazon for books initially and then for all kinds of goods, Netflix for DVD rentals and video streaming, etc. The founders and owners of such network platforms become super-rich, because these platforms serve a huge number of customers, in contrast to the old-fashioned local businesses. Thus the growth of the upper class may be a reflection of the ongoing transformation of business network topology from local clusters to highly connected superclusters and global hubs.
§ GLOBAL INEQUALITY IN ENERGY CONSUMPTION AND CO_2 EMISSIONS Sections <ref> and <ref> focus on income inequality within a given country. However, there is also global inequality between rich and poor countries around the world. Studying global monetary inequality is complicated by different currencies, whose nominal conversion rates are not particularly representative. One way around this problem is to use purchasing power parity[<https://en.wikipedia.org/wiki/Purchasing_power_parity>] <cit.>. Lawrence-2013 took another approach and investigated global inequality in energy consumption and CO_2 emissions per capita using the data from the U.S. Energy Information Administration (EIA) for 1980–2010.
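How such population-weighted Lorenz curves and the corresponding Gini coefficient can be computed is sketched below; the input arrays are placeholders rather than the actual EIA data, and the exponential reference curve anticipates the comparison made in the next paragraph.

```python
import numpy as np

# Sketch: population-weighted Lorenz curve and Gini coefficient for per-capita
# consumption (or emissions) across countries. The inputs are placeholders,
# not the EIA data used in the cited papers.

def lorenz_and_gini(per_capita, population):
    order = np.argsort(per_capita)                    # sort countries from poorest to richest
    pop = population[order].astype(float)
    totals = per_capita[order] * pop                  # each country's total consumption
    x = np.concatenate(([0.0], np.cumsum(pop) / pop.sum()))        # cumulative population share
    y = np.concatenate(([0.0], np.cumsum(totals) / totals.sum()))  # cumulative consumption share
    # Trapezoid rule: G = 1 - 2 * (area under the Lorenz curve)
    gini = 1.0 - np.sum((y[1:] + y[:-1]) * np.diff(x))
    return x, y, gini

# Reference Lorenz curve of the exponential (Boltzmann-Gibbs) distribution,
# y(x) = x + (1 - x) ln(1 - x), whose Gini coefficient equals 1/2.
x_ref = np.linspace(0.0, 1.0, 201)[:-1]               # exclude x = 1 to avoid log(0)
y_ref = x_ref + (1.0 - x_ref) * np.log(1.0 - x_ref)

# Example with made-up numbers for three hypothetical countries:
x, y, g = lorenz_and_gini(np.array([1.0, 4.0, 10.0]), np.array([1000, 500, 300]))
print("Gini:", round(g, 3))
```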
Energy consumption is a physical measure of inequality in standards of living, whereas CO_2 emissions are closely correlated with it, because most energy is produced from fossil fuels. The corresponding Lorenz curves are shown in Fig. <ref>. Computer animation of the time evolution of these Lorenz curves is also available.[<https://physics.umd.edu/~yakovenk/econophysics/global.html>] In both graphs, small circles indicate various countries, whereas big circles indicate some labeled countries. Not surprisingly, the Lorenz curves for energy consumption and CO_2 emissions look quite similar. The Lorenz curves in 1980 exhibit a sharp slope change in the middle, which separates two groups of countries. One group in the top-right sector has a high slope, indicating high energy consumption and CO_2 emissions per capita. This group consists of “developed” countries, mostly in North America and Europe. Another group in the bottom-left sector has a low slope, indicating low energy consumption and CO_2 emissions per capita. This group consists of “developing” countries, such as China, India, and Brazil. In the subsequent years since 1980, the Lorenz curves move up, indicating that global inequality decreases. By 2010, the cusp in the middle of the Lorenz curves has smoothed out. Now there is no sharp boundary between “developed” and “developing” countries anymore. This is largely due to China moving to the middle of the curve between these two groups of countries. The black curve in Fig. <ref> represents the (analytically) calculated Lorenz graph for an exponential distribution. In statistical physics, the latter corresponds to the Boltzmann-Gibbs distribution, which maximizes entropy in thermal equilibrium. We observe that the empirical Lorenz curves move toward the calculated black exponential curve, approaching it closely from below. Lawrence-2013 attributed this behavior to globalization of the world economy, which mixes the world and brings it closer to the state of maximal entropy. They predicted that global inequality will soon stop decreasing and will saturate at the Gini coefficient G=0.5 corresponding to the exponential distribution. This prediction was spectacularly confirmed in the follow-up paper by Semieniuk-2020, when the data for subsequent years up to 2017 became available. Figure <ref> shows the Gini coefficient from 1980 to 2017 for CO_2 emissions per capita. Black circles are the same data points for 1980–2010 as analyzed in <cit.>. They manifest a decreasing trend and no sign of saturation yet. In contrast, the new data points for 2011–2017 (red squares) exhibit saturation at the level of G=0.5, indicating that global inequality stopped decreasing soon after 2010. This observation confirms the prediction made by Lawrence-2013 on the basis of the maximal entropy principle. Remarkably, this prediction was made at the time when global inequality had been steadily decreasing, and there was no indication of saturation in the available data yet. The advanced prediction in <cit.> and the subsequent confirmation in <cit.> strongly support the proposition that economic globalization is, indeed, governed by the principle of maximal entropy. This observation has profound consequences for strategies and scenarios dealing with climate change <cit.>. Various calls have been made for either lifting billions of people from poverty, which implies increased consumption and carbon emissions, or capping the level of per-capita CO_2 emissions at the top of the distribution above a certain threshold.
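For completeness, the exponential reference curve and the saturation level G=0.5 quoted above follow from a short, standard calculation, sketched here rather than reproduced from the cited papers.

```latex
% Lorenz curve of the exponential (Boltzmann-Gibbs) distribution P(r) = e^{-r/T}/T,
% parametrized by the income (or consumption) cutoff \bar{r}:
\begin{align*}
  x(\bar{r}) &= \int_0^{\bar{r}} \frac{e^{-r/T}}{T}\,dr = 1 - e^{-\bar{r}/T},
  &
  y(\bar{r}) &= \frac{1}{T}\int_0^{\bar{r}} r\,\frac{e^{-r/T}}{T}\,dr
             = 1 - \Bigl(1 + \frac{\bar{r}}{T}\Bigr) e^{-\bar{r}/T}.
\end{align*}
% Eliminating \bar{r} gives the black reference curve and its Gini coefficient:
\begin{equation*}
  y = x + (1 - x)\ln(1 - x),
  \qquad
  G = 1 - 2\int_0^1 y\,dx = 1 - 2\Bigl(\tfrac{1}{2} - \tfrac{1}{4}\Bigr) = \tfrac{1}{2}.
\end{equation*}
```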
Semieniuk-2020 constructed the Lorenz curves implied by these redistributive proposals and demonstrated that they would require an unprecedented reduction in global inequality, far below historical levels. The recent saturation of the global inequality decrease further undermines the feasibility of such redistributive scenarios. A decreasing trend was also observed in global income inequality <cit.>, but saturation has not been recognized yet.
§ CONCLUSION The term “econophysics” was introduced in 1995 at a conference in Kolkata by the theoretical physicist Eugene Stanley for a new interdisciplinary research field applying methods of statistical physics to economics and finance <cit.>. The paper by Stanley-1996 presented a manifesto of the new field, arguing that “behavior of large numbers of humans (as measured, e.g., by economic indices) might conform to analogs of the scaling laws that have proved useful in describing systems composed of large numbers of inanimate objects.” About 30 years later, econophysics is now well recognized in physics <cit.> and is gradually penetrating into mainstream economics <cit.>. The infusion of new ideas from a different field often results not in answering old questions within the old framework, but in establishing a new framework with new questions. Much of the misunderstanding and miscommunication between economists and physicists happens not because they are getting different answers, but because they are answering different questions <cit.>. The empirical demonstration of the two-class structure of income distribution, described by the simplest mathematical functions: exponential and power law, is a significant accomplishment of econophysics. The Pareto power law for the upper tail has been known for a long time, but the exponential distribution, describing the majority of the population, was not known before. The ability to fit the whole income distribution by using only three parameters is somewhat reminiscent of Kepler's discovery that the vast amount of accumulated data for positions of planets can be reduced to a few parameters for their elliptic orbits. Such data compression is a signature of scientific progress, which subsequently led Newton to his discovery of the gravity law as an explanation for the elliptic orbits. While social classes have been known in political economy since Karl Marx, the realization that they are described by simple mathematical distributions is quite new. As a follow-up to <cit.>, Wright-2005, Wright-2009 demonstrated the emergence of two classes in more sophisticated agent-based simulations. This work was further developed by Isaac-2019 and in the book by Cottrell-2009, integrating economics, computer science, and physics. The prediction <cit.> and confirmation <cit.> of the saturation of the global inequality decrease is another significant accomplishment of econophysics. As mentioned in the former paper, “in physics, theories that not only explain known experiments, but also make successful predictions about future observations are particularly valuable.” At the International Energy Workshop[<https://www.internationalenergyworkshop.org/meetings-10.html>] in 2017 at College Park, I chatted with the chief energy modeler of EIA. He said in his talk that they have dynamical models, which are complicated and have many unknown parameters, so predictions cannot be made in practice. I said that I do not have such models, but can make a valid prediction on the basis of a general principle (of entropy maximization).
There is discussion in the media about a global economic stagnation, sometimes called the “economic ice age,” pointing to various specific reasons <cit.>. In contrast, Lawrence-2013 suggested that the actual underlying reason is entropic. Globalization was driving the world economy toward the state of maximal entropy, which has been achieved now, thus resulting in global stagnation. Paradoxically, this advance toward the global statistical equilibrium may bring on social and geopolitical unrest because of a slowdown in upward mobility for the lower part of the global distribution and a downward slide from the previously privileged positions for the upper part <cit.>. On the other hand, the global economic stagnation may help to slow down climate change. The relentless growth, fueled by fossil fuels for the last couple of centuries since the beginning of the Industrial Revolution, must decelerate because of the limited environmental capacity of the Earth. However, the saturation of the global inequality decrease greatly complicates any agreement on limiting carbon emissions because of vast differences between the countries at the opposite ends of the global distribution. The urgently needed transition from redistributable hydrocarbon fuels toward decentralized locally-generated and locally-consumed renewable energy could help to both lower carbon emissions and reduce global inequality <cit.>.
[Banerjee, Yakovenko, and Di Matteo(2006)]Banerjee-2006 Banerjee, A., Yakovenko, V. M., and Di Matteo, T. (2006) “A study of the personal income distribution in Australia,” Physica A 370, 54–59, <http://dx.doi.org/10.1016/j.physa.2006.04.023> [Banerjee and Yakovenko(2010)]Banerjee-2010 Banerjee, A., and Yakovenko, V. M. (2010) “Universal patterns of inequality”, New Journal of Physics 12, 075032, <http://dx.doi.org/10.1088/1367-2630/12/7/075032> [Bott(2013)]Bott-2013 Bott, U. (2013) “The coming global economic ice age?” The Globalist, 12 August 2013, <http://www.theglobalist.com/the-coming-global-economic-ice-age/> [Chakrabarti(2005)]Chakrabarti-2005 Chakrabarti, B. K. (2005) “Econophys-Kolkata: a short story,” pp. 225–228, in Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K., eds., Econophysics of Wealth Distributions (Springer, Milan) [Cho(2014)]Cho-2014 Cho, A. (2014) “Physicists say it's simple,” Science 344, 828, <https://doi.org/10.1126/science.344.6186.828> [Cottrell, Cockshott, Michaelson, Wright, and Yakovenko(2009)]Cottrell-2009 Cottrell, A. F., Cockshott, P., Michaelson, G. J., Wright, I. P., and Yakovenko, V. M. (2009) Classical Econophysics (Routledge, Oxford), ISBN 9780415478489, <http://www.routledge.com/books/details/9780415478489> [Drăgulescu and Yakovenko(2000)]Dragulescu-2000 Drăgulescu, A. A., and Yakovenko, V. M. (2000) “Statistical mechanics of money,” The European Physical Journal B 17, 723–729, <https://doi.org/10.1007/s100510070114> [Drăgulescu and Yakovenko(2001a)]Dragulescu-2001a Drăgulescu, A. A., and Yakovenko, V. M. (2001a) “Evidence for the exponential distribution of income in the USA,” The European Physical Journal B 20, 585–589, <https://doi.org/10.1007/PL00011112> [Drăgulescu and Yakovenko(2001b)]Dragulescu-2001b Drăgulescu, A. A., and Yakovenko, V. M. (2001b) “Exponential and power-law probability distributions of wealth and income in the United Kingdom and the United States,” Physica A 299, 213–221, <http://dx.doi.org/10.1016/S0378-4371 [Drăgulescu and Yakovenko(2003)]Dragulescu-2003 Drăgulescu, A. A., and Yakovenko, V. M.
(2003) “Statistical mechanics of money, income, and wealth: a short survey,” pp. 180–183, in Garrido, P. L., and Marro, J., eds., Modeling of Complex Systems: Seventh Granada Lectures, American Institute of Physics (AIP) Conference Proceedings, vol. 661, <http://dx.doi.org/10.1063/1.1571309> [Isaac(2019)]Isaac-2019 Isaac, A. G. (2019) “Exploring the Social‐Architecture Model,” Eastern Economics Journal, 45, 565–589, <https://doi.org/10.1057/s41302-018-0114-9> [Lawrence, Liu, and Yakovenko(2013)]Lawrence-2013 Lawrence, S., Liu, Q., and Yakovenko, V. M. (2013) “Global inequality in energy consumption from 1980 to 2010,” Entropy 15, 5565–5579, <http://dx.doi.org/10.3390/e15125565> [Ludwig and Yakovenko(2022)]Ludwig-2022 Ludwig, D., and Yakovenko, V. M. (2022) “Physics-inspired analysis of the two-class income distribution in the USA in 1983–2018,” Philosophical Transactions of the Royal Society A, 380, 20210162, <https://doi.org/10.1098/rsta.2021.0162> [Milanović(2012)]Milanovic-2012 Milanović, B. (2012) “Global inequality recalculated and updated: the effect of new PPP estimates on global inequality and 2005 estimates,” Journal of Economic Inequality, 10, 1–18, <http://dx.doi.org/10.1007/s10888-010-9155-y> [Milanović(2020)]Milanovic-2020 Milanović, B. (2020) “The Great Convergence: Global Equality and Its Discontents,” Foreign Affairs, 28 August 2020, <https://www.foreignaffairs.com/world/great-convergence-equality-branko-milanovic> [Milanović(2023)]Milanovic-2023 Milanović, B. (2023) “The World Is Becoming More Equal, Even as Globalization Hurts Middle-Class Westerners,” Foreign Affairs, 14 June 2023, <https://www.foreignaffairs.com/articles/world/2020-08-28/world-economic-inequality> [Pearson(1895)]Pearson-1895 Pearson, K. (1895) “Contribution to the mathematical theory of evolution II: Skew variation in homogeneous material,” Philosophical Transactions of the Royal Society of London A 186, 343–414, <http://dx.doi.org/10.1098/rsta.1895.0010> [Rosser(2016)]Rosser-2016 Rosser, J.B. (2016) “Entropy and econophysics,” The European Physical Journal Special Topics 225, 3091–3104, <https://doi.org/10.1140/epjst/e2016-60166-y> [Salmon(2023)]Salmon-2023 Salmon, F. (2023) “Global inequality at lowest level in nearly 150 years,” Axios, 14 June 2023, <https://www.axios.com/2023/06/14/global-economic-inequality> [Semieniuk and Yakovenko(2020)]Semieniuk-2020 Semieniuk, G., and Yakovenko, V. M. (2020) “Historical evolution of global inequality in carbon emissions and footprints versus redistributive scenarios”, Journal of Cleaner Production, 264, 121420, <https://doi.org/10.1016/j.jclepro.2020.121420> [Shaikh(2017)]Shaikh-2017 Shaikh, A. (2017) “Income Distribution, Econophysics and Piketty”, Review of Political Economy 29, 18–29, <https://doi.org/10.1080/09538259.2016.1205295> [Shaikh and Ragab(2023)]Shaikh-2023 Shaikh, A., and Ragab, A. (2022) “Some universal patterns in income distribution: An econophysics approach,” Metroeconomica, 74, 248–264, <https://doi.org/10.1111/meca.12412> [Silva and Yakovenko(2005)]Silva-2005 Silva, A. C., and Yakovenko, V. M. (2005) “Temporal evolution of the `thermal' and `superthermal' income classes in the USA during 1983–2001,” Europhysics Letters 69, 304–310, <http://dx.doi.org/10.1209/epl/i2004-10330-3> [Stanley et al.(1996)]Stanley-1996 Stanley, H. E., et al. 
(1996) “Anomalous fluctuations in the dynamics of complex systems: from DNA and physiology to econophysics,” Physica A 224, 302–321, <https://doi.org/10.1016/0378-4371(95)00409-2> [Tao et al.(2019)]Tao-2019 Tao, Y., Wu, X., Zhou, T., Yan, W., Huang, Y., Yu, H., Mondal, B., and Yakovenko, V. M. (2019) “Exponential structure of income inequality: evidence from 67 countries”, Journal of Economic Interaction and Coordination, 14, 345–376, <https://doi.org/10.1007/s11403-017-0211-6> [Tharoor(2023)]Tharoor-2023 Tharoor, I. (2023) “How the world is getting more equal — and unequal — at the same time,” The Washington Post, 16 June 2023, <https://www.washingtonpost.com/world/2023/06/16/inequality-global-income-branko-milanovic/> [Wright(2005)]Wright-2005 Wright, I., (2005) “The social architecture of capitalism,” Physica A 346, 589–620, <https://doi.org/10.1016/j.physa.2004.08.006> [Wright(2009)]Wright-2009 Wright, I. (2009) “Implicit microfoundations for macroeconomics,” Economics E-Journal, 3, 1–27, <http://dx.doi.org/10.5018/economics-ejournal.ja.2009-19> [Yakovenko and Rosser(2009)]Yakovenko-Rosser-2009 Yakovenko, V. M., and Rosser, J. B. Jr. (2009) “Colloquium: Statistical mechanics of money, wealth, and income,” Reviews of Modern Physics, 81, 1703–1725, <https://dx.doi.org/10.1103/RevModPhys.81.1703> [Yakovenko(2016)]Yakovenko-2016 Yakovenko, V. M. (2016) “Monetary economics from econophysics perspective,” The European Physical Journal Special Topics 225, 3313–3335, <http://dx.doi.org/10.1140/epjst/e2016-60213-3> [Yakovenko(2022)]Yakovenko-2022 Yakovenko, V. M. (2022) “Statistical Mechanics Approach to Econophysics,” in Meyers, R. A., ed., Encyclopedia of Complexity and Systems Science, 2nd edition, (Springer, Berlin, online), <https://doi.org/10.1007/978-3-642-27737-5_169-2> >
http://arxiv.org/abs/2307.02670v1
20230705220117
Roman CCS White Paper: Measuring Type Ia Supernovae Discovered in the Roman High Latitude Time Domain Survey
[ "Rebekah Hounsell", "Dan Scolnic", "Dillon Brout", "Benjamin Rose", "Ori Fox", "Masao Sako", "Phillip Macias", "Bhavin Joshi", "Susana Desutua", "David Rubin", "Stefano Casertano", "Saul Perlmutter", "Greg Aldering", "Kaisey Mandel", "Megan Sosey", "Nao Suzuki", "Russell Ryan" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.CO", "astro-ph.HE" ]
Roman CCS White Paper Measuring Type Ia Supernovae Discovered in the Roman High Latitude Time Domain Survey Roman Core Community Survey: High Latitude Time Domain Survey Scientific Categories: Stellar Physics and Stellar Types Additional scientific keywords: Supernovae, Cosmology, Dark energy, Infrared Photometry Submitting Author: Rebekah Hounsell, University of Maryland Baltimore County/ NASA Goddard Space Flight Center ([email protected]) List of contributing authors: Dan Scolnic, Duke University, ([email protected]) Dillon Brout, Boston University Benjamin Rose, Baylor University ([email protected]) Ori Fox, Space Telescope Science Institute ([email protected]) Masao Sako, University of Pennsylvania ([email protected]) Phillip Macias, UC Santa Cruz ([email protected]) Bhavin Joshi, Johns Hopkins University ([email protected]) Susana Desutua, NIST ([email protected]) David Rubin, UH ([email protected]) Stefano Casertano, STScI ([email protected]) Saul Perlmutter, University of California, Berkeley ([email protected]) Greg Aldering, Lawrence Berkeley National Lab ([email protected]) Kaisey Mandel, University of Cambridge ([email protected]) Megan Sosey, STScI ([email protected]) Nao Suzuki, Lawrence Berkeley National Lab ([email protected]) Russell Ryan, STScI ([email protected]) Abstract: We motivate the cosmological science case of measuring Type Ia supernovae with the Nancy Grace Roman Space Telescope as part of the High Latitude Time Domain Survey. We discuss previously stated requirements for the science, and a baseline survey strategy. We discuss the various areas that must still be optimized and point to the other white papers that consider these topics in detail. Overall, the baseline case should enable an exquisite measurement of dark energy using SNe Ia from z=0.1 to z>2, and further optimization should only strengthen this once-in-a-generation experiment. empty § INTRODUCTION The Nancy Grace Roman Space Telescope (Roman) is NASA’s next large flagship mission, due for launch in late 2026. One of the mission’s key objectives is to determine the expansion history of the universe and to test possible explanations for its apparent acceleration, including dark energy and modifications to general relativity. The three main cosmological probes it will utilize are Weak Lensing, Galaxy Redshift measurements and Type Ia supernovae (SN Ia). In order to discover and measure SNe Ia, the mission will conduct a generation-defining experiment in time-domain astronomy via a Core Community survey called the High Latitude Time Domain Survey (HLTDS). §.§ Constraining cosmological parameters with Roman Measurements of SNe Ia The measurement of SNe Ia distances across a wide range of redshift is a powerful and complementary cosmological probe compared to the weak lensing and galaxy redshift probes. A SN Ia survey has been planned within the Roman mission since its inception, and there are clear requirements for its implementation. One of the major requirements is to obtain a large sample of SN Ia: ≥100 per Δz=0.1 bin over 0.2 ≤ z ≤ 1.7. For success, Near Infra-Red (NIR) images must be obtained in multiple bands, and with a ∼5-day cadence to ensure adequate SN light curve coverage. The depth of the observations, size of the fields, and location have been left for optimization studies and will be the subject of future work within the community. An additional requirement is on the Dark Energy Task Force Figure of Merit <cit.> from the combination of the cosmological constraints. 
The stated requirement is that Roman must be a Stage IV Dark Energy mission. While it is difficult to predict the complementarity/orthogonality of the SN Ia constraints with the other probes, a simple metric is that the constraints for each probe must be 2–3× better than the Stage III experiments (e.g., the Dark Energy Supernova program). §.§ What will make the SN Ia survey successful The constraining power from SNe Ia depends on a differential measurement of the brightness of SNe Ia across a redshift range. Improving constraints on cosmological parameters can be enabled by increasing the redshift range, increasing the number of SNe Ia observed, increasing the precision of each SN Ia measurement, and limiting the impact of systematic uncertainties. The various parts of the constraining power play off each other: there are limited returns when the statistical precision of all the SNe Ia reaches below a systematic floor, and there are limited returns when the precision of a single distance measurement is significantly below the intrinsic scatter in SN Ia brightness. Given these considerations, the community has proposed collecting a robust, statistically significant, and complete sample of cosmology-worthy SNe Ia. The proposed depth is such that within each redshift bin, the accuracy is still limited by the intrinsic scatter in individual SNe (roughly 0.1 mag and 0.06 mag for imaging and spectroscopy, respectively); the systematic accuracy of the flux scale is required to be better than 3 mmag. There are two main ways of constraining distances. The first is from measurements of the light-curve photometry and has been used in recent cosmological analyses <cit.>. A second approach, developed by the SNfactory <cit.>, is to compare SNe Ia spectra directly, as in <cit.>. This alternative approach is enabled by prism measurements, and HLTDS designs with an increased fraction of prism time are discussed in <cit.> and the white paper by Aldering et al., entitled "Balanced Prism Plus Filter Cadence in the High Latitude Time Domain Survey Core Community Survey" (hereafter referred to as the Aldering prism paper). An additional consideration is the acquisition of redshifts. Almost all cosmological analyses with SNe Ia have relied on spectroscopic redshifts, either from the host galaxy or the SN itself. These measurements will be enabled by the Roman prism, grism, or external sources, e.g., ground-based follow-up. In addition to spectroscopic redshifts, the use of photometric redshifts <cit.> is viable in certain situations. The use of photometric redshifts for SN Ia cosmology is still in development, with continued investigation of systematics control and contamination <cit.>, but the potential redshifts acquired from photometry alone should be considered as part of the motivation of the observing strategy for the Roman deep fields. This affects the field location, field depth, and number of filters used. § SURVEY DESIGN In <cit.> an initial High Latitude Time Domain reference survey is presented; this survey design will be the focus of our white paper. While the 25% prism, 75% imaging HLTDS presented in <cit.> focuses on the acquisition of SNe Ia, it will also be extremely beneficial for all kinds of transient studies. This design utilizes six of the Wide Field Instrument (WFI) imaging filters (F062, F087, F106, F129, F158, F184, corresponding to R, Z, Y, J, H, F in Table <ref>) and the low resolution prism (P127). 
The six filters selected provide the wavelength range (0.62–1.84 μm) required to capture SN Ia light curves at peak across a broad range of redshift. Note that in all current investigations, the F213-band (K-band) has been excluded. While this band-pass may provide useful information, the thermal noise (4.52 counts per pixel) of this filter makes it expensive to use. Note that the K-band could be implemented successfully within an external survey or study as described in the white papers submitted by Gomez et al., in "Characterizing Superluminous Supernovae with Roman" and Fox et al., in "An Extended Time-Domain Survey (eTDS) to Detect High-z Transients, Trace the First Stars, and Probe the Epoch of Reionization". Multiple tiers are necessary so that SNe Ia at specific redshifts can be targeted with the appropriate filters, and exposure times can be tailored to specific S/N requirements. In this case, the wide tier targets SNe Ia at z ≤ 1, and the deep tier SNe Ia at z ∼ 1.7. Each exposure time is tailored to obtain a SN Ia S/N of 10 at peak in the imaging filters and, for the prism, an integrated S/N of 25 in the rest-frame V-band when spectra within ±5 days of peak are co-added. Such a design is supported by work conducted in <cit.>, in which several two-tier survey strategies were examined and DETF-FoM values ranging from 133 to 352 (using optimistic uncertainties) were obtained. As stated previously, the initial reference survey uses the low resolution prism (0.75–1.80 μm) only 25% of the time. This value can of course be adjusted, and several alternatives are already discussed within <cit.>. The prism would be used for spectroscopic discovery and follow-up of SNe Ia. The impact of Roman's prism on transient science is discussed in the Aldering prism paper, with additional discussion in the white paper by Rose et al., entitled "Options to Increase the Coverage Area of Prism Time Series in the High-Latitude Time Domain Core Community Survey". (See <https://roman.gsfc.nasa.gov/science/WFI_technical.html> for WFI technical information.) §.§ Survey Duration and Cadence The initial reference surveys in <cit.> used the configuration presented in <cit.> and <cit.>, i.e., 30 hr visits, every five days, over two years, for a total observing time of six months. Six months is the minimal duration necessary to obtain a quality sample of SNe over a broad range of redshifts. Note, however, that the typical light-curve length at rest frame for a SN Ia is ∼45 days. At a redshift of z=2, this would be approaching 5 months of survey length. A longer duration survey would allow the procurement of further template images and spectra for subtraction from SNe outburst data and so further refinement of the sample. A shorter survey would significantly reduce the number of SNe Ia obtained per redshift bin and as such compromise one of the main mission objectives. A cadence of ∼5 days is suitable for SNe Ia observed both at z < 1 and above. However, alternative cadences of 4/5 days for the wide field and 8/10 days for the deep have been investigated with promising results. Shorter cadences in the wide tier, however, would not benefit the overall SN Ia science goals: another part of the survey would have to be weakened, and the benefit of faster sampling of light curves that are not particularly fast is limited. While potentially beneficial for other faster transients, a cadence faster than ∼5 days is not required to fully capture the SN Ia evolution and to create a robust statistical sample of low-z SNe. 
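As a rough illustration of how these numbers combine (a back-of-the-envelope sketch, not an official survey calculation; only the ∼45 day rest-frame duration and the cadences quoted above are used, and the (1+z) factor is standard cosmological time dilation), the following Python snippet estimates the observed-frame light-curve duration and the number of epochs obtained per band:

import math

def light_curve_sampling(z, cadence_days, rest_frame_duration=45.0):
    """Return the observed-frame light-curve duration (days) and the approximate
    number of epochs per band, assuming a ~45 day rest-frame duration."""
    observed_duration = rest_frame_duration * (1.0 + z)      # cosmological time dilation
    n_epochs = int(observed_duration // cadence_days) + 1    # visits spanning the light curve
    return observed_duration, n_epochs

for z, cadence in [(0.5, 5.0), (1.0, 5.0), (1.7, 10.0), (2.0, 10.0)]:
    dur, n = light_curve_sampling(z, cadence)
    print(f"z={z}: ~{dur:.0f} days observed (~{dur/30:.1f} months), ~{n} epochs at {cadence:.0f}-day cadence")

At z = 2 the observed duration is about 135 days (roughly 4.5 months), consistent with the statement above that a single high-redshift light curve occupies a large fraction of the six-month survey, while even an 8–10 day cadence in the deep tier still yields of order ten epochs per band.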
A cadence much longer than 8–10 days for the deep field could result in overly sparse sampling for mid-range redshift SNe. For further discussion of the cadence, please see Rubin et al., "Optimizing the Cadence at Fixed Depth". §.§ Field location The white paper by Rose et al., entitled "Considerations for Selecting Fields for the Roman High-latitude Time Domain Core Community Survey", presents a full discussion of the complexities of selecting field locations for the SN Ia survey. The Aldering prism paper also looks at prism survey speed. Selecting an appropriate field is a complex decision that relies upon many factors, such as Milky Way extinction, Zodiacal background, Continuous Viewing Zones, synergies with other mission/project surveys, etc. Fields with high absolute ecliptic and galactic latitudes would therefore be preferable, and as such multiple fields per tier may be required so that area is covered in both the northern and southern hemispheres, allowing for additional survey/ground-based follow-up. Having fields in different hemispheres would also allow for a jackknife re-sampling test. The locations of potential Roman SN Ia fields are presented in Figure 1. In the southern hemisphere, possible fields include the Akari Deep Field South (ADFS)/Euclid Deep Field South (EDFS) region; in the northern hemisphere, the Extended Groth Strip (EGS), Supernova/Acceleration Probe North (SNAP-N), and European Large Area Infrared Space Observatory Survey-North 1 (ELAIS N-1) are good possibilities. § CONCLUSION From previous studies, there is a good baseline strategy for discovering and measuring SNe Ia with Roman. There are still a number of open questions, such as field location and filter allocation, and the community should determine which questions would benefit from early answers for advanced survey planning, and which should continue to be optimized as understanding of the instrument improves and further insights emerge from ongoing studies, other surveys, and analyses.
http://arxiv.org/abs/2307.01103v1
20230703152928
Real-time Likelihood Methods for Improved Gamma-ray Transient Detection and Localization
[ "Matthew Kerr", "Wade Duvall", "Neil Johnson", "Richard Woolf", "J. Eric Grove", "Hannah Kim" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.IM" ]
M. Kerr (ORCID 0000-0002-0893-4073), W. Duvall (ORCID 0000-0001-9322-6153), R. S. Woolf (ORCID 0000-0003-4859-1711), and J. E. Grove (ORCID 0000-0002-0586-193X), Space Science Division, Naval Research Laboratory, Washington, DC 20375-5352, USA. Corresponding author: M. Kerr ([email protected]). We present a maximum likelihood (ML) algorithm that is fast enough to detect γ-ray transients in real time on low-performance processors often used for space applications. We validate the routine with simulations and find that, relative to algorithms based on excess counts, the ML method is nearly twice as sensitive, allowing detection of 240–280% more short γ-ray bursts. We characterize a reference implementation of the code, estimating its computational complexity and benchmarking it on a range of processors. We exercise the reference implementation on archival data from the Fermi Gamma-ray Burst Monitor (GBM), verifying the sensitivity improvements. In particular, we show that the ML algorithm would have detected GRB 170817A even if it had been nearly four times fainter. We present an ad hoc but effective scheme for discriminating transients associated with background variations. We show that the on-board localizations generated by ML are accurate, but that refined off-line localizations require a detector response matrix with about ten times finer resolution than is current practice. Increasing the resolution of the GBM response matrix could substantially reduce the few-degree systematic uncertainty observed in the localizations of bright bursts. § INTRODUCTION The prompt emission from γ-ray bursts (GRBs) is typically concentrated between 100 keV and 1 MeV <cit.> and varies in duration from <2 s (short GRBs) <cit.> to minutes (long GRBs) and even hours <cit.>. These soft γ rays primarily Compton scatter, limiting detection technologies. Wideband Compton telescopes reconstruct the photon interaction and provide a wide field-of-view and modest effective area, but they require very high-resolution, expensive detector elements and readout systems <cit.>. Coded masks enable indirect imaging up to 100–200 keV (above which the mask becomes transparent) with modest field-of-view and angular resolution: the Burst Alert Telescope is capable of providing GRB localizations to <1' <cit.> in a 30^∘ field-of-view. Laue lenses allow direct focusing via coherent Bragg scattering <cit.>, but their bulk, cost, and narrow field-of-view make them impractical for all-sky monitoring. Scintillating crystals provide effective stopping power over a wide energy range and can be read out with simple electronics. Doped NaI and CsI can be grown to large sizes, offering an exceptional sensitivity-to-cost ratio, and detectors based on these crystals have been widely used for γ-ray transient detection. The Vela nuclear test monitoring system used CsI detectors in its serendipitous discovery of GRBs <cit.>. The pioneering <cit.> Burst and Transient Source Experiment (BATSE) on the Compton Gamma-Ray Observatory used very large NaI scintillators that were viewed from the side with photomultiplier tubes. The Gamma-ray Burst Monitor (GBM) on the Fermi Gamma-Ray Space Telescope uses much smaller NaI pucks with Be windows read from the large face to maximize light yield, enabling a particularly low (∼10 keV) threshold <cit.>. Future experiments like Glowbug <cit.> and Starburst <cit.> will use large, thin crystals read out from the edge with silicon photomultipliers <cit.> to achieve very high sensitivity in a compact, low-voltage, low-cost design. 
Scintillators have fast scintillation and readout times, typically ≲1 μs, enabling photon counting but providing no position information. Instead, comparing rates between two detectors with differing incidence angles constrains the source position to a fuzzy great circle. Additional detector pairs provide further constraints and reduce the uncertainty, and many pairs yield a localization that is well-described as a single point with a gaussian-distributed uncertainty. Increasing the signal-to-background ratio further reduces the uncertainty, allowing for designs with a few large facets (Glowbug) or many smaller facets (GBM) to produce comparable localizing power. Scintillators also provide no intrinsic discrimination[Phoswich designs can provide background discrimination but increase cost and complexity.] of background events, e.g. from energetic particles trapped in earth's magnetosphere or γ rays produced by internal radioactivation. Time-dependent background variations can mimic a γ-ray transient, so a detector system must first recognize it, i.e. trigger. Triggers often initiate resource-intensive processes, such as re-pointing the spacecraft or downlinking a large volume of data for offline analysis. Thus, trigger criteria and algorithms must be carefully tuned to provide sensitivity to transients while minimizing false positives. GBM provides one example of an effective on-board trigger scheme <cit.>: it autonomously measures the background rate in each NaI detector, averages excess counts over a range of timescales, and triggers when two or more detectors register a >4.5σ excess. Large SiPM-read crystals, the availability of cheap commercial SmallSat spacecraft buses, and rideshare launch opportunities mean it is now possible to launch γ-ray transient detectors with the performance of heritage instruments at a fraction of the cost. Such detectors will have limited telemetry bandwidth, so a sensitive trigger is required to maximize the scientific merit of the data that are downlinked. For example, the short GRBs accompanying neutron star mergers are expected to be faint because future gravitational wave detectors will detect more distant mergers and because the jetted emission will not generally point towards earth. GRB 170817A was exceptionally close but was only 7% over the GBM trigger threshold <cit.>. In this work, we propose to improve trigger sensitivity and efficiency by using maximum likelihood (ML) for the on-board detection and localization of GRBs and other γ-ray transients. ML methods are already used for offline analysis of GRB spectra and positions <cit.> and for post facto detection <cit.>. We show that such methods are capable of running in real-time on the low-power, low-performance processors typically used for space applications and that they can detect transients that are half as bright as those detectable with rate-based triggers. Moreover, ML links detection and localization, so a coarse position is available immediately upon detection, enabling very low-latency alerts. The same framework can be used to rapidly refine the localization. Finally, ML methods can be tuned to different source classes, including backgrounds, allowing efficient filtering of desired transients. In the next section, we develop the formalism for ML detection and localization with scintillator detectors and show that some simple approximations yield fast but reliable estimates of transient significance. 
In <ref>, we use Monte Carlo simulations of a GBM-like dataset to validate the ML estimators for detection and localization and to estimate the threshold transient flux, allowing an estimate of the relative sensitivity. We describe the computational performance of a reference implementation in <ref> in theory and as benchmarked on relevant hardware. We characterize the algorithmic performance in <ref>, demonstrating both sensitivity to bona fide GRBs and an effective method of reducing false triggers from particle backgrounds. We consider the accuracy of both coarse and follow-up localization in <ref>, finding that the reliability depends strongly on the spatial resolution of the detector response matrix. Finally, we conclude with a summary of the results and a discussion of potential new applications enabled by the method in <ref>. § FAST LIKELIHOOD METHODS γ-ray/e^- interactions in a scintillator release a pulse of optical light that is collected, amplified, and measured. For a thick scintillator, the incident energy E' is entirely converted to a pulse height E proportional to E' with a spectral resolution of order 5%. For thin crystals/higher energies, scattered photons and electrons may escape, yielding E possibly much less than E'. The conversion efficiency (effective area) also drops with increasing energy until pair production begins. The effective area and the energy redistribution can be encapsulated in a response matrix (RM) that converts γ rays from a distant source incident at angle Ω with spectral flux density F(E') (ph keV^-1 s^-1 cm^-2) into an observed counts spectrum N(E) (ph keV^-1 s^-1): N(E) = ∫_0^∞ dE' R(E,E',Ω) F(E',Ω). The RM is typically estimated with Monte Carlo simulations and particle transport codes and tabulated as a matrix, R_lijq: the response to a pixel centered on Ω_l, for a detector with index i, averaged over the observed/measured energy channel centered on E_j and over the incident energy channel centered on E_q. R typically depends on time, and a spacecraft pointing history relates source coordinates to instrument coordinates. We adopt a source model with a background rate (b_ij) for each detector element (i) and energy channel (j) and a point source at position Ω_l with spectrum F_k(E) (k just labels this spectral shape). The predicted counts per unit time are then λ_klij = b_ij + α_kl ∑_q R_lijq F_k(E_q) ≡ b_ij + α_kl F_klij. The second line re-expresses the convolution of the source spectrum with R as a fiducial “template” F for the observed counts. The fiducial template is scaled by the source amplitude α_kl to produce the final counts prediction. The observed counts n_ij follow a Poisson distribution, so an estimator for the source amplitude α̂_kl can be obtained by maximizing the log likelihood logℒ(α_kl) = ∑_i ∑_j [ n_ij log( b_ij + α_kl F_klij) - b_ij - α_kl F_klij ]. Source significance can be estimated with the log likelihood ratio test statistic TS ≡ 2×[logℒ(α̂_kl) - logℒ(0)]. <cit.> shows that TS asymptotically follows a χ^2_1 distribution. By appeal to the Cramér-Rao bound, this is also the most sensitive detection statistic. For detection, we must scan over possible source positions and spectral shapes, obtaining sets of α_kl and TS(α_kl) for each time interval of interest. These evaluations are expensive, and a fast method is critical for real-time applications. We can focus on transients that are near-threshold, because bright transients will produce obvious signals. 
Thus, we write logℒ(α_kl) = ∑_i ∑_j [ n_ij log( 1 + α_kl F_klij/b_ij) - b_ij - α_kl F_klij ] + const. ≡ ∑_i ∑_j [ n_ij log( 1 + α_kl t_klij) ] - B - α_kl F_kl, where we have defined t as the ratio of source to background counts and have defined the data-integrated quantities B and F_kl. Taylor expanding the logarithm yields logℒ(α_kl) = ∑_i ∑_j [ n_ij ( α_kl t_klij - 1/2 (α_kl t_klij)^2 + 1/3 (α_kl t_klij)^3 + …) ] - B - α_kl F_kl ≡ α_kl ⟨ NT_kl⟩ - α_kl^2/2 ⟨ NT_kl^2⟩ + α_kl^3/3 ⟨ NT_kl^3⟩ + … - B - α_kl F_kl, where quantities like ⟨ NT⟩ are moments of the data (counts) weighted by t. Differentiating and discarding remaining terms 𝒪(α^2) and higher yields the first-order estimator α̂_1kl = (⟨ NT_kl⟩ - F_kl)/⟨ NT_kl^2 ⟩, with F_kl now denoting the sum of all predicted counts. Inserting this estimator into the likelihood ratio test statistic and discarding terms 𝒪(α^3) yields the simple first-order estimator TS_1(α̂_1kl) = α̂_1kl^2 ⟨ NT_kl^2 ⟩ = ( ⟨ NT_kl⟩ - F_kl)^2/⟨ NT_kl^2 ⟩. As we show below, it is helpful to consider a higher-order estimator formed from α̂_1kl and the 𝒪(α^3) terms in Eq. <ref>: TS_2(α̂_1kl) = TS_1(α̂_1kl) + 2/3 α̂_1kl^3 ⟨ NT_kl^3⟩. This expression neglects higher-order corrections to α̂_1kl, because (1) doing so requires choosing the correct solution to a quadratic equation and (2) the larger error is in the evaluation of TS, so this simple correction is effective. These expressions are summed over observed energy, but it is also useful to consider the estimators for each energy channel. Finally, we note that these expressions can be used to iteratively obtain the full maximum likelihood estimator α̂_kl and hence the exact TS, which we denote TS_e(α̂_kl). Thus, a real-time, maximum-likelihood-based technique for transient detection (and localization) is as follows: (1) group events into time bins, e.g., 64 ms, 2048 ms, …; (2) evaluate the approximate TS (TS_1 or TS_2) over a set of pixels corresponding to possible transient positions (unocculted space for GRBs, towards the nadir for terrestrial γ-ray flashes, etc.); (3) assess the significance of any large values to evaluate trigger criteria. We discuss the practical and efficient implementation of this approach further in <ref>. However, we first illustrate general properties of the ML estimator on somewhat idealized data and compare it to existing real-time detection schemes. § CALIBRATION AND SENSITIVITY In order to determine the sensitivity of the likelihood method, it is first necessary to determine the distribution of the likelihood test statistic in the absence of a signal, and thus a false alarm rate for any non-stationary signal (GRBs or other transients). For a given false alarm rate, the threshold can be mapped to a sensitivity in terms of transient flux. We carry out this procedure with simulations. §.§ Simulation Setup Archival GBM data, with their high time resolution, are ideal for validating burst-detection algorithms. To evaluate performance with known signal and background characteristics, we here simulate “GBM-like” data. First, from archival data we estimate the typical background b_ai for each NaI detector in 8 energy channels (Table <ref>). The variation in rates over detectors is modest except at the highest energies. <cit.> reported a set of three representative “Comptonized power law” models, F(E) = N_0 (E/100 keV)^α exp[-(2+α)E/E_p], with parameters chosen to represent an unusually soft-spectrum GRB (α=-1.95, E_p=50 keV), a typical GRB (α=-1.15, E_p=350 keV), and an unusually hard-spectrum GRB (α=-0.25, E_p=1000 keV). 
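For concreteness, the following Python sketch evaluates these Comptonized power-law templates (the functional form and parameter values are those quoted above; the normalization to a 50–300 keV flux of 1 ph cm^-2 s^-1 anticipates the convention used for the simulated spectra below, and the function and variable names are ours, not from the reference implementation):

import numpy as np

def comptonized_power_law(E_keV, alpha, E_p_keV, N0=1.0):
    """Comptonized power-law photon spectrum, F(E) = N0 (E/100 keV)^alpha exp[-(2+alpha) E/E_p],
    in ph keV^-1 s^-1 cm^-2 up to the normalization N0."""
    return N0 * (E_keV / 100.0) ** alpha * np.exp(-(2.0 + alpha) * E_keV / E_p_keV)

# The three G20-style detection templates quoted in the text.
templates = {
    "soft":   dict(alpha=-1.95, E_p_keV=50.0),
    "normal": dict(alpha=-1.15, E_p_keV=350.0),
    "hard":   dict(alpha=-0.25, E_p_keV=1000.0),
}

E_grid = np.linspace(50.0, 300.0, 2001)  # keV
for name, pars in templates.items():
    unnorm_flux = np.trapz(comptonized_power_law(E_grid, **pars), E_grid)
    N0 = 1.0 / unnorm_flux  # normalization for a 50-300 keV flux of 1 ph cm^-2 s^-1
    print(f"{name}: N0 = {N0:.3e} ph keV^-1 s^-1 cm^-2")

Folding these spectra through a response matrix (a sum over incident-energy channels, as in the definition of F_klij above) then yields the count-space templates used by the detection statistics.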
We adopt these as our detection templates. However, to simulate a broader population, we have devised 12 “basis” spectra that do not overlap the G20 templates and cover a range around them (see Figure <ref>.) They are concentrated most densely around the “normal” model, and by choosing randomly between them, we can approximate drawing randomly from the population of GRBs. (The softest and hardest simulated spectra are even more extreme than G20 models, so in the simulations below we reduce the weight of these two templates by 50%.) The normalizations N_0 are chosen to give a 50–300 keV flux of 1 ph cm^-2 s^-1. We fold these basis spectral models through the GBM RM, obtained from the files distributed with [https://fermi.gsfc.nasa.gov/ssc/data/analysis/rmfit/]. For consistency with the rest of our software, we resample the provided 272 spatial pixels to a 482-pixel icosahedral tessellation of the sky. The results predict the counts in 8 output channels (see Table <ref> for boundaries) for each possible incidence direction. To simulate a GRB—the alternative hypothesis—we choose one of the 12 spectral basis models and 482 incident directions at random, add the source rate to the background, scale the predicted rates by the duration δ t, and draw Poisson random variables with the resulting mean, producing an array of × (12×8) photon counts. (We use the 12 NaI detectors but neglect the 2 BGO detectors.) The null hypothesis is obtained in the same way but with zero source contribution. §.§ Calibrating the Test Statistic For each simulated data set, we evaluate the TS for the three G20 templates and for the 482 incident directions. (We reiterate that the simulation and detection templates are different.) We first examined the null hypothesis and found that the distribution of TS for a single template/pixel follows the expected asymptotic distribution (χ^2_1) almost perfectly, even for short time windows (δ t=64 ms) with typically only a few counts in each channel. However, the detection statistic is the maximum TS over pixels and templates, for which the distribution is not known. Furthermore, the sample values of TS are not independent because of finite angular resolution and because the GRB templates are not orthogonal. (The extent to which the pixels over- or undersample the angular resolution depends on the brightness of a given burst. Thus the distribution of the TS in the alternative hypothesis will in general be even more complicated.) All of this means that the significance of any apparently-large TS must be calibrated with simulations. Fortunately, because the null distribution of TS does not depend on the time window δ t, we can calibrate TS universally simply by simulating many realization of the null distribution (α→0) and calculating the maximum TS over the pixels and templates, in this specific case, 3×482=1446 values. The results for many simulations are shown in Figure <ref>. Because the exact TS is expensive to evaluate, we have carried out extensive simulations (N=10^7) only for TS_1 and TS_2. We further calculate σ_2, a detection statistic similar to the one operating on GBM. Of the 12 detectors, it is the second-highest excess rate expressed in “sigma” units: (c-b)/√(b), with c the observed counts and b the expected background. The counts are chosen in three coarse channels (see Table <ref>) that approximately match the bounds used for the on-board GBM trigger, and the the maximum excess from these three coarse channels is σ_2. 
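A minimal sketch of this rate-based comparison statistic follows (an illustrative reimplementation in Python, not the GBM flight code; the ordering of the "second-highest detector" and "best coarse channel" operations is our reading of the description above, chosen to mirror the GBM requirement that two detectors exceed threshold in the same energy range):

import numpy as np

def sigma2_statistic(counts, bkg):
    """In each coarse energy channel, compute the per-detector excess significance
    (c - b)/sqrt(b), take the second-highest detector in that channel, and return
    the maximum over channels ("sigma_2")."""
    counts = np.asarray(counts, dtype=float)   # shape (n_det, n_coarse_channels)
    bkg = np.asarray(bkg, dtype=float)
    excess_sigma = (counts - bkg) / np.sqrt(bkg)
    second_highest = np.sort(excess_sigma, axis=0)[-2, :]   # per channel
    return second_highest.max()

# Toy example: 12 detectors, 3 coarse channels, 100 expected background counts each,
# with an injected excess in three detectors in the middle channel.
rng = np.random.default_rng(0)
bkg = np.full((12, 3), 100.0)
counts = rng.poisson(bkg).astype(float)
counts[[0, 3, 7], 1] += 40.0
print(f"sigma_2 = {sigma2_statistic(counts, bkg):.2f}")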
It is clear (Figure <ref>) that the approximate estimators TS_1 and TS_2 undershoot the exact TS_e. For the purposes of detection, the TS estimator needs only be accurate up to a typical threshold of ∼30. To gauge the differences more accurately, we simulate bursts with a small, real signal, in order to shift the distribution to higher values of TS. The results for δ t=64 ms and 2048 ms are shown in Figure <ref>. Both approximate and exact methods work for the longer window, but in the shorter window, where there are fewer photons, the linear approximation TS_1 is lower than the exact value by up to 30%. In this case, one must either use the better approximation TS_2 or use the faster TS_1 but with lower thresholds for narrower time windows. In summary, the approximate TS estimators work well for detection. To facilitate comparison of results, we henceforth use the TS_2 statistic. Thus, from the simulations yielding Figure <ref>, we can easily estimate trigger thresholds with a known false positive rate. Given the 10^7 simulations, we choose a chance false probability of 10^-6, corresponding to a value of TS_2=29.6 and σ_2=4.7 and a false positive rate of about 1 per day assuming a minimum δ t of 64 ms. In general, these trigger thresholds could be sharpened somewhat with operational constraints, e.g. discarding the TS from pixels where the sky is occulted by the earth. We also note that these simulations yield a trigger threshold very close to the on-board GBM trigger (σ_2=4.8). Finally, we note here that these simulations demonstrate the capability of the algorithm to test the null hypothesis, i.e. if the data consistent with a slowly-varying background. It will in general then detect a wide variety of transient signals, including both pulses of charged particles and bona fide GRBs. Classifying the sources of transients is a post-detection step, which is not the focus of this work. However, in <ref> and <ref>, we show that the TS itself can be used for preliminary classification. §.§ Transient Sensitivity We can now determine the detection threshold by simulating many realizations of the alternative hypothesis over a range of fluxes to estimate the fraction yielding a value of TS_2 (σ_2) above the trigger threshold. We adopt two windows, δ t=64 ms and 1024 ms, which are of interest for detection of short GRBs and which facilitate comparison with the catalog of GBM GRBs <cit.>. The results of these simulations are shown in Figure <ref>. If we take 50% completeness for a detection threshold, we find values for 64ms of 1.8 ph cm^-2 s^-1 for TS_2 and 3.3 ph cm^-2 s^-1 for σ_2. At 1024 ms, the thresholds are 0.42 and 0.83 ph cm^-2 s^-1, respectively. In summary, the ML algorithm is almost twice as sensitive as the rate-based trigger. Using the GBM catalog, an instrumental threshold can be estimated from an observed turnover in the 64-ms GRB fluxes at ∼3 ph cm^-2 s^-1, in good agreement with our estimate of the σ_2 threshold. However, we note that the real instrument samples a much wider range of background rates and, while its sensitivity peaks in the equatorial plane (instrumental coordinates), this plane generally intersects the earth because the rocking profile exposes less sensitive aspects of the instrument to the sky. We have not attempted to capture such details. Further, these simple simulations assume a top- hat shape burst that is perfectly aligned with the time window used to extract data, and so in general realistic thresholds will be higher. 
However, these simplifications are common to both algorithms, so we conclude that the use of ML methods could substantially improve the onboard trigger performance of an instrument like GBM. §.§ Real-time Localization The localization power of an instrument like GBM or Glowbug depends on the relative brightness of the transient and how much contrast in count rate between detector elements the instrument design provides. The 482 pixels used for the simulations and for the reference implementation below provide a resolution of about 9^∘, which is adequate for estimating the position of transients near threshold but too coarse for precise localization. Here, we assess the reliability of coarse positions that result from the detection algorithm and that might be used in low-latency alerts. We defer discussion of refined, post-detection localization to <ref>. If the templates used for detection exactly match the detected burst spectrum, then the maximum-TS pixel should correspond to the true (simulated) direction up to the limits of statistical precision. However, our design mimics real-world applications, with a mismatch between the true spectrum and the templates, and so the inferred positions will be biased. We determined the size of the bias by simulating 10,000 bright bursts with a 50–300 keV flux of 10 ph cm^-2 s^-1 and a duration of 1.024 s, producing a typical TS_e of 7000. (NB that for localization, it is critical to use the exact TS.) For each simulation, there are three position estimates, i.e., the pixels that maximize the likelihood for each G20 template. In general, these differ, with only 6.4% of the bursts yielding the same coarse position for all three templates. However, by using the pixel/template combination that maximizes the total TS, we recovered the correct position 97% of the time. Of the remaining 3% of bursts with incorrect positions (a substantial systematic error of ≥9^∘), more than 90% were from bursts simulated with the template that is mid-way between the normal and the hard G20 templates. From Figure <ref>, it is apparent that the “distance” between these templates is larger than between the normal and soft templates. Thus, it is clear that the problem of localization cannot be separated from the question of the true spectrum, and using the wrong template can generate tremendous (>10^∘) errors. On the other hand, even with just three templates, the coarse positions from the detection algorithm are correct almost all the time, suggesting the prescription can be used “out of the box” for GRB direction finding and for seeding of follow-up localization. Near-threshold GRBs will in general be poorly localized, but it is also of interest whether the real-time localization of such faint GRBs is accurate. Using the same approach, but adjusting the source rate to produce a mean TS of about 40, we determined the distribution of the TS difference between the TS-maximizing pixel and the pixel of the true position. We found that δTS<5.99 about 92% of the time, in reasonable agreement with the expectation of 95%. Thus, probability maps for faint-GRB localization should be reliable. § REFERENCE IMPLEMENTATION Here, we present a reference implementation (RI) of the ML approach developed in <ref>, whose idealized scientific performance was estimated in <ref>. The RI is designed to work as a real-time transient detection algorithm on γ-ray sensors and to allow real-world computational and scientific performance evaluations. 
Parameters governing some aspects of the RI are listed in Table <ref>; they are fully customizable, but the table gives the specific values used in benchmarking. The base implementation is in C, and a parallel implementation in Python allows rapid prototyping and testing of features. The C implementation—with additional instrument-specific data handling, command and control, and housekeeping layers—also provides the burst detection algorithm for the Glowbug flight software. §.§ Data Synchronization and Aggregation For the RI, we abstract away the frontend electronics and assume as input a stream of individual events comprising a detector element index, an energy channel index, and an absolute timestamp. For our RI with 12 detectors, these take the form of a 4-byte word that provides a 1-ms resolution timestamp and an 8-channel energy resolution. We further assume that pseudo-events triggered by clock pulses (PPS, or pulse-per-second) are provided in the datastream from the onboard clock. We make no assumptions about raw data ordering, so the first step is to synchronize the data over detector elements and bin them into uniform time bins. Data are discarded until the first PPS is received, which determines the epochs. Data are stored in a circular buffer with a resolution of 32 ms, so a bitshift efficiently converts epoch-subtracted timestamps into buffer indices. Data are synchronized by tracking the detector whose last delivered event is the oldest, t_synch,n. When that detector next delivers data, a new oldest detector is identified, whose latest event is t_synch,n+1. By construction, all detectors have delivered at least one event after t_synch,n+1, so all bins with t_synch,n ≤ t < t_synch,n+1 are synchronized and the buffer indicates these samples are available to downstream processing. The output of this procedure is a uniform time series of counts spectra similar to the data product of GBM. It is necessary to consider transients with a wide range of durations, and we adopt the same procedure used onboard GBM, aggregating the data into hierarchical streams that differ in time resolution by 2×. Each stream has two phases. Thus, when each new (32 ms) sample arrives, it is combined with the preceding sample to update one phase of the 64 ms resolution stream. It is combined with the next sample to arrive to update the second phase. These two phases in turn feed the 128 ms stream, which updates every 64 ms, etc. The efficient aggregation means that the longest timescales searched for transients are set not by computational constraints but by movement of the spacecraft and by confusion with the time-variable background. The RI is tickless: whenever new events arrive, they are written into the synchronizing buffer, which will produce 0, 1, or more synchronized samples depending on the specifics of the detector readout scheme. As soon as any samples are available, they are fed into the hierarchical summing and the transient search is run on any updated phases. (So in the RI, the 64 ms timescale search runs every 32 ms, etc.) Tasks with an inherent timescale are tied to particular streams. E.g., the background model updates every τ_b=1.024 s by tapping one phase of the 1.024 s stream. We use a 32-bit floating point format for all data, including counts, in the RI. Doing so simplifies data structures, and the memory requirements for the buffer are quite modest, <1 MiB per minute of data. 
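The following Python sketch illustrates the two-phase hierarchical averaging described above (a stream of duration 2^(s+1) base samples is re-evaluated every 2^s base samples, so each timescale is searched with 50% overlap). It is illustrative only: the RI's C implementation performs the same bookkeeping incrementally with circular buffers, and the class and method names here are invented for the example.

from collections import deque

class HierarchicalStreams:
    """Aggregate base-resolution count samples (e.g. 32 ms spectra) into
    hierarchical streams of 2x, 4x, ... longer duration, each with two phases."""

    def __init__(self, n_streams=7):
        self.n_streams = n_streams                 # e.g. 64 ms ... 4096 ms for a 32 ms base
        self.history = deque(maxlen=2 ** n_streams)  # recent base samples (longest window)
        self.n_seen = 0

    def add_base_sample(self, sample):
        """Feed one base sample; return {stream_index: summed_counts} for every
        stream whose phase completed on this update."""
        self.history.append(sample)
        self.n_seen += 1
        updates = {}
        recent = list(self.history)
        for s in range(self.n_streams):
            window = 2 ** (s + 1)                  # stream duration in base samples
            step = 2 ** s                          # update stride (two phases per window)
            if self.n_seen >= window and self.n_seen % step == 0:
                # explicit sum for clarity; the RI accumulates this incrementally
                updates[s] = sum(recent[-window:])
        return updates

With a 32 ms base sample, stream 0 (64 ms) updates on every input sample, while stream 6 (4096 ms) updates every 2048 ms, matching the cadence quoted for the longest timescale in the triggering discussion below.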
Finally, we briefly consider the computational requirements for the synchronizing buffer, which is the only component of the RI that depends directly on the event rate. In general, the operations will depend on the specific data format. In the RI, there are about 10 bitwise operations used to extract and test portions of the 4-byte word, and three integer arithmetic operations (including a modulo) to determine the buffer index to increment. For a typical scintillator, the maximum event rate is set by deadtime and is likely to be <10^5 Hz. For ten such detector elements operating at the maximum rate—an extraordinary circumstance—processing the event data will require 10^7 s^-1 integer operations. This rate is less than the ≈2×10^7 s^-1 floating point operations required for transient searches (<ref> and <ref>). §.§ Background Estimation In <ref> we assumed perfect knowledge of the background, but in application it must be estimated. The primary challenge is the orbital variation in both the incident particle background and the internal background from activation, especially from passages through the South Atlantic Anomaly (SAA), a region with a high flux of energetic particles. The variations further depend on the orbital precession phase, the solar cycle, and solar flaring activity. Ideally, background variations could be predicted with parametric models <cit.> or from archival data from earlier orbits. However, with high background count rates, even small errors in the background model produce spurious or missed transient detections. Thus, for the RI we estimate the background with a simple, robust moving average. Specifically, we collect N_b (120) samples of data from the τ_b stream (1.024 s) and from these estimate the mean and slope of the count rate for each detector and channel. To avoid overlap of the background samples with those being searched for transients, we project the background estimate forward in time, typically about 4 s. This linear model can only capture some of the true background variations, so we also attempt to determine whether the estimate is consistent with the data. Of particular concern are (1) intervals when the instrument is entering or exiting the SAA, where the background rate may change nonlinearly over the estimation window, and (2) rapid “spikes” in background due to incident particles, e.g., particle precipitation events. To mitigate the first issue, we invalidate the background estimate if the magnitude of the observed slope exceeds a maximum per-channel value. To mitigate spikes, we observe the residuals of the samples in the background window. If the model is adequate, then these residuals should follow an approximately normal distribution with variance equal to the typical count rate. We test for normality using a kurtosis-only version of the D'Agostino test <cit.>, and if there is substantial unmodeled rate variation (exceeding the D'Agostino test threshold), the background model is invalidated. It is often too restrictive to require linear background variation over the full window of ≈123 s, so the RI additionally implements nested models operating on one-half and one-quarter of the samples. If the longest window is invalid, the shorter windows are considered, with larger thresholds for the time derivative and the D'Agostino test. Whenever the background model is in steady state, the preferred, longest window is used. If a (short) rate excursion occurs, all three windows will fail the check, but within a quarter-window of samples the spike will leave the shortest window, which will furnish a new background estimate. 
A quarter-window of samples later, the next longest window (one-half of the full window) will be adopted, and finally the spike will exit the full window and steady state is restored. This approach preserves 75% of the exposure compared to a single background window. In general, most parameters in the RI are adjustable but have been tuned for the orbit and altitude of GBM. We show examples of the background estimator in operation in <ref>. We expect that this approach will also be suitable for orbits of instruments such as Glowbug (inclination 52^∘) with suitable parameter updates, but it may require additional features to capture and mitigate background variations in the auroral zones. Because the computational cost of the background model update is small, it is negligible compared to the evaluation of TS. §.§ TS Computation Whenever a new sample becomes available in one of the time-averaged streams, we search for a transient by evaluating TS in a four-dimensional loop over templates (index k), spatial directions (l), energy channels (i), and detector elements (j). From Equations <ref> and <ref>, the innermost loops are an evaluation of an inner product between the observed counts and the predicted signal-to-background ratio: NT_kl = ∑_i ∑_j c_ij F_klij/b_ij ≡ ∑_i ∑_j c_ij t_klij, NT^2_kl = ∑_i ∑_j (c_ij t_klij) t_klij, NT^3_kl = ∑_i ∑_j (c_ij t^2_klij) t_klij. In the innermost loop, each moment requires a single multiply and add, and so from Equation <ref> it is apparent that evaluating TS_1 requires two multiply–add pairs per detector–channel element (one each for NT and NT^2), plus a handful of additional operations to form the estimator itself. From Equation <ref>, evaluating TS_2 requires an additional multiply–add pair per element for NT^3 and a few extra multiplies. Some architectures are likely to support the sequential multiplies, adds, and assignments via fused operators. Up to negligible factors, TS_2 requires about 50% more multiplies and adds than TS_1. In the RI, the maximum TS for each template is recorded, requiring a floating point comparison per evaluation, likely also to be negligible except on hardware where comparisons and jumps are anomalously slow. It is convenient to scale these computational requirements per input sample. In the RI there are 7 streams (64 ms, 128 ms, …, 4096 ms), so the amortized computation per input sample is ∑_i=1^7 2^1-i ≈ 1.98× that required to evaluate the 64 ms stream. Thus, for evaluation of TS_1, each (32 ms) input sample incurs (amortized) about 3.97 multiply–add pairs per detector–channel element per pixel and template, a small number of divides, and the multiplies required for the background update. The most substantial ancillary computation is the pre-computation of t_klij, requiring one operation per element of t every τ_b s. When the background model is “bad”, t is set to 0, producing 0 for all evaluations of TS. In addition to the standard, template-based TS computation, we have determined that a per-channel TS can help to identify transients due to spikes in the particle background. The most useful of these are the lowest channel, appropriate for soft electrons, and the highest channel, typically for rapidly rising proton rates on entering the SAA. The per-channel TS computation is very similar to that outlined above, but requires additional overhead in the “channel” loop that is surprisingly expensive. In the RI, we compute the per-channel TS for all channels, but suggest it should be tuned to the application. We have further chosen a memory layout that simplifies programming logic (there is always a sum over detector elements in the innermost loop), but swapping the innermost two dimensions may be favorable on some architectures. 
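To make the inner loop concrete, the following vectorized Python/NumPy sketch evaluates the approximate estimators from the moments defined above (the RI itself uses explicit C loops; the array shapes, variable names, and broadcasting over pixels and templates are ours):

import numpy as np

def fast_ts(counts, bkg, templates):
    """Approximate ML detection statistics from weighted data moments.

    counts:    observed counts, shape (n_det, n_chan)
    bkg:       expected background counts in the same window, shape (n_det, n_chan)
    templates: predicted source counts F_klij for unit amplitude,
               shape (n_template, n_pix, n_det, n_chan)

    Returns (TS1, TS2, alpha1), each of shape (n_template, n_pix).
    """
    t = templates / bkg                        # signal-to-background ratio t_klij
    F = templates.sum(axis=(2, 3))             # total predicted counts per template/pixel
    nt1 = np.einsum('ij,klij->kl', counts, t)                   # NT
    nt2 = np.einsum('ij,klij,klij->kl', counts, t, t)           # NT^2
    nt3 = np.einsum('ij,klij,klij,klij->kl', counts, t, t, t)   # NT^3
    alpha1 = (nt1 - F) / nt2                   # first-order amplitude estimator
    ts1 = alpha1**2 * nt2                      # = (NT - F)^2 / NT^2
    ts2 = ts1 + (2.0 / 3.0) * alpha1**3 * nt3  # higher-order correction
    return ts1, ts2, alpha1

# Toy usage with the RI-like dimensions quoted in the text (3 templates, 482 pixels,
# 12 detectors, 8 channels); the numerical values are random placeholders.
rng = np.random.default_rng(1)
bkg = rng.uniform(5.0, 20.0, size=(12, 8))
counts = rng.poisson(bkg).astype(float)
templates = rng.uniform(0.1, 1.0, size=(3, 482, 12, 8))
ts1, ts2, _ = fast_ts(counts, bkg, templates)
print(ts1.max(), ts2.max())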
§.§ Triggering In general, any TS value—from any stream—that surpasses a pre-defined threshold value could be considered a trigger. However, it may be advantageous to prefer longer time windows, which generally improve statistical precision, have reduced trials factors, and could provide better seed localizations for follow-up analysis. Further, the use of per-channel TS can help to reject background variations, and we have found that this is more effective with the improved precision of longer time windows. Thus we distinguish a local trigger—any time the TS surpasses the threshold value—from a global trigger, which is evaluated only on the longest averaging timescale and which can change the instrument mode to a triggered state. We assess the presence of a global trigger as follows. For each update of the longest timescale (4096 ms, every 2048 ms), we check for local triggers in all of the shorter windows that overlap the global window and select the one with the highest TS. E.g. in the event of a very short GRB (say 160 ms), then the 128 ms window should be selected, while a long GRB (say 20 s) should produce the highest TS in the 4096 ms window. We select the optimal window for both the template TS and for the per-channel TS from the lowest- and highest-energy channels. If these per-channel TS values are comparable or greater than the template TS, it indicates the transient consists primarily of very soft or very hard particles and is thus likely associated with charged particle background. In the RI, a global trigger is initiated when the template TS exceeds both the soft and hard-channel TS by a factor of 1.3. §.§ GBM Data Playback We use GBM data both for the benchmarks reported below and in <ref> to test the scientific performance of the transient search algorithm. To do this, we break the archival data set up into 1-day intervals, and further divide this data set into “orbits”, which are determined either when GBM enters the SAA or when 90 minutes have elapsed. For each day, we begin with the archival format data and take the intersection of the Good Time Intervals for each detector. We re-channelize the data from the original 256 to approximately match the 8 desired channel boundaries. We insert PPS events into the data stream at each 1-s boundary, then group data into packets of up to 250 events, converting each event timestamp, channel, and PPS flag into the 4-word format used by the RI (and by Glowbug). Finally, we order the packets according to the timestamp of the last event in them, thus emulating the staggered data delivery expected in real-time application. §.§ Benchmarking To compare with the floating point complexity estimates made above, and to provide realistic performance expectations, we benchmark the RI on several distinct platforms. Specifically, we play back the first 5400 s of GBM data acquired on 9 Feb 2014, which has an average summed event rate (within the energy range) of 7.3 kHz. For each platform, we consider the following scenarios: * The RI is run with the TS computation disabled. This allows estimation of overhead and the event processing rate, further isolated with the use of . * The RI with template-based TS_1 computation enabled. * The RI with template-based TS_2 computation enabled. * The RI with template and channel-based TS_1 computation enabled. * The RI with template and channel-based TS_2 computation enabled. Together, these benchmarks provide a full indication of the computational requirements for real-time use on an arbitrary platform. 
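As a rough aid to interpreting these benchmark scenarios, the expected floating-point cost of the inner loop can be estimated directly from the loop dimensions quoted earlier (a back-of-the-envelope Python sketch, not a measurement: the per-moment cost of one multiply and one add per detector–channel element and the ≈1.98 amortization factor come from the TS Computation subsection, and the sample count corresponds to the 5400 s playback described below):

def ts1_inner_loop_madds(n_samples, n_templates=3, n_pix=482, n_det=12, n_chan=8,
                         n_moments=2, amortization=1.98):
    """Multiply-add count for the TS_1 inner loop: one multiply and one add per
    detector-channel element for each moment, summed over pixels and templates,
    amortized over the 7 hierarchical streams. Use n_moments=3 for TS_2."""
    per_64ms_evaluation = n_templates * n_pix * n_det * n_chan * n_moments
    return per_64ms_evaluation * amortization * n_samples

n_samples = 168718               # 32 ms samples in the 5400 s benchmark run (see below)
madds = ts1_inner_loop_madds(n_samples)
print(f"~{madds:.2e} each of adds and multiplies")
print(f"~{2 * madds / 13e9:.1f} s at 13 GFLOP/s on a single core")

The printed values reproduce, to the stated precision, the inner-loop operation count and single-core runtime estimate discussed for the Xeon below.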
For each hardware platform, we adopt the same compiler command. Further, we use the "linpack" benchmark to estimate a characteristic single-precision, single-threaded floating point performance (Table <ref>). There are exactly 168718 32 ms samples processed in the benchmark run, and so from Table <ref>, we expect the TS_1 calculation to require, in the inner loop, 9.3×10^10 each of adds and multiplies. For the Xeon, with an estimated single-precision floating point rate of 13 GFLOPS, the estimated required time is 14.3 s, about 50% higher than the observed value, indicating some effectiveness of SIMD vectorization of loops. We expect that an implementation with hard-coded loop invariants (the numbers of channels, detectors, and templates, e.g., could all be fixed in a flight system) would produce more aggressively optimized code, as would a hand-written assembly implementation of the inner loop. However, the code is already very fast on a Xeon, requiring <1% processor utilization on a single core to process the data in real-time equivalent. The Cortex A72 on the Raspberry Pi 4 is a much less performant processor, running at half the clock speed and with much smaller pools of cache and functional units. As it is also the flight processor for Glowbug, the performance on this platform is critical. It completes the benchmark run about 5× slower than the Xeon, meaning it is capable of real-time data processing with <3% processor utilization. Finally, we consider a processor designed specifically for space applications, the radiation-hardened Xiphos Q8. Its system-on-a-chip includes a 4-core Cortex A53 as well as an FPGA fabric, though we consider only evaluation of the CPU. We built a custom image with czmq support and cross-compiled the RI. Evaluation of the full suite at real-time requires <6% utilization of a single core, and the scaling is in good agreement with the relative difference in floating point performance compared to the A72. In summary, the RI implementation supports real-time ML-based burst detection on processors likely to be selected for a deployment in space, both at low- and high-cost levels. § REAL-WORLD DETECTION PERFORMANCE Here we show two examples of applying the RI implementation to archival GBM data. The first, in Figure <ref>, shows 1000 ks of data containing a pulse of incident electrons on top of an otherwise slow evolution of the background rates. This example illustrates the computation of TS. Several local triggers are generated in the faster streams because the TS is a random variable and suffers more noise in short time windows. On the other hand, in the longer windows, the template TS is always lower than the soft-channel TS. These local triggers are vetoed and indicated by the unfilled red stars. Extensive testing has shown this prescription is an effective means of identifying such particle transients, even excluding soft events to mimic an instrument with a higher threshold (30–50 keV) compared to GBM. (The ∼10 keV threshold for GBM makes electron events particularly obvious.) Following the arrival of the first bright electron pulse, the background model begins to fail the self-consistency check and is flagged as invalid, which automatically sets TS to 0. Following the initial pulse, the background model is re-established. The fainter, slower electron pulses only generate large values of TS in the slow streams and are always vetoed. The second example shows the ML algorithm applied to GRB 170817A. It generates a global trigger (filled star) with a maximum TS_2 of 118 in the 256 ms window.
<cit.> note that the GRB generated significant detections in three detectors, and that the strongest signal was achieved with the 512 ms filter, with the second-highest significance, σ_2, reaching 6.3σ. Consequently, GBM would have detected GRB 170817A onboard if it were up to 40% fainter. On the other hand, with a reasonable threshold of TS=30, the ML approach would have allowed GBM to detect GRB 170817A if it were 4.3× fainter! Alternatively, it could have detected it at twice the distance, increasing the rate of such GRBs by about a factor of 8. This claim is somewhat conservative because the RI only uses data >30 keV. § PRECISE LOCALIZATION As discussed and validated above, ML-based detection automatically provides coarse localizations. In principle, finer real-time localizations could be obtained simply by searching over more directions, but degree-scale precision is likely too expensive for low-power processors. Instead, these coarse localizations are appropriate as seeds for a follow-up analysis that produces a refined position estimate. The RI does not have such a follow-up capability, but the additional computational complexity is small. A straightforward approach is to generate a degree-scale "TS map" by evaluating the TS over a grid that covers the full coarse pixel from the seed location. Such a map is sufficient to generate a centroid and confidence region and requires only ∼100 additional TS evaluations, a negligible increase over those performed by the detection algorithm. Here, we test how well this procedure might work—and the reliability of ML localizations generally. It is well-demonstrated that GBM localizations—which use ML—suffer from a systematic error of a few degrees <cit.>. Such localizations rely on two key ingredients: the "true" spectrum of the burst, and the "true" instrument RM for that burst. The GBM RM is based on Monte Carlo particle transport simulations <cit.> of the Fermi Gamma-ray Space Telescope sampled over 272 incident directions. If the mass model used in the simulations is incomplete or inaccurate, then the RM will also suffer inaccuracies. On the other hand, <cit.> note the importance of an accurate spectral template and claim to reduce systematic errors by using more degrees of freedom in the spectral model. We consider a related possibility: the effect of the number of pixels used in the representation of the RM. To do this, we used <cit.> to create a detailed mass model for Glowbug (instrument only), performed Monte Carlo simulations of an incident broadband γ-ray spectrum on Glowbug with the underlying <cit.> radiation transport package, and evaluated the RM at three spatial resolutions, using 192, 642, and 2562 incident directions. We do not consider atmospheric scattering, which is an important consideration in GRB localization but which essentially only changes the response matrix. We then simulated 2000 realizations of data as in <ref> with the following differences: (1) we used the 2562-pixel Glowbug RM instead of the GBM RM; (2) we used only the "normal" G20 template (but continue to select a random incident pixel) with a 50–300 keV flux of 3 ph cm^-2 s^-1; (3) we simulated 1.024 s time windows; (4) we restricted the incidence polar angle to cosθ>0, i.e., the portions of the sky with the best Glowbug sensitivity. These parameters yield a fairly bright burst relative to the background that can typically be localized with a precision of a few degrees.
For each simulation, we processed the data using the three spatial resolutions, specifically (1) determining the coarse pixel that maximized the TS according to the GRB template; (2) adaptively refining the localization by spherical linear interpolation of the instrument response to predict the counts on a finer grid; and (3) producing a final 15^∘×15^∘ TS map on a uniform (Cartesian projection) grid around the maximum TS position. Note that, unlike in <ref>, we use the same GRB spectral template for both the simulation and the TS evaluation in order to isolate the systematic effect of RM resolution. A good localization must deliver accurate estimates of both the position and its uncertainty, since the latter often governs whether or not it is possible to follow up transients with narrow-field instruments. The TS is a random field distributed as χ^2_2, so confidence intervals around the best-fit position can be estimated as, e.g., the contour at which the TS drops from the maximum value by 2.30 (68%) or 5.99 (95%). Alternatively, for cases where the likelihood surface is not particularly gaussian, the likelihood can be combined with a uniform prior and treated numerically to derive Bayesian credible intervals. Thus, we can assess the quality of the localization by analysis of the TS maps produced for the three resolution levels. The TS maps from the simulations can be grouped into several classes. An example from the most common class appears in Figure <ref>. The transient is well-localized at all three resolutions, and the uncertainty contours are effectively gaussian, as indicated by the agreement of the TS map contours with a quadratic fit to the surface. Such a result could be accurately encapsulated as a single position and an uncertainty ellipse. As the spatial resolution of the RM increases, the size of the error ellipse decreases. (The TS difference between the simulated position and the best-fit position also decreases.) As we will see, this is a general trend. The second example, Figure <ref>, shows a class where the smoothing in the low-pixel RM generates misleadingly gaussian TS maps while the high-pixel RM yields a TS map with a more complicated structure. In these cases, it is more suitable to describe the position with the full map. This more complex surface includes the true position in the high-probability region, whereas it lies in a low-probability region (outside the 95% contours) in the low-resolution versions. Finally, we consider a transient incident in the equatorial plane of Glowbug. Because the detectors form a half cube, such low-latitude bursts can be moved "up" and "down" without changing the predicted counts very much, and the uncertainty regions thus become narrow ellipses, as shown in Figure <ref>. Here, because the true count rate varies little over the long ellipse, the systematic differences between the RMs are magnified, and we see that the high-resolution RM yields a TS map with substantially more structure and that again better includes the true position. To synthesize these results, we consider two key questions: what is the relative size of the uncertainty regions, and how often is the true position of the transient within them? In Figure <ref>, we show the ratios of the sizes of R_68, the region expected to contain the true transient position 68% of the time, estimated directly from the TS map as the summed area of all pixels within δTS<2.30 of the peak; a minimal sketch of this containment-area estimate is given below.
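This is a sketch under the stated assumptions (a uniform pixel solid angle on the Cartesian-projected grid); the function and argument names are illustrative rather than taken from the RI.

```python
import numpy as np

def containment_area(ts_map, pixel_area_deg2, delta_ts=2.30):
    """Area of the confidence region estimated directly from a TS map.

    ts_map          : 2D array of TS values on the localization grid
    pixel_area_deg2 : solid angle of one grid pixel (assumed uniform here)
    delta_ts        : 2.30 for a 68% region or 5.99 for 95% (chi^2, 2 d.o.f.)
    """
    ts_max = ts_map.max()
    inside = ts_map >= (ts_max - delta_ts)   # pixels within delta_ts of the peak
    return inside.sum() * pixel_area_deg2
```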
From that figure, it is clear that the confidence regions are larger when using the low-resolution RMs: the median region is 17% larger for 192 pixels and 13% larger for 642 pixels. But the tails are heavy: e.g., 16% of the regions are 40% larger (192 pixels) and 33% larger (642 pixels), and the worst 5% of the regions are 61% larger and 54% larger, respectively. Next, we consider the reliability of the uncertainty regions, which is best assessed by considering the difference in TS between the best-fit and simulated positions, δTS. The probability of observing a transient with a given offset is then simply p=∫_0^δTSχ^2_2(x) dx. Then, if the TS distribution truly follows the χ^2_2 distribution (the uncertainty estimates are accurate), p should follow a uniform distribution. Figure <ref> shows the cumulative distribution of p, from which it is clear that the low-pixel RMs yield likelihood surfaces that underestimate the uncertainty, particularly for the 192-pixel version. All of these effects would be magnified when considering even more off-axis bursts with cosθ<0. In conclusion, we find that it is critical to use a high-resolution response matrix, with about an order of magnitude more pixels than are currently in use by, e.g., GBM analysis tools. Doing so both yields improved constraints on position and reduces systematic error stemming from inaccuracies in the TS maps used to infer positions. The magnitude of these errors is often several degrees in the case of these bright bursts, and thus it is plausible that this imprecision in the RM, along with potential inaccuracies in the Fermi mass model, contributes substantially to the observed systematic errors in GBM positions. § DISCUSSION We have developed maximum likelihood algorithms that are fast enough to run in real-time on low-power processors. Their trigger thresholds can be calibrated with a predictable false-positive rate, and for a given trigger threshold they deliver a detection threshold about half that of existing on-board triggers. In a test with archival GBM data (GRB 170817A), the sensitivity was improved by more than 4×. From an instrumental design standpoint, this is a substantial sensitivity boost. With thin scintillators, both the signal and background rates scale approximately linearly with effective area, A, so the signal-to-noise ratio for a given transient only scales as √(A). Because we have shown that the computational burden is modest and requires no unusual hardware, adopting ML algorithms allows an instrument to reach the same sensitivity as one about four times larger for "free". From a scientific standpoint, it is also a substantial gain. The transients of most interest for multimessenger astronomy are relatively nearby, and thus uniformly distributed through a detection volume such that log N–log S∝ S^-3/2. This degree of improvement in sensitivity yields a transient rate that is increased to 1.8^3/2 to 2.0^3/2 (240–280%) of the baseline rate. We note that with GBM data, this sensitivity improvement can be and has already been exploited in the "sub-threshold" search pipeline <cit.>, which is possible post facto because GBM can downlink every photon event with TDRSS. (In principle the CTIME data would be sufficient for ML detection, but the low spectral resolution would complicate analysis.) Access to such high-bandwidth telemetry is not guaranteed for future experiments, and in particular Starburst will not be able to send its full event data to the ground.
However, we have shown it is possible to realize ML techniques with on-board processing, ensuring that faint transients can be identified while only downlinking data of interest. Because the RI presented here only requires <10% of a single core of low-power processors, we can consider even more ambitious applications. Possibilities include searches for ms-scale transients, such as terrestrial γ-ray flashes <cit.>, or the use of a truly fine spatial grid to provide real-time near-degree-scale localizations. At a minimum, the use of an expanded library of GRB templates relative to the RI could reduce systematic errors in rapid localization. Such rapid, accurate localization would facilitate robotic follow-up observations of rapidly-fading afterglows. Our investigation into the robustness of precise burst localization revealed a surprising dependence on the spatial resolution of the instrument RM, and we suspect this effect contributes to the systematic uncertainty of GRB localizations with GBM. <cit.> find evidence for a two-component model in which about half of GRBs have a small uncertainty concentrated around ∼2^∘, and the other half occupy a long tail with a typical value of ∼4^∘. (Previous analyses of BATSE data <cit.> and GBM data <cit.> found similar distributions.) Both values are consistent with classes of systematic errors we observed in our analysis with low-resolution RMs, and we speculate that the two components may simply result from the proximity of the true GRB position to one of the RM sampling points. This idea could be tested with a higher-resolution RM for GBM. We used the Glowbug RM for our study because we had developed substantial machinery for creating mass models, performing Monte Carlo, and parsing the results into detector RMs. The GBM mass model is substantially more complicated (including the entire Fermi spacecraft), and creating a higher resolution RM is a substantial undertaking. However, archival and future GBM data are critical for multi-messenger astronomy, and obtaining more reliable localizations makes it a worthwhile endeavor. Finally, the relative ease of performing ML detection on-board also informs GRB experimental design. In particular, it increases the value of instruments with good "geometry factors", i.e., those that expose discrete detector elements to different portions of the sky, like GBM. Given the rapidly improving availability of affordable SmallSat spacecraft buses and the development of large-area (heavy) scintillator detectors, this medium-scale form factor may be the "sweet spot" for future networks of γ-ray sensors. This work is supported by the Office of Naval Research.
http://arxiv.org/abs/2307.02095v1
20230705081545
Laser-assisted deformed $α$ decay of the ground state even-even nuclei
[ "Jun-Hao Cheng", "Wen-Yu Zhang", "Qiong Xiao", "Jun-Gang Deng", "Tong-Pu Yu" ]
nucl-th
[ "nucl-th" ]
Department of Physics, National University of Defense Technology, 410073 Changsha, People's Republic of China Department of Physics, National University of Defense Technology, 410073 Changsha, People's Republic of China Department of Physics, National University of Defense Technology, 410073 Changsha, People's Republic of China College of Science, China Three Gorges University, 443002 Yichang, People's Republic of China [email protected] Department of Physics, National University of Defense Technology, 410073 Changsha, People's Republic of China In the present work, the influence of ultra-intense laser fields on the α decay half-life of deformed ground state even-even nuclei with proton number 52 ≤ Z ≤ 118 is systematically studied. The calculations show that the laser field changes the α decay half-life by varying the α decay penetration probability within a small range. Moreover, analytical formulas for the rate of change of the α decay penetration probability in ultra-intense laser fields have been derived using a spherical approximation; these agree well with the numerical solutions for nuclei with larger proton numbers. This provides a fast way to estimate the rate of change of the α decay penetration probability for superheavy nuclei. Furthermore, the relationship between the laser properties and the average rate of change of the α decay penetration probability is investigated. The calculations indicate that the shorter the wavelength of the laser pulse, the larger the average rate of change of the penetration probability. Laser-assisted deformed α decay of the ground state even-even nuclei Tong-Pu Yu August 1, 2023 ==================================================================== § INTRODUCTION During the past two decades, many decay modes and exotic nuclei have been discovered with the advent of radioactive ion beam facilities around the world, e.g., Dubna, Rikagaku Kenkyusho (RIKEN), the Heavy Ion Research Facility in Lanzhou (HIRFL), Berkeley, GSI, and the Grand Accelerateur National d'Ions Lourds (GANIL) <cit.>. As one of the main decay modes of superheavy nuclei, α decay has always attracted much attention in the synthesis and study of superheavy nuclei <cit.>. Theoretically, α decay was one of the early successes of quantum mechanics: Gamow <cit.>, and Condon and Gurney <cit.>, independently used barrier tunneling theory based on quantum mechanics to calculate α decay lifetimes. Experimentally, the α decay spectra of neutron-deficient, heavy, and superheavy nuclei provide important nuclear structural information, which is an irreplaceable means for researchers to understand the structure and stability of heavy and superheavy nuclei <cit.>. Meanwhile, the α decay process is also essential for crucial issues such as understanding the nuclear cluster structure in superheavy nuclei <cit.>, studying the chronology of the solar system <cit.>, and searching for islands of stability of superheavy elements <cit.>. The advent of laser fields with a wide range of frequencies, intensities, and durations provides a unique opportunity to study nuclear physics in the laboratory. The study of laser-nucleus interactions has been driven by the rapid development of intense laser technologies over the past few decades, e.g., the chirped-pulse amplification technique <cit.>. Recently, it took only ten months for the peak laser field intensity to be increased from 5.5 × 10^22 W/cm^2 to the current 10^23 W/cm^2 <cit.>.
Furthermore, the Shanghai Ultra Intensive Ultrafast Laser Facility (SULF) <cit.> or the Extreme Light Infrastructure for Nuclear Physics (ELI-NP) <cit.> is expected to further increase the peak laser intensity by one to two orders of magnitude in the short term from the existing intensity. The rapid rise in peak intensity and energy of delivered lasers has made direct laser-nuclear interactions one of the hottest topics in nuclear physics <cit.>. Recent works have noted that the extreme laser fields can directly increase the probability of light nuclear fusion and heavy nuclear fission <cit.>. Excitingly, J. Feng et al. experimentally presented femtosecond pumping of isomeric nuclear states by the Coulomb excitation of ions with the quivering electrons induced by laser fields <cit.>. This adds to the study of direct laser-nucleus interactions. For decay, many efforts have been dedicated to discussing how high-intensity lasers can interfere with the half-life of the natural decay of nuclei <cit.>. From an energy point of view, the cycle-averaged kinetic energy of emitted particles in the laser field with an intensity of 10^23 W/cm^2 can exceed 3 MeV, which is already on the order of decay energy <cit.>. However, the theoretical calculations can not be verified due to the lack of experimental data. It is essential to find a reasonable and adequate experimental scheme for studying the effect of laser light on nuclear decay. A feasible experimental protocol requires an evaluation of effects of the laser and properties of the nucleus on the laser-nucleus interaction and a way to select the right nucleus and adjust the laser parameters to obtain significant experimental results, which has seldom been investigated in detail. Even-even nuclei capable of α decay are characterised by a large half-life time span and a wide variety of parent nuclei, which has the potential to become the object of future experimental studies of direct laser-nuclear interactions. To date, many approaches have been used to study nucleus α decay, such as the deformed Cosh potentials <cit.>, the deformed Woods-Saxon (WS) potential <cit.>, the Gamow-like model <cit.>, the liquid drop model <cit.>, the cluster model <cit.>, the coupled-channels method <cit.>, the deformed version of the density-dependent cluster model (DDCM) with microscopic double-folding potentials <cit.> and others <cit.>. These models reproduce, to varying degrees, the potential of the emitting particles in the parent nucleus. In the present work, consideration of the effect of the deformation of the nucleus on the α decay half-life is necessary since the laser-nucleus interaction introduces a new electric dipole term in the nuclear Hamiltonian, which is closely related to the angle between the vector E(t) and the vector r. Taking into account of the deformation of the parent nucleus, we systematically study the rate of change of the α decay half-life of the deformed ground state even-even nucleus with the mass number 52 ≤ Z ≤ 118 by using the state-of-the-art laser. The Coulomb potential of the emitted α particle-daughter nucleus is calculated by the double-folding model <cit.>. Moreover, we chose the deformed Woods-Saxon nuclear potential in the calculation <cit.>, which has been shown in our previous work to be competent for calculating the total potential energy between the nucleus-emitted particle <cit.>. 
Furthermore, we give an analytical formula for calculating the rate of change of penetration probability Δ P and investigate the relationship between the rate of change of penetration probability and the properties of the parent nucleus itself. Finally, we investigate the relationship between the laser properties and the average rate of change of the α decay penetration probability Δ P_avg. The calculations show that the shorter the wavelength of the laser pulse is, the larger the average rate of change of the α decay penetration probability. This article is organised as follows. In the next section, the theoretical framework for calculating the α decay half-life in ultra-intense laser fields is described in detail. In Section <ref>, the detailed calculation results and discussions are provided. In Section <ref>, a brief summary is given. § THEORETICAL FRAMEWORK §.§ The theoretical method α decay half-life T_1/2, an important indicator of nuclear stability, can be written as T_1/2=ħln2/Γ, where ħ represents the reduced Planck constant, and Γ is the α decay width depending on the α particle formation probability S_α, the normalized factor F and penetration probability P. In the density-dependent cluster model (DDCM), the α decay width can be written as <cit.> Γ=ħ^2/4μS_αFP, where μ=M_d M_α/M_d+M_α is the reduced mass of the daughter nucleus and the α particle in the center-of-mass coordinate with M_α and M_d being masses of the α particle and the daughter nucleus, respectively. Considering the influence of the nucleus deformation, we obtain the total penetration probability P by averaging P_φ in all directions. This is widely used in both α decay and fusion reaction calculations <cit.>, which can be written as <cit.> P=1/2∫_0^πP_φ sinφ dφ, P_φ=exp [- 2 ∫_R_2^R_3 k(r, t, φ, θ) dr]. Similarly, the total normalised factor F can be obtained by averaging F_φ in all directions. It is given by <cit.> F=1/2∫_0^π F_φsinφ dφ, F_φ∫_R_1^R_2 dr 1/k(r, t, φ, θ) cos^2(∫_R_1^r dr' k(r', t, φ, θ)-π/4)=1 where the classical turning points R_1, R_2 and R_3 can be determined by the equation V(r, t, φ, θ) = Q_α. φ represents the orientation angle of the symmetry axis of the daughter nucleus with respect to the emitted α particle. θ is related to the laser-nucleus interaction, which will be provided in more detail in the following subsection. k(r, t, φ, θ) is the wave number, which can be written as k(r, t, φ, θ)= √(2μ/ħ^2| V(r, t, φ, θ)-Q_α|), where r is the separation between the mass center of α particle and the mass center of core and Q_α is the α decay energy. In this work, the total interaction potential V(r, t, φ, θ) between the daughter nucleus and the emitted α particle can be given by V(r, t, φ, θ)=λ(φ) V_N(r, φ)+V_l(r)+V_C(r, φ)+V_i(r, t, φ, θ). where V_l(r), V_C(r, φ) and V_N(r, φ) are the centrifugal, Coulomb, and nuclear potentials, respectively. V_i(r, t, φ, θ) describes the interaction of the electromagnetic field with the decay system <cit.>, which will be provided in more detail in the following subsection. Meanwhile, λ(φ) can be obtained by using the Bohr-Sommerfeld quantization condition. In the present work, the emitted α-daughter nucleus nuclear potential V_N(r, φ) was chosen as the classic Woods-Saxon (WS) <cit.> nuclear potential. For the Woods-Saxon form, the nuclear potential is approximated as the axial deformation <cit.>, which can be written as V_N(r, φ)= V'/1+exp[(r-R_d(φ))/s], with R_d(φ)=r_0A_d^1/3[1+β_2 Y_20(φ)+β_4 Y_40(φ)+β_6 Y_60(φ)]. 
Here, Y_ml(φ) represents the spherical harmonic function, and β_2, β_4, and β_6 respectively denote the calculated quadrupole, hexadecapole, and hexacontatetrapole deformations of the nuclear ground state. A_d represents the mass number of the daughter nucleus. By systematically searching over the radius parameter r_0, the diffuseness s, and the depth of the nuclear potential V', we find that the most suitable values for the calculation are r_0 = 1.06 fm, s = 0.88 fm, and V' = 161.95 MeV in the case of S_α=0.43 <cit.>. In this work, we obtain the deformed Coulomb potential from the double-folding model. It can be written as V_C(r⃗, φ)=∫∫ρ_d(r⃗_d)ρ_α(r⃗_α)/|r⃗+r⃗_d-r⃗_α| dr⃗_α dr⃗_d, where ρ_α and ρ_d are the density distributions of the emitted α particle and the daughter nucleus, respectively. r⃗_α and r⃗_d are the radius vectors in the charge distributions of the emitted α particle and daughter nuclei. Simplified appropriately by the Fourier transform <cit.>, the Coulomb potential can be approximated as V_C(r⃗, φ)=V_C^(0)(r⃗, φ)+V_C^(1)(r⃗, φ)+V_C^(2)(r⃗, φ), where V_C^(0)(r⃗, φ), V_C^(1)(r⃗, φ) and V_C^(2)(r⃗, φ) are the bare Coulomb interaction, the linear Coulomb coupling, and the second-order Coulomb coupling, respectively <cit.>. The Langer-modified centrifugal potential V_l(r) is chosen in the form of Ref. <cit.>. It can be written as V_l(r)=ħ^2(l+1/2)^2/2μr^2, where l is the orbital angular momentum carried by the α particle. In the present work, we focus only on the α decay of even-even nuclei; thus l=0 is taken in the calculations. §.§ Laser-nucleus interaction §.§.§ The quasistatic approximation The full width at half maximum (FWHM) of laser pulses with peak intensities exceeding 10^23 W/cm^2 currently available in the laboratory is approximately 19.6 fs (=1.96 × 10^-14 s). The laser cycles produced by a near-infrared laser with a wavelength of approximately 800 nm and an X-ray free-electron laser <cit.> with a photon energy of 10 keV are approximately 10^-15 s and 10^-19 s, respectively. For α decay, the emitted α particle oscillates back and forth at high frequency within the parent nucleus, with a small probability of tunneling out whenever the preformed α particle hits the potential wall. The time scale of the emitted α particle passing through the potential wall can be estimated. Since the typical decay energy for α decay is approximately several MeV, the velocity of the preformed α particles is approximately 10^7 m/s and the size of the parent nucleus is approximately 1 fm, the frequency of the oscillations can be roughly estimated to be 10^22 Hz. The length of the tunnel path is less than 100 fm, and the time for the emitted α particle to pass through the tunnel is under 10^-20 s. The highest peak intensity laser pulse currently achievable has an optical period much longer than this time. Therefore, the laser field does not change significantly during the passage of the emitted α particles through the potential barrier, and the process can be considered quasistatic. A similar quasistatic approximation is usually used to describe the tunneling ionisation of atoms in strong-field atomic physics <cit.>. It has also been shown that failure to consider this quasistatic condition can lead to inaccurate theoretical calculations, e.g., Ref. <cit.>. Finally, the kinetic energy of the emitted α particles is only a few MeV. They move much slower than the speed of light in vacuum. This means that the effect of the laser electric field on the emitted α particles is expected to be much larger than that of the laser magnetic field.
Therefore, we can neglect the magnetic component of the laser field in the current work. §.§.§ The relative motion of the emitted α particle and the daughter nucleus in the center of mass coordinates In the quasistatic approximation, the time-dependent Schrödinger equation (TDSE) can be used to describe the interaction between the daughter nucleus and the emitted α particle <cit.>, which can be written as i ħ∂Φ(r⃗_α, r⃗_d, t)/∂ t=H(t)Φ(r⃗_α, r⃗_d, t), where H(t) is the time-dependent minimum-coupling Hamiltonian. Since the existing intense laser wavelengths are much larger than the spatial scale of α decay, the spatial dependency of the vector potential in a radiation gauge can be ignored. The time-dependent minimum-coupling Hamiltonian can be given by H(t)=∑_i1/2m_i[p⃗_⃗i⃗-q_i/cA⃗(t)]^2+V(r), where i represents the parameters related to the emitted α particle and the daughter nucleus, respectively. For the center of mass coordinates (R⃗, P⃗, r⃗, p⃗): r⃗_α=R⃗+m_dr⃗/(m_α+m_d) p⃗_α=p⃗+m_αP⃗/(m_α+m_d) r⃗_d=R⃗-m_αr⃗/(m_α+m_d) p⃗_d=-p⃗+m_dP⃗/(m_α+m_d). The time-dependent minimum-coupling Hamiltonian can be written as H(t)=1/2M[P⃗-q/cA⃗(t)]^2+1/2μ[p⃗-Q_eff/cA⃗(t)]^2+V(r), where M=m_α+m_d, q=q_α+q_d. The effective charge for relative motion Q_eff describes the tendency of the laser electric field to separate the emitted α particle from the daughter nuclei. It can be written as Q_eff=q_α m_d-q_d m_α/M. By introducing unitary transformations, the wave function can be transformed into the center of mass coordinates ϕ(r⃗, R⃗, t)=Û_̂r̂Û_̂R̂Φ(r⃗, R⃗, t), where Û_̂r̂=exp[-iQ_eff/cA⃗(t) ·r⃗/ħ] and Û_̂R̂=exp[-iq/cA⃗(t) ·R⃗/ħ]. The TDSE can be rewritten as i ħ∂ϕ(r⃗, R⃗, t)/∂ t= [-ħ^2/2μ∇^2_r+V(r)-Q_effr⃗E⃗(t) -ħ^2/2M∇^2_R-qR⃗E⃗(t)]ϕ(r⃗, R⃗, t), where cE⃗(t)=-dA⃗(t)/dt is the time-dependent laser electric field. By factorizing the wave function ϕ(r⃗, R⃗, t)=χ_1(R⃗, t)χ_2(r⃗ , t), we split the TDSE into two separate equations describing the center of mass coordinates and the relative motion between the daughter nucleus and the emitted α particle. They can be written as i ħ∂χ_1(R⃗ , t)/∂ t=[-ħ^2/2M∇^2_R-qR⃗E⃗(t)]χ_1(R⃗ , t), i ħ∂χ_2(r⃗ , t)/∂ t=[-ħ^2/2μ∇^2_r+V(r)-Q_effr⃗E⃗(t)]χ_2(r⃗ , t). The equation of relative motion is related to the laser electric field. The interaction potential energy between the relative motion particle and the laser field can be written as V_i(r^→, t, θ)=-Q_effr^→·E^→(t) =-Q_eff r E(t) cosθ, where θ is the angle between vector the r⃗ and vector E⃗(t). §.§.§ Laser-nucleus interaction The laser electric field with a linearly polarized Gaussian plane wave form can be expressed as E(t)=E_0 f(t) sin (ωt), where ω is the angular frequency. The peak of the laser electric field E_0 is related to the peak of the laser intensity I_0, which can be given by <cit.> E_0 [V cm^-1]=(2I_0/cϵ_0)^1/2=27.44(I_0[W cm^-2])^1/2, where ϵ_0 and c are the permittivity of free space and the speed of light in vacuum, respectively. The sequence of Gaussian pulses with an envelope function of temporal profile f(t) can be given by f(t)=exp(-t^2/τ^2), where τ represents the pulse width of the envelope, which can be written in the form related to the pulse period T_0 τ=x T_0. In the present work, we write E(t) in the form related to the wavelength λ for the discussion in the next section. It can be written as E(t)=E_0exp (-t^2/x^2 T_0^2)sin (ω t)=E_0exp (-c^2 t^2/λ^2 x^2)sin(2 πc/λt), where the pulse period T_0=1/ν and ν=ω/2π=c/λ is the laser frequency. 
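As a quick numerical illustration of the field strengths involved, the following minimal Python sketch evaluates E_0 and the Gaussian-envelope field E(t) from the relations above; the function names are illustrative and not part of any published code.

```python
import numpy as np

C = 2.99792458e8  # speed of light in vacuum, m/s

def peak_field(intensity_W_cm2):
    """Peak electric field in V/cm, E_0 = 27.44 * sqrt(I_0), for I_0 in W/cm^2."""
    return 27.44 * np.sqrt(intensity_W_cm2)

def laser_field(t, intensity_W_cm2, wavelength_m, x=5.0):
    """E(t) = E_0 exp(-c^2 t^2 / (lambda^2 x^2)) sin(2 pi c t / lambda), with t in seconds."""
    e0 = peak_field(intensity_W_cm2)
    envelope = np.exp(-(C * t) ** 2 / (wavelength_m * x) ** 2)
    return e0 * envelope * np.sin(2.0 * np.pi * C * t / wavelength_m)

# Peak field of a 10^23 W/cm^2 pulse: about 8.7e12 V/cm
# print(peak_field(1e23))
```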
The laser electric field should also change the α decay energy Q_α. The change in the decay energy, Δ Q_α, is equal to the energy gained by the emitted α particle from the laser electric field during the penetration of the potential barrier. It can be given by Δ Q_α=eZ_α E(t)R_d(φ)cosθ. The α decay energy including the laser electric field effect, Q^*_α, can thus be written as Q^*_α=Q_α+Δ Q_α. In this framework, the total emitted α particle-daughter nucleus interaction potential with and without considering the laser electric field influence is shown in Fig. <ref>. In this figure, the blue and red curves represent the total potential V(r, φ) and the total potential V(r, t, φ, θ) in the laser electric field, respectively. Q^*_α and Q_α correspond to the α decay energy with and without considering the laser electric field effect, respectively. R^*_i(i=1, 2, 3) and R_i(i=1, 2, 3) refer to the classical turning points with and without considering the laser electric field effect, respectively. The schematic diagram shows that the laser electric field affects both the position of the classical turning points and the kinetic energy of the emitted α particle. § RESULTS AND DISCUSSION In the present work, the α decay energies Q_α, parities, spins, and experimental α decay half-lives for 190 ground state even-even nuclei from Z = 52 to Z = 118 are taken from the latest evaluated nuclear properties table NUBASE2020 <cit.> and the latest evaluated atomic mass table AME2020 <cit.>. β_2, β_4 and β_6 are taken from FRDM2012 <cit.>. To describe the effect of the laser electric field on the α decay half-life, we define the rate of relative change of the α decay half-life Δ T, Δ T=[T(E, θ)-T(E=0, θ)]/T(E=0, θ). The normalized factor F is determined by the principal quantum number G <cit.>, which is very insensitive to the external laser field because the integration is performed inside the nucleus from R_1 to R_2. One can safely treat the normalized factor F as a laser-independent constant <cit.>. Thus we assume that the external laser fields mainly affect the half-life of α decay by modifying the α decay penetration probability. The rate of relative change of the penetration probability Δ P is defined as Δ P=[P(E, θ)-P(E=0, θ)]/P(E=0, θ). For θ ≠ 0, the effective laser field strength can simply be regarded as the smaller value E(t)cosθ. In this work, we only consider the case of θ=0, and Eq. (<ref>) can be rewritten as Δ T=[P(E=0)-P(E)]/P(E). §.§ Gaussian laser-assisted α decay for the ground state even-even nuclei Based on the high-intensity laser pulses available in the current laboratory, we systematically investigate the effect of a laser pulse with a peak intensity of 10^23 W/cm^2 on the α decay of the ground state even-even nuclei. The detailed results are listed in Table <ref>. In this table, the first four columns represent the parent nuclei, the minimum orbital angular momentum l_min, the α decay energy Q_α, and the logarithmic form of the experimental α decay half-lives, respectively. The following three columns represent the logarithmic form of the theoretical α decay half-lives without considering the laser field, the rate of relative change of the penetration probability for I_0=10^23 W/cm^2, and the rate of relative change of the α decay half-life for I_0=10^23 W/cm^2, respectively. The standard deviation σ indicates the divergence between the theoretical α decay half-lives and the experimental data. It can be written as σ=√(∑ [lgT^exp_1/2(s)-lgT^cal_1/2(s)]^2/n).
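A minimal Python sketch of these two summary quantities, operating on the tabulated columns (the function and array names are illustrative):

```python
import numpy as np

def rate_of_change(value_laser, value_free):
    """Rate of relative change, e.g. Delta T = [T(E) - T(E=0)] / T(E=0)."""
    return (value_laser - value_free) / value_free

def half_life_sigma(lg_t_exp, lg_t_cal):
    """Standard deviation between lg T_1/2(exp) and lg T_1/2(cal), as defined above."""
    diff = np.asarray(lg_t_exp, dtype=float) - np.asarray(lg_t_cal, dtype=float)
    return np.sqrt(np.mean(diff ** 2))
```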
From Table <ref>, we can obtain the standard deviation σ=0.325. Moreover, it can be seen from Table <ref> that the difference between the theoretical half-life and the experimental data for the vast majority of nuclei is trivial. This means that the theoretical α decay half-lives can reproduce the experimental data well, and the model we use is trustworthy. It can also be seen from Table <ref> that the α decay penetration probability and the α decay half-life of different parent nuclei have different rates of change under the influence of a laser intensity of 10^23 W/cm^2, and the rate of change ranges from 0.0009% to 0.135%. As a particular case, the parent nucleus ^108Xe corresponds to both Δ P and Δ T equal to 0. The reason is that A_α=2Z_α=4; if A_d=2Z_d as well, then Q_eff=0, so Δ P and Δ T are equal to 0. In other words, if the daughter nucleus and the emitted α particle have the same charge-to-mass ratio, they will move cooperatively in the laser field, and the laser electric field will not have the effect of separating the two particles. When the angle between the vector r⃗ and the vector E⃗(t) is equal to 0, the addition of the laser field increases the kinetic energy of the emitted α particles and reduces the distance between the classical turning points R_2 and R_3, which leads to an increase in the penetration probability of α decay and a decrease in the half-life of α decay. Furthermore, as seen from Table <ref>, the parent nucleus most sensitive to the high-intensity laser is ^144Nd, which has the lowest decay energy among all parent nuclei. To investigate which properties of the parent nucleus are related to Δ T and Δ P, we rewrite Eq. (<ref>) as follows: P_φ=exp [- 2(2μ)^1/2/ħ∫_R_2^R_3√(V_φ lCN(1+V_i(r, t, φ, θ)/V_φ lCN)) dr], where V_φ lCN=λ(φ) V_N(r, φ)+V_l(r)+V_C(r, φ)-Q_α represents the integrand function without laser modification. Since V_i(r, t, φ, θ) ≪ V_φ lCN, the laser electric field can be regarded as a perturbation, and we take the Taylor expansion of Eq. (<ref>) P_φ= exp [- 2(2μ)^1/2/ħ∫_R_2^R_3√(V_φ lCN) × (1+V_i(r, t, φ, θ)/2V_φ lCN-V_i^2(r, t, φ, θ)/8 V^2_φ lCN+...) dr] ≈ exp [- 2(2μ)^1/2/ħ∫_R_2^R_3√(V_φ lCN) × (1+V_i(r, t, φ, θ)/2V_φ lCN-V_i^2(r, t, φ, θ)/8 V^2_φ lCN) dr] = exp [χ_φ^(0)+χ_φ^(1)+χ_φ^(2)] = exp [χ_φ^(0)] exp [χ_φ^(1)+χ_φ^(2)], where χ_φ^(0), χ_φ^(1), and χ_φ^(2) can be expressed as χ_φ^(0)= - 2(2μ)^1/2/ħ∫_R_2^R_3√(V_φ lCN) dr = ln P_φ(E=0), χ_φ^(1)= E(t)×(2μ)^1/2Q_effcosθ/ħ∫_R_2^R_3 r/√(V_φ lCN) dr, χ_φ^(2)= E^2(t)×(2μ)^1/2(Q_effcosθ)^2/4ħ∫_R_2^R_3 r^2/V_φ lCN^3/2 dr = I(t)×(2μ)^1/2(Q_effcosθ)^2/(2cϵ_0ħ)∫_R_2^R_3 r^2/V_φ lCN^3/2 dr, where I(t) is the laser intensity, proportional to the square of the laser electric field strength E(t), and P_φ(E=0) is the α penetration probability without laser modification. The rate of change of the penetration probability can be written as Δ P_φ≈ (exp [χ_φ^(0)] exp [χ_φ^(1)+χ_φ^(2)]-exp [χ_φ^(0)])/exp [χ_φ^(0)] = exp [χ_φ^(1)+χ_φ^(2)]-1. As χ_φ^(1)+χ_φ^(2) approaches 0, exp [χ_φ^(1)+χ_φ^(2)] approaches 1+χ_φ^(1)+χ_φ^(2), and Eq. (<ref>) can be rewritten as Δ P_φ=χ_φ^(1)+χ_φ^(2). Similarly, Eq. (<ref>) can be rewritten as Δ T_φ=-(χ_φ^(1)+χ_φ^(2))/(1+χ_φ^(1)+χ_φ^(2)). Due to the difficulty in integrating Eq. (<ref>), the analytical solution of the rate of relative change of the penetration probability Δ P cannot be obtained precisely. To proceed, we used a spherical Gamow-like model that has been shown to be capable of reproducing the experimental data of α decay well instead of Eqs. (<ref>) and (<ref>) <cit.>.
In this model, the total potential V(r) between the emitted α particle and the daughter nucleus is considered as a square situation well in the case of r < R_in, where R_in =1.15(A_d^1/3+ A_α^1/3 ) fm is the geometrical touching distance. For the case of r > R_in, the total potential V(r) between the emitted α particle and the daughter nucleus is reduced to the Coulomb potential between the two particles.This means that the intersection point R_out of α and Q_α is determined as R_out = Z_d Z_α e^2/Q_α. For the spherical approximation, Eq. (<ref>) can be rewritten as ccccccc The effect of laser pulse with a peak intensity of 10^23 W/cm^2 on the α decay of the ground state even-even nuclei. Here lgT_1/2^cal is the logarithmic form of the α decay half-life calculated in this work. Nucleus l_min Q_α(MeV) lgT_1/2^exp (s) lgT_1/2^cal (s) Δ P Δ T 6c – continued from previous page Nucleus l_min Q_α(MeV) lgT_1/2^exp (s) lgT_1/2^cal (s) Δ P Δ T 6rContinued on next page ^106Te 0 4.285 -4.108 -4.279 1.15×10^-4 -1.15×10^-4 ^108Te 0 3.42 0.628 0.447 1.85×10^-4 -1.85×10^-4 ^108Xe 0 4.575 -4.143 -4.391 0 0 ^110Xe 0 3.875 -0.84 -1.031 1.45×10^-4 -1.45×10^-4 ^112Xe 0 3.331 2.324 2.347 2.10×10^-4 -2.10×10^-4 ^114Ba 0 3.585 1.694 1.884 1.75×10^-4 -1.75×10^-4 ^144Nd 0 1.901 22.859 23.083 1.35×10^-3 -1.35×10^-3 ^146Sm 0 2.529 15.332 15.41 7.86×10^-4 -7.86×10^-4 ^148Sm 0 1.987 23.298 23.45 1.28×10^-3 -1.28×10^-3 ^148Gd 0 3.271 9.352 9.186 4.33×10^-4 -4.33×10^-4 ^150Gd 0 2.807 13.752 13.726 6.16×10^-4 -6.16×10^-4 ^152Gd 0 2.204 21.533 21.558 1.09×10^-3 -1.09×10^-3 ^150Dy 0 4.351 3.107 2.812 2.09×10^-4 -2.09×10^-4 ^152Dy 0 3.726 6.93 6.885 3.47×10^-4 -3.47×10^-4 ^154Dy 0 2.945 13.976 13.658 5.94×10^-4 -5.94×10^-4 ^152Er 0 4.934 1.057 0.812 1.94×10^-4 -1.94×10^-4 ^154Er 0 4.28 4.677 4.435 2.64×10^-4 -2.64×10^-4 ^156Er 0 3.481 9.989 10.057 4.20×10^-4 -4.20×10^-4 ^154Yb 0 5.474 -0.355 -0.642 1.59×10^-4 -1.59×10^-4 ^156Yb 0 4.809 2.408 2.501 2.09×10^-4 -2.09×10^-4 ^156Hf 0 6.025 -1.638 -1.942 1.34×10^-4 -1.34×10^-4 ^158Hf 0 5.405 0.808 0.629 1.68×10^-4 -1.68×10^-4 ^160Hf 0 4.902 3.276 3.048 2.15×10^-4 -2.15×10^-4 ^162Hf 0 4.417 5.687 5.788 2.76×10^-4 -2.76×10^-4 ^174Hf 0 2.494 22.8 23.9 1.11×10^-3 -1.10×10^-3 ^158W 0 6.615 -2.845 -3.153 1.15×10^-4 -1.15×10^-4 ^160W 0 6.065 -0.989 -1.185 1.38×10^-4 -1.38×10^-4 ^162W 0 5.678 0.42 0.365 1.66×10^-4 -1.66×10^-4 ^164W 0 5.278 2.218 2.173 1.93×10^-4 -1.93×10^-4 ^166W 0 4.856 4.738 4.319 2.33×10^-4 -2.33×10^-4 ^168W 0 4.5 6.2 6.368 2.85×10^-4 -2.85×10^-4 ^180W 0 2.515 25.7 25.247 1.16×10^-3 -1.16×10^-3 ^162Os 0 6.765 -2.678 -2.849 1.12×10^-4 -1.12×10^-4 ^164Os 0 6.485 -1.662 -1.874 1.30×10^-4 -1.30×10^-4 ^166Os 0 6.142 -0.593 -0.639 1.46×10^-4 -1.46×10^-4 ^168Os 0 5.816 0.685 0.679 1.65×10^-4 -1.65×10^-4 ^170Os 0 5.536 1.889 1.891 1.93×10^-4 -1.93×10^-4 ^172Os 0 5.224 3.207 3.373 2.22×10^-4 -2.22×10^-4 ^174Os 0 4.871 5.251 5.234 2.62×10^-4 -2.62×10^-4 ^186Os 0 2.821 22.8 22.594 9.55×10^-4 -9.54×10^-4 ^166Pt 0 7.295 -3.532 -3.746 1.03×10^-4 -1.03×10^-4 ^168Pt 0 6.985 -2.695 -2.807 1.15×10^-4 -1.15×10^-4 ^170Pt 0 6.708 -1.856 -1.874 1.33×10^-4 -1.33×10^-4 ^172Pt 0 6.463 -0.994 -1.015 1.41×10^-4 -1.41×10^-4 ^174Pt 0 6.363 0.061 0.039 1.62×10^-4 -1.62×10^-4 ^176Pt 0 5.885 1.197 1.253 1.82×10^-4 -1.82×10^-4 ^178Pt 0 5.573 2.428 2.626 2.10×10^-4 -2.10×10^-4 ^180Pt 0 5.276 4.028 4.065 2.40×10^-4 -2.40×10^-4 ^182Pt 0 4.951 5.623 5.793 2.81×10^-4 -2.81×10^-4 ^184Pt 0 4.599 7.768 7.885 3.35×10^-4 -3.35×10^-4 ^186Pt 0 4.32 9.728 9.718 3.83×10^-4 -3.83×10^-4 ^188Pt 0 4.007 12.528 12.019 
4.63×10^-4 -4.63×10^-4 ^190Pt 0 3.269 19.183 18.796 7.17×10^-4 -7.16×10^-4 ^170Hg 0 7.775 -3.509 -4.396 9.73×10^-5 -9.73×10^-5 ^172Hg 0 7.525 -3.636 -3.7 1.07×10^-4 -1.07×10^-4 ^174Hg 0 7.233 -2.699 -2.814 1.19×10^-4 -1.19×10^-4 ^176Hg 0 6.897 -1.651 -1.718 1.27×10^-4 -1.27×10^-4 ^178Hg 0 6.398 -0.526 -0.598 1.50×10^-4 -1.50×10^-4 ^180Hg 0 6.258 0.73 0.554 1.70×10^-4 -1.69×10^-4 ^182Hg 0 5.995 1.892 1.625 1.93×10^-4 -1.93×10^-4 ^184Hg 0 5.66 3.442 3.125 2.18×10^-4 -2.18×10^-4 ^186Hg 0 5.204 5.701 5.446 2.60×10^-4 -2.60×10^-4 ^188Hg 0 4.709 8.722 8.312 3.28×10^-4 -3.28×10^-4 ^178Pb 0 7.789 -3.602 -3.75 1.06×10^-4 -1.06×10^-4 ^180Pb 0 7.419 -2.387 -2.642 1.18×10^-4 -1.18×10^-4 ^182Pb 0 7.065 -1.26 -1.492 1.33×10^-4 -1.33×10^-4 ^184Pb 0 6.774 -0.213 -0.478 1.47×10^-4 -1.47×10^-4 ^186Pb 0 6.471 1.072 0.648 1.63×10^-4 -1.63×10^-4 ^188Pb 0 6.109 2.468 2.123 1.88×10^-4 -1.88×10^-4 ^190Pb 0 5.697 4.245 3.981 2.27×10^-4 -2.27×10^-4 ^192Pb 0 5.221 6.546 6.419 2.74×10^-4 -2.74×10^-4 ^194Pb 0 4.738 9.944 9.301 3.40×10^-4 -3.40×10^-4 ^210Pb 0 3.793 16.567 16.082 6.46×10^-4 -6.45×10^-4 ^186Po 0 8.502 -4.469 -5.035 9.50×10^-5 -9.50×10^-5 ^188Po 0 8.083 -3.569 -3.911 1.11×10^-4 -1.11×10^-4 ^190Po 0 7.693 -2.611 -2.778 1.23×10^-4 -1.23×10^-4 ^192Po 0 7.32 -1.492 -1.594 1.39×10^-4 -1.39×10^-4 ^194Po 0 6.987 -0.407 -0.473 1.90×10^-4 -1.90×10^-4 ^196Po 0 6.658 0.775 0.764 1.74×10^-4 -1.74×10^-4 ^198Po 0 6.31 2.266 2.14 2.36×10^-4 -2.36×10^-4 ^200Po 0 5.981 3.793 3.577 2.25×10^-4 -2.25×10^-4 ^202Po 0 5.7 5.143 4.87 2.53×10^-4 -2.53×10^-4 ^204Po 0 5.485 6.275 6.069 2.80×10^-4 -2.80×10^-4 ^206Po 0 5.327 7.144 6.78 3.04×10^-4 -3.04×10^-4 ^208Po 0 5.216 7.961 7.391 3.23×10^-4 -3.23×10^-4 ^210Po 0 5.407 7.078 6.302 3.07×10^-4 -3.07×10^-4 ^212Po 0 8.954 -6.531 -6.805 1.41×10^-4 -1.41×10^-4 ^214Po 0 7.833 -3.787 -3.785 1.22×10^-4 -1.22×10^-4 ^216Po 0 6.906 -0.842 -0.736 1.98×10^-4 -1.98×10^-4 ^218Po 0 6.115 2.269 2.452 2.95×10^-4 -2.95×10^-4 ^194Rn 0 7.863 -3.108 -2.626 1.21×10^-4 -1.21×10^-4 ^196Rn 0 7.616 -2.328 -1.879 1.35×10^-4 -1.35×10^-4 ^198Rn 0 7.35 -1.163 -1.005 1.53×10^-4 -1.53×10^-4 ^200Rn 0 7.044 0.07 0.135 1.65×10^-4 -1.65×10^-4 ^202Rn 0 6.773 1.09 1.138 1.77×10^-4 -1.77×10^-4 ^204Rn 0 6.547 2.012 2.019 1.94×10^-4 -1.94×10^-4 ^206Rn 0 6.384 2.737 2.675 2.14×10^-4 -2.14×10^-4 ^208Rn 0 6.261 3.367 3.189 2.22×10^-4 -2.21×10^-4 ^210Rn 0 6.159 3.954 3.597 2.34×10^-4 -2.34×10^-4 ^212Rn 0 6.385 3.157 2.619 2.23×10^-4 -2.23×10^-4 ^214Rn 0 9.208 -6.587 -6.711 1.11×10^-4 -1.11×10^-4 ^216Rn 0 8.197 -4.538 -4.094 1.41×10^-4 -1.41×10^-4 ^218Rn 0 7.262 -1.472 -1.123 1.82×10^-4 -1.82×10^-4 ^220Rn 0 6.405 1.745 2.143 2.38×10^-4 -2.38×10^-4 ^222Rn 0 5.59 5.519 5.93 3.17×10^-4 -3.17×10^-4 ^202Ra 0 7.88 -2.387 -1.979 1.32×10^-4 -1.32×10^-4 ^204Ra 0 7.636 -1.222 -1.2 1.45×10^-4 -1.45×10^-4 ^206Ra 0 7.416 -0.62 -0.408 1.54×10^-4 -1.54×10^-4 ^208Ra 0 7.273 0.104 0.062 1.65×10^-4 -1.65×10^-4 ^210Ra 0 7.151 0.602 0.486 1.73×10^-4 -1.73×10^-4 ^214Ra 0 7.273 0.387 0.006 1.74×10^-4 -1.74×10^-4 ^216Ra 0 9.525 -6.764 -6.616 1.05×10^-4 -1.05×10^-4 ^218Ra 0 8.541 -4.587 -4.297 1.32×10^-4 -1.32×10^-4 ^220Ra 0 7.594 -1.742 -1.442 1.69×10^-4 -1.69×10^-4 ^222Ra 0 6.678 1.526 1.866 2.25×10^-4 -2.25×10^-4 ^224Ra 0 5.789 5.497 5.851 3.04×10^-4 -3.04×10^-4 ^226Ra 0 4.871 10.703 11.159 4.41×10^-4 -4.41×10^-4 ^208Th 0 8.204 -2.62 -2.21 1.35×10^-4 -1.35×10^-4 ^210Th 0 8.069 -1.796 -1.777 1.35×10^-4 -1.35×10^-4 ^212Th 0 7.958 -1.499 -1.456 1.45×10^-4 -1.45×10^-4 ^214Th 0 7.827 -1.06 -1.04 1.51×10^-4 -1.51×10^-4 ^216Th 0 8.073 -1.58 -1.823 
1.44×10^-4 -1.44×10^-4 ^218Th 0 9.849 -6.914 -6.826 1.00×10^-4 -1.00×10^-4 ^220Th 0 8.974 -4.991 -4.715 1.21×10^-4 -1.21×10^-4 ^222Th 0 8.132 -2.65 -2.407 1.51×10^-4 -1.50×10^-4 ^224Th 0 7.299 0.017 0.282 1.92×10^-4 -1.92×10^-4 ^226Th 0 6.453 3.265 3.64 2.44×10^-4 -2.44×10^-4 ^228Th 0 5.52 7.781 8.231 3.49×10^-4 -3.49×10^-4 ^230Th 0 4.77 12.376 12.848 4.85×10^-4 -4.85×10^-4 ^232Th 0 4.082 17.645 18.248 6.83×10^-4 -6.82×10^-4 ^216U 0 8.53 -2.161 -2.432 1.29×10^-4 -1.29×10^-4 ^218U 0 8.775 -3.451 -3.141 1.24×10^-4 -1.24×10^-4 ^222U 0 9.478 -5.328 -5.326 1.11×10^-4 -1.11×10^-4 ^224U 0 8.628 -3.402 -3.123 1.40×10^-4 -1.40×10^-4 ^226U 0 7.701 -0.57 -0.274 1.79×10^-4 -1.79×10^-4 ^228U 0 6.799 2.748 3.046 2.27×10^-4 -2.27×10^-4 ^230U 0 5.992 6.243 6.706 3.04×10^-4 -3.04×10^-4 ^232U 0 5.414 9.337 9.786 3.80×10^-4 -3.80×10^-4 ^234U 0 4.858 12.889 13.245 4.95×10^-4 -4.95×10^-4 ^236U 0 4.573 14.869 15.328 5.38×10^-4 -5.38×10^-4 ^238U 0 4.27 17.149 17.742 6.53×10^-4 -6.53×10^-4 ^228Pu 0 7.94 0.322 -0.285 1.65×10^-4 -1.65×10^-4 ^230Pu 0 7.178 2.021 2.403 2.08×10^-4 -2.08×10^-4 ^234Pu 0 6.31 5.723 5.942 2.87×10^-4 -2.87×10^-4 ^236Pu 0 5.867 7.955 8.16 3.33×10^-4 -3.32×10^-4 ^238Pu 0 5.593 9.442 9.678 3.80×10^-4 -3.80×10^-4 ^240Pu 0 5.256 11.316 11.677 4.38×10^-4 -4.37×10^-4 ^242Pu 0 4.984 13.073 13.445 4.93×10^-4 -4.93×10^-4 ^244Pu 0 4.666 15.41 15.74 5.70×10^-4 -5.70×10^-4 ^234Cm 0 7.365 2.285 2.386 2.08×10^-4 -2.08×10^-4 ^236Cm 0 7.067 3.351 3.537 2.31×10^-4 -2.31×10^-4 ^238Cm 0 6.67 5.314 5.201 2.67×10^-4 -2.67×10^-4 ^240Cm 0 6.398 6.419 6.452 2.91×10^-4 -2.91×10^-4 ^242Cm 0 6.216 7.148 7.334 3.11×10^-4 -3.11×10^-4 ^244Cm 0 5.902 8.757 8.944 3.47×10^-4 -3.47×10^-4 ^246Cm 0 5.475 11.172 11.377 4.09×10^-4 -4.09×10^-4 ^248Cm 0 5.162 13.079 13.352 4.81×10^-4 -4.81×10^-4 ^238Cf 0 8.133 -0.076 0.47 1.79×10^-4 -1.79×10^-4 ^240Cf 0 7.711 1.612 1.902 2.00×10^-4 -2.00×10^-4 ^242Cf 0 7.517 2.534 2.61 2.14×10^-4 -2.14×10^-4 ^244Cf 0 7.329 3.19 3.311 2.28×10^-4 -2.27×10^-4 ^246Cf 0 6.862 5.109 5.224 2.63×10^-4 -2.63×10^-4 ^248Cf 0 6.361 7.46 7.528 3.06×10^-4 -3.06×10^-4 ^250Cf 0 6.128 8.616 8.692 3.35×10^-4 -3.35×10^-4 ^252Cf 0 6.217 7.935 8.217 3.29×10^-4 -3.29×10^-4 ^254Cf 0 5.926 9.224 9.736 3.69×10^-4 -3.69×10^-4 ^246Fm 0 8.379 0.218 0.379 1.73×10^-4 -1.73×10^-4 ^248Fm 0 7.995 1.538 1.649 1.95×10^-4 -1.95×10^-4 ^250Fm 0 7.557 3.27 3.241 2.18×10^-4 -2.18×10^-4 ^252Fm 0 7.154 4.961 4.834 2.50×10^-4 -2.50×10^-4 ^254Fm 0 7.307 4.067 4.178 2.41×10^-4 -2.40×10^-4 ^256Fm 0 7.025 5.064 5.346 2.63×10^-4 -2.62×10^-4 ^252No 0 8.548 0.562 0.547 1.74×10^-4 -1.74×10^-4 ^254No 0 8.226 1.755 1.599 1.90×10^-4 -1.90×10^-4 ^256No 0 8.581 0.466 0.398 1.78×10^-4 -1.78×10^-4 ^256Rf 0 8.926 0.327 0.094 1.65×10^-4 -1.65×10^-4 ^258Rf 0 9.196 -0.595 -0.751 1.61×10^-4 -1.61×10^-4 ^260Sg 0 9.9 -1.772 -2.009 1.37×10^-4 -1.37×10^-4 ^266Hs 0 10.346 -2.409 -2.543 1.32×10^-4 -1.32×10^-4 ^268Hs 0 9.765 0.146 -0.988 1.49×10^-4 -1.49×10^-4 ^270Hs 0 9.065 0.954 1.029 1.82×10^-4 -1.82×10^-4 ^270Ds 0 11.115 -3.688 -3.775 1.17×10^-4 -1.17×10^-4 ^282Ds 0 9.145 2.401 1.513 1.82×10^-4 -1.82×10^-4 ^286Cn 0 9.235 1.477 1.943 1.82×10^-4 -1.82×10^-4 ^286Fl 0 10.355 -0.658 -0.575 1.46×10^-4 -1.46×10^-4 ^288Fl 0 10.075 -0.185 0.187 1.51×10^-4 -1.51×10^-4 ^290Fl 0 9.855 1.903 0.786 1.63×10^-4 -1.63×10^-4 ^290Lv 0 10.995 -2.046 -1.585 1.31×10^-4 -1.31×10^-4 ^292Lv 0 10.785 -1.796 -1.064 1.32×10^-4 -1.32×10^-4 ^294Og 0 11.865 -3.155 -3.053 1.15×10^-4 -1.15×10^-4 Δ P=χ^(1)+χ^(2), where χ^(1) can be expressed as χ^(1)= 
E(t)×(2μ)^1/2Q_effcosθ/ħ∫_R_in^R_out r/√(V_0) dr = B ∫_R_in^R_out r/√(V_0) dr. Here V_0 represents the difference between the total potential energy V(r) and the decay energy Q_α, which can be written as V_0=Z_d Z_α e^2/r-Q_α=Q_α(R_out/r-1). We introduce a parameter κ=arccos√(R_in/R_out); then χ^(1) can be integrated analytically, χ^(1)= B Q_α^-1/2∫_R_in^R_out r/√(R_out/r-1) dr = B Q_α^-5/2 (Z_d Z_α e^2)^2 [3/4κ+1/4(3+2cos^2κ)sinκcosκ]. For α decay, R_in≪ R_out, and we can take κ≈π/2, sinκ≈ 1, cosκ=√(R_in/R_out). Bringing Eq. (<ref>) into Eq. (<ref>), we get the analytical solution of χ^(1), χ^(1)≈ B_1Q_α^-5/2+B_2Q_α^-2+B_3Q_α^-1, where B_1, B_2, and B_3 can be expressed as B_1=3 π E(t)√(2 μ) Q_effcosθ (Z_d Z_α e^2)^2/8 ħ, B_2=3 E(t)√(2 μ) Q_effcosθ (Z_d Z_α e^2)^3/2√(R_in)/4 ħ, B_3=E(t)√(2 μ) Q_effcosθ (Z_d Z_α e^2)^1/2 (R_in)^3/2/2 ħ. Similarly, χ^(2) can be expressed as χ^(2)= E^2(t)×(2μ)^1/2(Q_effcosθ)^2/4ħ∫_R_in^R_out r^2/V_0^3/2 dr = C ∫_R_in^R_out r^2/V_0^3/2 dr. The integral in Eq. (<ref>) over the region from R_in to R_out is divergent. The reason for this unphysical result is that the precondition V_i ≪ V_0 is not satisfied near the outer end of the integration region. To obtain the analytic solution of χ^(2), the intersection point R_c satisfying V_i = V_0 is used to replace the upper limit of integration R_out; it can be expressed as R_c=[-Q_α+√(Q_α^2+4 (Z_d Z_α e^2) E(t) Q_eff)]/[2 E(t) Q_eff]. By calculation, we find that the integration range R_out-R_c that is discarded is only one ten-thousandth of the total integration length. Moreover, it is evident from Fig. <ref> that the contribution from the region close to the end of the integration range is much smaller than the rest of the integral, which means this approximation is reasonable. Then, we introduce a parameter f=R_c/R_out, and Eq. (<ref>) can thus be rewritten as χ^(2)≈ C_1 Q_α^-9/2, where C_1 can be given by C_1=E^2(t)√(2 μ) (Q_effcosθ)^2 (Z_d Z_α e^2)^3/4 ħ(T_rc-35π/16). The parameter T_rc in the above equation can be expressed in terms of f as T_rc=(105-35f-14f^2-8f^3)/[24(1/f-1)√(1/f-1)]+35/8arccos√(f). Now, we obtain the analytical solution of the rate of relative change of the penetration probability Δ P, Δ P ≈ B_1Q_α^-5/2+B_2Q_α^-2+B_3Q_α^-1+C_1 Q_α^-9/2. Similarly, by approximating Eq. (<ref>), we give here the analytical solution of χ^(0), χ^(0)= -2(2μ)^1/2/ħ∫_R_in^R_out√(V_0) dr ≈ -A_1 Q_α^-1/2+A_2, where A_1 and A_2 can be expressed as A_1=π√(2 μ) Z_d Z_α e^2/ħ, A_2= 2 √(2 μ Z_d Z_α e^2 R_in)/ħ. Here A_1 is the so-called Gamow constant, and the term exp(χ^(0)) is the α decay penetration probability without the laser electric field. The formula for the α decay penetration probability under the influence of the laser electric field can then be written as P ≈exp [-A_1 Q_α^-1/2+A_2+B_1Q_α^-5/2 +B_2Q_α^-2+B_3Q_α^-1+C_1 Q_α^-9/2]. To verify the correctness of Eq. (<ref>), we used the Gamow-like model and Eq. (<ref>) to calculate the rate of relative change of the penetration probability Δ P for different nuclei in the case of the peak laser intensity I_0=10^23 W/cm^2 and θ=0, respectively. The detailed results are shown in Figs. <ref> and <ref>. The x-axes of Figs. <ref> and <ref> represent the α decay energy Q_α and the proton number of the parent nucleus, respectively. Their y-axes both represent Δ P. One can see that Δ P is always positive in the case of a laser intensity of I_0=10^23 W/cm^2; a minimal numerical sketch of the first-order part of Eq. (<ref>) is given below.
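The sketch evaluates the first-order (B_1, B_2, B_3) part of the analytical formula in natural units (MeV, fm, ħc = 197.327 MeV fm, e^2 ≈ 1.44 MeV fm), approximating the reduced mass by mass numbers; the small second-order C_1 term is omitted here, and the function and variable names are illustrative rather than part of any published code.

```python
import numpy as np

HBARC = 197.327   # MeV fm
E2 = 1.43996      # e^2 in MeV fm
AMU = 931.494     # MeV

def delta_p_first_order(q_alpha, z_d, a_d, field_v_per_cm, cos_theta=1.0):
    """First-order analytical Delta P (the B_1 Q^-5/2 + B_2 Q^-2 + B_3 Q^-1 terms).

    q_alpha in MeV; z_d, a_d are the daughter charge and mass numbers;
    field_v_per_cm is the instantaneous laser field strength in V/cm.
    """
    a_a, z_a = 4, 2
    mu = a_a * a_d / (a_a + a_d) * AMU                # reduced mass (MeV/c^2)
    q_eff = (z_a * a_d - z_d * a_a) / (a_a + a_d)     # effective charge (units of e)
    e_field = field_v_per_cm * 1e-19                  # e*E in MeV/fm
    zz = z_d * z_a * E2                               # Z_d Z_alpha e^2 in MeV fm
    r_in = 1.15 * (a_d ** (1 / 3) + a_a ** (1 / 3))   # touching distance in fm
    pref = e_field * q_eff * cos_theta * np.sqrt(2.0 * mu) / HBARC
    b1 = 3.0 * np.pi / 8.0 * pref * zz ** 2
    b2 = 0.75 * pref * zz ** 1.5 * np.sqrt(r_in)
    b3 = 0.5 * pref * zz ** 0.5 * r_in ** 1.5
    return b1 * q_alpha ** -2.5 + b2 * q_alpha ** -2.0 + b3 * q_alpha ** -1.0

# ^144Nd -> ^140Ce + alpha at the ~8.7e12 V/cm peak field of a 10^23 W/cm^2 pulse:
# delta_p_first_order(1.901, 58, 140, 8.7e12)  # ~1e-3, the same order as in the Table above
```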
The reason Δ P stays positive is that the signs of B_1, B_2, and B_3 all follow the sign of E(t), while C_1 is always positive, so Δ P is positive whenever E(t) is positive. Moreover, Eq. (<ref>) can reproduce the calculation of the Gamow-like model well, and the standard deviation between these two methods is only 0.198. It can also be seen from Fig. <ref> that Δ P is negatively correlated with Q_α, which implies that a more pronounced instantaneous penetration probability change rate can be obtained in future experiments by using a parent nucleus with a smaller decay energy under the same laser conditions. Furthermore, this conclusion also explains why ^144Nd is the parent nucleus most sensitive to the high-intensity laser. Figure <ref> shows that the analytical solutions and numerical calculations match better for nuclei with larger proton numbers, which provides a fast way to estimate Δ P for superheavy nuclei. As a comparison, the rates of change of the α decay penetration probability for some of the parent nuclei given by Ref. <cit.>, whose calculations are also based on the WKB approximation, are listed in Tab. <ref>. In this table, the third column stands for the rate of relative change of the penetration probability Δ P for the identical parent nuclei obtained by Eq. (<ref>) in the case of the peak laser intensity I_0=10^26 W/cm^2 and θ=0. It can be seen from Table <ref> that our computed results are in good agreement with the values given in Ref. <cit.>, which indicates that our proposed analytical formula is plausible. It should be noted that although we can accelerate the α decay within a short laser pulse, the overall change in the α decay of nuclei affected by the laser is very insignificant, considering the long dark spans between laser shots. Therefore, obtaining the most significant change of α decay within one pulse becomes a key issue for future experiments. §.§ Parameter influences of laser wavelength and pulse width The theoretical calculations in the previous subsection showed that the instantaneous penetration probability change rate is inversely proportional to Q_α for a fixed electric field strength, or at a given moment t. However, the laser pulse has a duration and a specific profile, which may lead to different impacts of the electric field on Δ P at different moments. The effect of a complete laser pulse on α decay should be of more interest, i.e., the magnitude of the average rate of change of the α decay penetration probability Δ P_avg over the whole laser pulse. Since the laser electric field is a function of time and oscillates back and forth with the change of time, the influence of χ_φ^(1) in Eq. (<ref>) on the α decay penetration probability will be mostly cancelled out. This means that the average rate of change of the α decay penetration probability within one laser pulse will be much smaller than the maximum instantaneous rate of change of the penetration probability. To obtain a more significant average rate of change of the penetration probability, J. Qi et al. <cit.> and our previous work <cit.> proposed experimental schemes based on elliptically polarized laser fields and asymmetric chirped laser pulses, respectively. As seen from Eq. (<ref>), the rate of change of the penetration probability is related to the properties of the decaying parent nucleus and the laser pulse.
The properties of the nucleus itself can hardly be changed, while enhancing the laser intensity requires significant experimental effort. In the present work, we therefore studied the effects of the easily tunable laser wavelength and pulse width on Δ P_avg, i.e., the properties of the laser itself, and obtained some feasible methods that can boost Δ P_avg within a single laser pulse. To investigate how single laser pulses of different wavelengths affect Δ P_avg, we fixed the laser pulse width and calculated Δ P_avg for different nuclei for single pulses at wavelengths of 200 nm, 400 nm, 600 nm, and 800 nm. Here the theoretical model in Section <ref> is used for the calculation, with θ=0 and τ=5 T_0. For comparison, the peak intensities of the laser pulses were set to I_0=10^23 W/cm^2, 10^25 W/cm^2, and 10^27 W/cm^2. Figure <ref> shows the Δ P_avg of different nuclei under the influence of a single laser pulse of different wavelengths. The x-axis represents the mass number of the parent nucleus, and the y-axis represents Δ P_avg. Moreover, to better illustrate the wavelength variation at a constant laser pulse width, a schematic diagram of the laser electric field waveforms corresponding to laser wavelengths of 400 nm and 800 nm is given in the red box on the right side of Fig. <ref> (a). From this figure, one can clearly see that some of the parent nuclei have negative Δ P_avg values in the case of I_0=10^23 W/cm^2, and that fewer parent nuclei have negative Δ P_avg in the case of I_0=10^25 W/cm^2. Once the laser intensity reaches I_0=10^27 W/cm^2, all parent nuclei have positive Δ P_avg values. The reason is that the χ^(1) terms oscillate back and forth with time t and cancel each other out, while the χ^(2) terms, which are proportional to the square of the electric field strength, have little effect for a relatively weak laser. As the laser intensity increases, χ^(2), which is always positive, carries more weight in the rate of change of the penetration probability, resulting in a consistently positive average rate of change of the α decay penetration probability for all parent nuclei. Figure <ref> also shows that shorter wavelengths lead to a larger Δ P_avg for most of the nuclei, which means that shorter-wavelength laser pulses are preferable for enhancing the average rate of change of the penetration probability in future experiments. We set different laser pulse widths by adjusting the x in Eq. (<ref>). Here, θ is the same as in Fig. <ref>, and the laser wavelength used for the calculation is fixed at 800 nm. In the present work, the x value describing the pulse width is set from 1 to 5, and a larger x value corresponds to a wider pulse and a longer laser pulse duration. The peak intensities of the laser pulses were again set to I_0=10^23 W/cm^2, 10^25 W/cm^2, and 10^27 W/cm^2. Figure <ref> shows the Δ P_avg of different nuclei under the influence of single laser pulses of different pulse widths. The x-axis and y-axis have the same meaning as in Fig. <ref>. The schematic diagram of the laser electric field waveforms with x ranging from 1 to 5 is given in the red box on the right side of Fig. <ref> (a).
It is evident from these figures that the effect of the change in pulse width on the average rate of change of the α decay penetration probability is negligible compared to the adjustment of wavelength. A slight Δ P_avg boost can be obtained at low peak laser pulse intensities using long pulse widths for most parent nuclei. There is almost no difference in the average rate of change of the α decay penetration probability corresponding to different pulse widths in the case of high laser pulse intensity. Although a tuned laser pulse width has limited influence on Δ P_avg, we can improve Δ P_avg by tuning the wavelength, e.g., short-wavelength X-ray or free-electron laser. § SUMMARY In summary, we systematically study the laser-assisted α decay of the deformed ground state even-even nucleus and aim to obtain achievable quantitative evaluations of the laser influences on α decay. The calculations show that the α decay penetration probability and the α decay half-life of different parent nuclei have different rates of change under the influence of laser intensity of 10^23 W/cm^2, and the rate of change ranges from 0.0009% to 0.135%. Moreover, we obtained analytical formulas for the rate of change of the α decay penetration probability in the ultra-intense laser fields. We also found that the decay energy is negatively related to the rate of change of the penetration probability. Finally, we investigated the effects of laser pulse width and wavelength on the average rate of change of the α decay penetration probability. The results show that using short-wavelength laser pulses in future experiments can obtain a more significant average rate of change of the α decay penetration probability. This work was supported by the National Key R&D Program of China (Grant No. 2018YFA0404802), National Natural Science Foundation of China (Grant No. 12135009), the Science and Technology Innovation Program of Hunan Province (Grant No. 2020RC4020), and the Hunan Provincial Innovation Foundation for Postgraduate (Grant No. CX20210007). 87 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Geesaman et al.(2006)Geesaman, Gelbke, Janssens, and Sherrill]doi:10.1146/annurev.nucl.55.090704.151604 author author D. Geesaman, author C. Gelbke, author R. Janssens, and author B. Sherrill, 10.1146/annurev.nucl.55.090704.151604 journal journal Annual Review of Nuclear and Particle Science volume 56, pages 53 (year 2006), http://arxiv.org/abs/https://doi.org/10.1146/annurev.nucl.55.090704.151604 https://doi.org/10.1146/annurev.nucl.55.090704.151604 NoStop [Hofmann and Münzenberg(2000)]RevModPhys.72.733 author author S. Hofmann and author G. Münzenberg, 10.1103/RevModPhys.72.733 journal journal Rev. Mod. Phys. volume 72, pages 733 (year 2000)NoStop [Pfützner et al.(2012)Pfützner, Karny, Grigorenko, and Riisager]RevModPhys.84.567 author author M. Pfützner, author M. Karny, author L. V. Grigorenko, and author K. Riisager, 10.1103/RevModPhys.84.567 journal journal Rev. Mod. Phys. volume 84, pages 567 (year 2012)NoStop [Andreyev et al.(2013)Andreyev, Huyse, Van Duppen, Qi, Liotta, Antalic, Ackermann, Franchoo, Heßberger, Hofmann, Kojouharov, Kindler, Kuusiniemi, Lesher, Lommel, Mann, Nishio, Page, Streicher,  ŠŠáro, Sulignano, Wiseman, and Wyss]PhysRevLett.110.242502 author author A. N. Andreyev, author M. Huyse, author P. 
Van Duppen, author C. Qi, author R. J. Liotta, author S. Antalic, author D. Ackermann, author S. Franchoo, author F. P. Heßberger, author S. Hofmann, author I. Kojouharov, author B. Kindler, author P. Kuusiniemi, author S. R. Lesher, author B. Lommel, author R. Mann, author K. Nishio, author R. D. Page, author B. Streicher, author i. c. v.  ŠŠáro, author B. Sulignano, author D. Wiseman, and author R. A. Wyss, 10.1103/PhysRevLett.110.242502 journal journal Phys. Rev. Lett. volume 110, pages 242502 (year 2013)NoStop [Kalaninová et al.(2013)Kalaninová, Andreyev, Antalic, Heßberger, Ackermann, Andel, Drummond, Hofmann, Huyse, Kindler, Lane, Liberati, Lommel, Page, Rapisarda, Sandhu,  ŠŠáro, Thornthwaite, and Van Duppen]PhysRevC.87.044335 author author Z. Kalaninová, author A. N. Andreyev, author S. Antalic, author F. P. Heßberger, author D. Ackermann, author B. Andel, author M. C. Drummond, author S. Hofmann, author M. Huyse, author B. Kindler, author J. F. W. Lane, author V. Liberati, author B. Lommel, author R. D. Page, author E. Rapisarda, author K. Sandhu, author i. c. v.  ŠŠáro, author A. Thornthwaite, and author P. Van Duppen, 10.1103/PhysRevC.87.044335 journal journal Phys. Rev. C volume 87, pages 044335 (year 2013)NoStop [Ma et al.(2015)Ma, Zhang, Gan, Yang, Yu, Jiang, Wang, Tian, Wang, Guo, Ding, Ren, Zhou, Zhou, Xu, and Xiao]PhysRevC.91.051302 author author L. Ma, author Z. Y. Zhang, author Z. G. Gan, author H. B. Yang, author L. Yu, author J. Jiang, author J. G. Wang, author Y. L. Tian, author Y. S. Wang, author S. Guo, author B. Ding, author Z. Z. Ren, author S. G. Zhou, author X. H. Zhou, author H. S. Xu, and author G. Q. Xiao, 10.1103/PhysRevC.91.051302 journal journal Phys. Rev. C volume 91, pages 051302 (year 2015)NoStop [Yang et al.(2015)Yang, Zhang, Wang, Gan, Ma, Yu, Jiang, Tian, Ding, Guo, Wang, Huang, Sun, Wang, Zhou, Ren, Zhou, Xu, and Xiao]10.1140/epja/i2015-15088-9 author author H. B. Yang, author Z. Y. Zhang, author J. G. Wang, author Z. G. Gan, author L. Ma, author L. Yu, author J. Jiang, author Y. L. Tian, author B. Ding, author S. Guo, author Y. S. Wang, author T. H. Huang, author M. D. Sun, author K. L. Wang, author S. G. Zhou, author Z. Z. Ren, author X. H. Zhou, author H. S. Xu, and author G. Q. Xiao, 10.1140/epja/i2015-15088-9 journal journal Euro. Phys. J. A volume 51, pages 88 (year 2015)NoStop [Carroll et al.(2014)Carroll, Page, Joss, Uusitalo, Darby, Andgren, Cederwall, Eeckhaudt, Grahn, Gray-Jones, Greenlees, Hadinia, Jones, Julin, Juutinen, Leino, Leppänen, Nyman, O'Donnell, Pakarinen, Rahkila, Sandzelius, Sarén, Scholey, Seweryniak, and Simpson]PhysRevLett.112.092501 author author R. J. Carroll, author R. D. Page, author D. T. Joss, author J. Uusitalo, author I. G. Darby, author K. Andgren, author B. Cederwall, author S. Eeckhaudt, author T. Grahn, author C. Gray-Jones, author P. T. Greenlees, author B. Hadinia, author P. M. Jones, author R. Julin, author S. Juutinen, author M. Leino, author A.-P. Leppänen, author M. Nyman, author D. O'Donnell, author J. Pakarinen, author P. Rahkila, author M. Sandzelius, author J. Sarén, author C. Scholey, author D. Seweryniak, and author J. Simpson, 10.1103/PhysRevLett.112.092501 journal journal Phys. Rev. Lett. volume 112, pages 092501 (year 2014)NoStop [Gamow(1928)]gamow1928quantentheorie author author G. Gamow, 10.1007/BF01343196 journal journal Zeitschrift für Physik volume 51, pages 204 (year 1928)NoStop [Gurney and Condon(1928)]gurney1928wave author author R. W. Gurney and author E. U. 
Condon, 10.1038/122439a0 journal journal Nature volume 122, pages 439 (year 1928)NoStop [Astier et al.(2010)Astier, Petkov, Porquet, Delion, and Schuck]PhysRevLett.104.042701 author author A. Astier, author P. Petkov, author M.-G. Porquet, author D. S. Delion, and author P. Schuck, 10.1103/PhysRevLett.104.042701 journal journal Phys. Rev. Lett. volume 104, pages 042701 (year 2010)NoStop [Tohsaki et al.(2001)Tohsaki, Horiuchi, Schuck, and Röpke]PhysRevLett.87.192501 author author A. Tohsaki, author H. Horiuchi, author P. Schuck, and author G. Röpke, 10.1103/PhysRevLett.87.192501 journal journal Phys. Rev. Lett. volume 87, pages 192501 (year 2001)NoStop [Delion et al.(2004)Delion, Sandulescu, and Greiner]PhysRevC.69.044318 author author D. S. Delion, author A. Sandulescu, and author W. Greiner, 10.1103/PhysRevC.69.044318 journal journal Phys. Rev. C volume 69, pages 044318 (year 2004)NoStop [Karlgren et al.(2006)Karlgren, Liotta, Wyss, Huyse, VandeVel, and VanDuppen]PhysRevC.73.064304 author author D. Karlgren, author R. J. Liotta, author R. Wyss, author M. Huyse, author K. VandeVel, and author P. VanDuppen, 10.1103/PhysRevC.73.064304 journal journal Phys. Rev. C volume 73, pages 064304 (year 2006)NoStop [Kinoshita et al.(2012)Kinoshita, Paul, Kashiv, Collon, Deibel, DiGiovine, Greene, Henderson, Jiang, Marley, Nakanishi, Pardo, Rehm, Robertson, Scott, Schmitt, Tang, Vondrasek, and Yokoyama]doi:10.1126/science.1215510 author author N. Kinoshita, author M. Paul, author Y. Kashiv, author P. Collon, author C. M. Deibel, author B. DiGiovine, author J. P. Greene, author D. J. Henderson, author C. L. Jiang, author S. T. Marley, author T. Nakanishi, author R. C. Pardo, author K. E. Rehm, author D. Robertson, author R. Scott, author C. Schmitt, author X. D. Tang, author R. Vondrasek, and author A. Yokoyama, 10.1126/science.1215510 journal journal Science volume 335, pages 1614 (year 2012), http://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.1215510 https://www.science.org/doi/pdf/10.1126/science.1215510 NoStop [Wang et al.(2022)Wang, Liu, Lu, Chen, Long, Li, Chen, Chen, Bai, Li, Peng, Liu, Wu, Wang, Li, Xu, Liang, Leng, and Li]9894358 author author X. Wang, author X. Liu, author X. Lu, author J. Chen, author Y. Long, author W. Li, author H. Chen, author X. Chen, author P. Bai, author Y. Li, author Y. Peng, author Y. Liu, author F. Wu, author C. Wang, author Z. Li, author Y. Xu, author X. Liang, author Y. Leng, and author R. Li, 10.34133/2022/9894358 journal journal Ultrafast Science volume 2022, pages 9894358 (year 2022)NoStop [Strickland and Mourou(1985)]STRICKLAND1985219 author author D. Strickland and author G. Mourou, https://doi.org/10.1016/0030-4018(85)90120-8 journal journal Opt. Comm. volume 56, pages 219 (year 1985)NoStop [Yoon et al.(2019)Yoon, Jeon, Shin, Lee, Lee, Choi, Kim, Sung, and Nam]Yoon:19 author author J. W. Yoon, author C. Jeon, author J. Shin, author S. K. Lee, author H. W. Lee, author I. W. Choi, author H. T. Kim, author J. H. Sung, and author C. H. Nam, 10.1364/OE.27.020412 journal journal Opt. Express volume 27, pages 20412 (year 2019)NoStop [Yoon et al.(2021)Yoon, Kim, Choi, Sung, Lee, Lee, and Nam]Yoon:21 author author J. W. Yoon, author Y. G. Kim, author I. W. Choi, author J. H. Sung, author H. W. Lee, author S. K. Lee, and author C. H. 
Nam, 10.1364/OPTICA.420520 journal journal Optica volume 8, pages 630 (year 2021)NoStop [Li et al.(2018)Li, Gan, Yu, Wang, Liu, Guo, Xu, Xu, Hang, Xu, Wang, Huang, Cao, Yao, Zhang, Chen, Tang, Li, Liu, Li, He, Yin, Liang, Leng, Li, and Xu]Li:18 author author W. Li, author Z. Gan, author L. Yu, author C. Wang, author Y. Liu, author Z. Guo, author L. Xu, author M. Xu, author Y. Hang, author Y. Xu, author J. Wang, author P. Huang, author H. Cao, author B. Yao, author X. Zhang, author L. Chen, author Y. Tang, author S. Li, author X. Liu, author S. Li, author M. He, author D. Yin, author X. Liang, author Y. Leng, author R. Li, and author Z. Xu, 10.1364/OL.43.005681 journal journal Opt. Lett. volume 43, pages 5681 (year 2018)NoStop [Yu et al.(2018)Yu, Xu, Liu, Li, Li, Liu, Li, Wu, Yang, Yang, Wang, Lu, Leng, Li, and Xu]Yu:18 author author L. Yu, author Y. Xu, author Y. Liu, author Y. Li, author S. Li, author Z. Liu, author W. Li, author F. Wu, author X. Yang, author Y. Yang, author C. Wang, author X. Lu, author Y. Leng, author R. Li, and author Z. Xu, 10.1364/OE.26.002625 journal journal Opt. Express volume 26, pages 2625 (year 2018)NoStop [Mişicu and Rizea(2019)]Mi_icu_2019 author author Ş. Mişicu and author M. Rizea, 10.1088/1361-6471/ab1d7c journal journal J. Phys. G volume 46, pages 115106 (year 2019)NoStop [Tanaka et al.(2020)Tanaka, Spohr, Balabanski, Balascuta, Capponi, Cernaianu, Cuciuc, Cucoanes, Dancus, Dhal, Diaconescu, Doria, Ghenuche, Ghita, Kisyov, Nastasa, Ong, Rotaru, Sangwan, Söderström, Stutman, Suliman, Tesileanu, Tudor, Tsoneva, Ur, Ursescu, and Zamfir]doi:10.1063/1.5093535 author author K. A. Tanaka, author K. M. Spohr, author D. L. Balabanski, author S. Balascuta, author L. Capponi, author M. O. Cernaianu, author M. Cuciuc, author A. Cucoanes, author I. Dancus, author A. Dhal, author B. Diaconescu, author D. Doria, author P. Ghenuche, author D. G. Ghita, author S. Kisyov, author V. Nastasa, author J. F. Ong, author F. Rotaru, author D. Sangwan, author P.-A. Söderström, author D. Stutman, author G. Suliman, author O. Tesileanu, author L. Tudor, author N. Tsoneva, author C. A. Ur, author D. Ursescu, and author N. V. Zamfir, 10.1063/1.5093535 journal journal Matter and Radiation at Extremes volume 5, pages 024402 (year 2020), http://arxiv.org/abs/https://doi.org/10.1063/1.5093535 https://doi.org/10.1063/1.5093535 NoStop [Wang et al.(2021a)Wang, Zhou, Liu, and Wang]PhysRevLett.127.052501 author author W. Wang, author J. Zhou, author B. Liu, and author X. Wang, 10.1103/PhysRevLett.127.052501 journal journal Phys. Rev. Lett. volume 127, pages 052501 (year 2021a)NoStop [Lv et al.(2019)Lv, Duan, and Liu]PhysRevC.100.064610 author author W. Lv, author H. Duan, and author J. Liu, 10.1103/PhysRevC.100.064610 journal journal Phys. Rev. C volume 100, pages 064610 (year 2019)NoStop [Ghinescu and Delion(2020)]PhysRevC.101.044304 author author S. A. Ghinescu and author D. S. Delion, 10.1103/PhysRevC.101.044304 journal journal Phys. Rev. C volume 101, pages 044304 (year 2020)NoStop [von der Wense et al.(2020)von der Wense, Bilous, Seiferle, Stellmer, Weitenberg, Thirolf, and A. Palffy]V2020 author author L. von der Wense, author P. V. Bilous, author B. Seiferle, author S. Stellmer, author J. Weitenberg, author P. G. Thirolf, and author G. K. A. Palffy, 10.1140/epja/s10050-020-00177-x journal journal Euro. Phys. J. A volume 56, pages 176 (year 2020)NoStop [Mi şicu(2022)]PhysRevC.106.034612 author author i. m. c. Mi şicu, 10.1103/PhysRevC.106.034612 journal journal Phys. Rev. 
C volume 106, pages 034612 (year 2022)NoStop [Wang(2022)]PhysRevC.106.024606 author author X. Wang, 10.1103/PhysRevC.106.024606 journal journal Phys. Rev. C volume 106, pages 024606 (year 2022)NoStop [Bekx et al.(2022)Bekx, Lindsey, Glenzer, and Schlesinger]PhysRevC.105.054001 author author J. J. Bekx, author M. L. Lindsey, author S. H. Glenzer, and author K.-G. Schlesinger, 10.1103/PhysRevC.105.054001 journal journal Phys. Rev. C volume 105, pages 054001 (year 2022)NoStop [Qi et al.(2019)Qi, Li, Xu, Fu, and Wang]PhysRevC.99.044610 author author J. Qi, author T. Li, author R. Xu, author L. Fu, and author X. Wang, 10.1103/PhysRevC.99.044610 journal journal Phys. Rev. C volume 99, pages 044610 (year 2019)NoStop [Queisser and Schützhold(2019)]PhysRevC.100.041601 author author F. Queisser and author R. Schützhold, 10.1103/PhysRevC.100.041601 journal journal Phys. Rev. C volume 100, pages 041601 (year 2019)NoStop [Li and Wang(2021)]Li_2021 author author T. Li and author X. Wang, 10.1088/1361-6471/ac1712 journal journal Journal of Physics G: Nuclear and Particle Physics volume 48, pages 095105 (year 2021)NoStop [Liu et al.(2021)Liu, Duan, Ye, and Liu]PhysRevC.104.044614 author author S. Liu, author H. Duan, author D. Ye, and author J. Liu, 10.1103/PhysRevC.104.044614 journal journal Phys. Rev. C volume 104, pages 044614 (year 2021)NoStop [Lv et al.(2022)Lv, Wu, Duan, Liu, and Liu]Lv2020 author author W. J. Lv, author B. B. Wu, author H. Duan, author S. W. Liu, and author J. Liu, 10.1140/epja/s10050-022-00697-8 journal journal Euro. Phys. J. A volume 58, pages 54 (year 2022)NoStop [Wang(2020)]PhysRevC.102.011601 author author X. Wang, 10.1103/PhysRevC.102.011601 journal journal Phys. Rev. C volume 102, pages 011601 (year 2020)NoStop [Qi et al.(2020)Qi, Fu, and Wang]PhysRevC.102.064629 author author J. Qi, author L. Fu, and author X. Wang, 10.1103/PhysRevC.102.064629 journal journal Phys. Rev. C volume 102, pages 064629 (year 2020)NoStop [Feng et al.(2022)Feng, Wang, Fu, Chen, Tan, Li, Wang, Li, Zhang, Ma, and Zhang]PhysRevLett.128.052501 author author J. Feng, author W. Wang, author C. Fu, author L. Chen, author J. Tan, author Y. Li, author J. Wang, author Y. Li, author G. Zhang, author Y. Ma, and author J. Zhang, 10.1103/PhysRevLett.128.052501 journal journal Phys. Rev. Lett. volume 128, pages 052501 (year 2022)NoStop [Mişicu and Rizea(2013)]Mi_icu_2013 author author Ş. Mişicu and author M. Rizea, 10.1088/0954-3899/40/9/095101 journal journal J. Phys. G volume 40, pages 095101 (year 2013)NoStop [Delion and Ghinescu(2017)]PhysRevLett.119.202501 author author D. S. Delion and author S. A. Ghinescu, 10.1103/PhysRevLett.119.202501 journal journal Phys. Rev. Lett. volume 119, pages 202501 (year 2017)NoStop [Kis and Szilvasi(2018)]Kis_2018 author author D. P. Kis and author R. Szilvasi, 10.1088/1361-6471/aab0d5 journal journal J. Phys. G volume 45, pages 045103 (year 2018)NoStop [Bai et al.(2018)Bai, Deng, and Ren]BAI201823 author author D. Bai, author D. Deng, and author Z. Ren, https://doi.org/10.1016/j.nuclphysa.2018.05.004 journal journal Nucl. Phys. A volume 976, pages 23 (year 2018)NoStop [Pálffy and Popruzhenko(2020)]PhysRevLett.124.212505 author author A. Pálffy and author S. V. Popruzhenko, 10.1103/PhysRevLett.124.212505 journal journal Phys. Rev. Lett. volume 124, pages 212505 (year 2020)NoStop [Soylu and Evlice(2015)]SOYLU201559 author author A. Soylu and author S. 
Evlice, https://doi.org/10.1016/j.nuclphysa.2015.01.008 journal journal Nuclear Physics A volume 936, pages 59 (year 2015)NoStop [Ni and Ren(2011)]PhysRevC.83.067302 author author D. Ni and author Z. Ren, 10.1103/PhysRevC.83.067302 journal journal Phys. Rev. C volume 83, pages 067302 (year 2011)NoStop [Coban et al.(2012)Coban, Bayrak, Soylu, and Boztosun]PhysRevC.85.044324 author author A. Coban, author O. Bayrak, author A. Soylu, and author I. Boztosun, 10.1103/PhysRevC.85.044324 journal journal Phys. Rev. C volume 85, pages 044324 (year 2012)NoStop [Zdeb et al.(2013)Zdeb, Warda, and Pomorski]PhysRevC.87.024308 author author A. Zdeb, author M. Warda, and author K. Pomorski, 10.1103/PhysRevC.87.024308 journal journal Phys. Rev. C volume 87, pages 024308 (year 2013)NoStop [Cheng et al.(2019)Cheng, Chen, Deng, Wu, Li, and Chu]CHENG2019350 author author J.-H. Cheng, author J.-L. Chen, author J.-G. Deng, author X.-J. Wu, author X.-H. Li, and author P.-C. Chu, https://doi.org/10.1016/j.nuclphysa.2019.05.002 journal journal Nuclear Physics A volume 987, pages 350 (year 2019)NoStop [Guo et al.(2015)Guo, Bao, Gao, Li, and Zhang]GUO2015110 author author S. Guo, author X. Bao, author Y. Gao, author J. Li, and author H. Zhang, https://doi.org/10.1016/j.nuclphysa.2014.12.001 journal journal Nuclear Physics A volume 934, pages 110 (year 2015)NoStop [Zhang et al.(2006)Zhang, Zuo, Li, and Royer]PhysRevC.74.017304 author author H. Zhang, author W. Zuo, author J. Li, and author G. Royer, 10.1103/PhysRevC.74.017304 journal journal Phys. Rev. C volume 74, pages 017304 (year 2006)NoStop [Gon çalves and Duarte(1993)]PhysRevC.48.2409 author author M. Gon çalves and author S. B. Duarte, 10.1103/PhysRevC.48.2409 journal journal Phys. Rev. C volume 48, pages 2409 (year 1993)NoStop [Buck et al.(1990)Buck, Merchant, and Perez]PhysRevLett.65.2975 author author B. Buck, author A. C. Merchant, and author S. M. Perez, 10.1103/PhysRevLett.65.2975 journal journal Phys. Rev. Lett. volume 65, pages 2975 (year 1990)NoStop [Xu and Ren(2006a)]PhysRevC.74.014304 author author C. Xu and author Z. Ren, 10.1103/PhysRevC.74.014304 journal journal Phys. Rev. C volume 74, pages 014304 (year 2006a)NoStop [Xu and Ren(2005)]XU2005303 author author C. Xu and author Z. Ren, https://doi.org/10.1016/j.nuclphysa.2005.06.011 journal journal Nuclear Physics A volume 760, pages 303 (year 2005)NoStop [Dzyublik(2017)]Dzyublik2017 author author A. Y. Dzyublik, https://doi.org/10.5506/APhysPolBSupp.10.69 journal journal Acta Physica Polonica B Proceedings Supplement volume 10, pages 69 (year 2017)NoStop [Delion et al.(2015)Delion, Liotta, and Wyss]PhysRevC.92.051301 author author D. S. Delion, author R. J. Liotta, and author R. Wyss, 10.1103/PhysRevC.92.051301 journal journal Phys. Rev. C volume 92, pages 051301 (year 2015)NoStop [Delion et al.(2006)Delion, Peltonen, and Suhonen]PhysRevC.73.014315 author author D. S. Delion, author S. Peltonen, and author J. Suhonen, 10.1103/PhysRevC.73.014315 journal journal Phys. Rev. C volume 73, pages 014315 (year 2006)NoStop [Peltonen et al.(2008)Peltonen, Delion, and Suhonen]PhysRevC.78.034608 author author S. Peltonen, author D. S. Delion, and author J. Suhonen, 10.1103/PhysRevC.78.034608 journal journal Phys. Rev. C volume 78, pages 034608 (year 2008)NoStop [Xu and Ren(2006b)]PhysRevC.73.041301 author author C. Xu and author Z. Ren, 10.1103/PhysRevC.73.041301 journal journal Phys. Rev. 
C volume 73, pages 041301 (year 2006b)NoStop [Ismail et al.(2017)Ismail, Seif, Adel, and Abdurrahman]ISMAIL2017202 author author M. Ismail, author W. Seif, author A. Adel, and author A. Abdurrahman, https://doi.org/10.1016/j.nuclphysa.2016.11.010 journal journal Nuclear Physics A volume 958, pages 202 (year 2017)NoStop [REN and XU(2008)]doi:10.1142/S0217732308029885 author author Z. REN and author C. XU, 10.1142/S0217732308029885 journal journal Modern Physics Letters A volume 23, pages 2597 (year 2008), http://arxiv.org/abs/https://doi.org/10.1142/S0217732308029885 https://doi.org/10.1142/S0217732308029885 NoStop [Gurvitz and Kalbermann(1987)]PhysRevLett.59.262 author author S. A. Gurvitz and author G. Kalbermann, 10.1103/PhysRevLett.59.262 journal journal Phys. Rev. Lett. volume 59, pages 262 (year 1987)NoStop [Ni and Ren(2010)]PhysRevC.81.064318 author author D. Ni and author Z. Ren, 10.1103/PhysRevC.81.064318 journal journal Phys. Rev. C volume 81, pages 064318 (year 2010)NoStop [Xu and Ren(2008)]PhysRevC.78.057302 author author C. Xu and author Z. Ren, 10.1103/PhysRevC.78.057302 journal journal Phys. Rev. C volume 78, pages 057302 (year 2008)NoStop [Qi et al.(2014)Qi, Andreyev, Huyse, Liotta, Van Duppen, and Wyss]QI2014203 author author C. Qi, author A. Andreyev, author M. Huyse, author R. Liotta, author P. Van Duppen, and author R. Wyss, https://doi.org/10.1016/j.physletb.2014.05.066 journal journal Physics Letters B volume 734, pages 203 (year 2014)NoStop [Deng et al.(2020)Deng, Zhang, and Royer]PhysRevC.101.034307 author author J.-G. Deng, author H.-F. Zhang, and author G. Royer, 10.1103/PhysRevC.101.034307 journal journal Phys. Rev. C volume 101, pages 034307 (year 2020)NoStop [Deng and Zhang(2020)]PhysRevC.102.044314 author author J.-G. Deng and author H.-F. Zhang, 10.1103/PhysRevC.102.044314 journal journal Phys. Rev. C volume 102, pages 044314 (year 2020)NoStop [Deng and Zhang(2021a)]Deng_2021 author author J.-G. Deng and author H.-F. Zhang, 10.1088/1674-1137/abcc5a journal journal Chin. Phys. C volume 45, pages 024104 (year 2021a)NoStop [Deng and Zhang(2021b)]DENG2021136247 author author J.-G. Deng and author H.-F. Zhang, https://doi.org/10.1016/j.physletb.2021.136247 journal journal Physics Letters B volume 816, pages 136247 (year 2021b)NoStop [Cheng et al.(2022)Cheng, Li, and Yu]PhysRevC.105.024312 author author J.-H. Cheng, author Y. Li, and author T.-P. Yu, 10.1103/PhysRevC.105.024312 journal journal Phys. Rev. C volume 105, pages 024312 (year 2022)NoStop [Ismail et al.(2012)Ismail, Ellithi, Botros, and Abdurrahman]PhysRevC.86.044317 author author M. Ismail, author A. Y. Ellithi, author M. M. Botros, and author A. Abdurrahman, 10.1103/PhysRevC.86.044317 journal journal Phys. Rev. C volume 86, pages 044317 (year 2012)NoStop [Ni and Ren(2015)]NI2015108 author author D. Ni and author Z. Ren, https://doi.org/10.1016/j.aop.2015.03.001 journal journal Annals of Physics volume 358, pages 108 (year 2015), note school of Physics at Nanjing UniversityNoStop [Qian et al.(2011)Qian, Ren, and Ni]PhysRevC.83.044317 author author Y. Qian, author Z. Ren, and author D. Ni, 10.1103/PhysRevC.83.044317 journal journal Phys. Rev. C volume 83, pages 044317 (year 2011)NoStop [Stewart et al.(1996)Stewart, Kermode, Beachey, Rowley, Grant, and Kruppa]STEWART1996332 author author T. Stewart, author M. Kermode, author D. Beachey, author N. Rowley, author I. Grant, and author A. 
Kruppa, https://doi.org/10.1016/S0375-9474(96)00404-6 journal journal Nuclear Physics A volume 611, pages 332 (year 1996)NoStop [Buck et al.(1996)Buck, Johnston, Merchant, and Perez]PhysRevC.53.2841 author author B. Buck, author J. C. Johnston, author A. C. Merchant, and author S. M. Perez, 10.1103/PhysRevC.53.2841 journal journal Phys. Rev. C volume 53, pages 2841 (year 1996)NoStop [Möller et al.(2016)Möller, Sierk, Ichikawa, and Sagawa]MOLLER20161 author author P. Möller, author A. Sierk, author T. Ichikawa, and author H. Sagawa, https://doi.org/10.1016/j.adt.2015.10.002 journal journal Atomic Data and Nuclear Data Tables volume 109-110, pages 1 (year 2016)NoStop [Takigawa et al.(2000)Takigawa, Rumin, and Ihara]PhysRevC.61.044607 author author N. Takigawa, author T. Rumin, and author N. Ihara, 10.1103/PhysRevC.61.044607 journal journal Phys. Rev. C volume 61, pages 044607 (year 2000)NoStop [Ismail et al.(2003)Ismail, Seif, and El-Gebaly]ISMAIL200353 author author M. Ismail, author W. Seif, and author H. El-Gebaly, https://doi.org/10.1016/S0370-2693(03)00600-2 journal journal Physics Letters B volume 563, pages 53 (year 2003)NoStop [Gao-Long et al.(2008)Gao-Long, Xiao-Yun, and Zu-Hua]Gao_Long_2008 author author Z. Gao-Long, author L. Xiao-Yun, and author L. Zu-Hua, 10.1088/0256-307x/25/4/023 journal journal Chinese Physics Letters volume 25, pages 1247 (year 2008)NoStop [Morehead(1995)]doi:10.1063/1.531270 author author J. J. Morehead, 10.1063/1.531270 journal journal Journal of Mathematical Physics volume 36, pages 5431 (year 1995), http://arxiv.org/abs/https://doi.org/10.1063/1.531270 https://doi.org/10.1063/1.531270 NoStop [Mao et al.(2022)Mao, He, Gao, Zeng, Yun, Du, Lu, Sun, and Zhao]9760631 author author D. Mao, author Z. He, author Q. Gao, author C. Zeng, author L. Yun, author Y. Du, author H. Lu, author Z. Sun, and author J. Zhao, 10.34133/2022/9760631 journal journal Ultrafast Science volume 2022, pages 9760631 (year 2022)NoStop [Brabec et al.(1996)Brabec, Ivanov, and Corkum]PhysRevA.54.R2551 author author T. Brabec, author M. Y. Ivanov, and author P. B. Corkum, 10.1103/PhysRevA.54.R2551 journal journal Phys. Rev. A volume 54, pages R2551 (year 1996)NoStop [Chen et al.(2000)Chen, Liu, Fu, and Zheng]PhysRevA.63.011404 author author J. Chen, author J. Liu, author L. B. Fu, and author W. M. Zheng, 10.1103/PhysRevA.63.011404 journal journal Phys. Rev. A volume 63, pages 011404(R) (year 2000)NoStop [Kondev et al.(2021)Kondev, Wang, Huang, Naimi, and Audi]NUBASE2020 author author F. Kondev, author M. Wang, author W. Huang, author S. Naimi, and author G. Audi, @noop journal journal Chinese Physics C volume 45, pages 030001 (year 2021)NoStop [Huang et al.(2021)Huang, Wang, Kondev, Audi, and Naimi]CPC-2021-0034 author author W. Huang, author M. Wang, author F. Kondev, author G. Audi, and author S. Naimi, @noop journal journal Chinese Physics C volume 45, pages 030002 (year 2021)NoStop [Wang et al.(2021b)Wang, Huang, Kondev, Audi, and Naimi]CPC-2020-0033 author author M. Wang, author W. Huang, author F. Kondev, author G. Audi, and author S. Naimi, @noop journal journal Chinese Physics C volume 45, pages 030003 (year 2021b)NoStop [Dong et al.(2010)Dong, Zuo, Gu, Wang, and Peng]PhysRevC.81.064309 author author J. Dong, author W. Zuo, author J. Gu, author Y. Wang, and author B. Peng, 10.1103/PhysRevC.81.064309 journal journal Phys. Rev. C volume 81, pages 064309 (year 2010)NoStop
http://arxiv.org/abs/2307.01768v2
20230704151543
On the foundations of entropic cosmologies: inconsistencies, possible solutions and dead end signs
[ "Hussain Gohar", "Vincenzo Salzano" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2307.00673v1
20230702214630
ENN: A Neural Network with DCT-Adaptive Activation Functions
[ "Marc Martinez-Gost", "Ana Pérez-Neira", "Miguel Ángel Lagunas" ]
eess.SP
[ "eess.SP", "cs.LG", "cs.NE" ]
ENN: A Neural Network with DCT-Adaptive Activation Functions Marc Martinez-Gost, Ana Pérez-Neira, Miguel Ángel Lagunas August 1, 2023 ============================================================================================================= The expressiveness of neural networks highly depends on the nature of the activation function, although these are usually assumed predefined and fixed during the training stage. In this paper we present the Expressive Neural Network (ENN), a novel architecture in which the non-linear activation functions are modeled using the Discrete Cosine Transform (DCT) and adapted using backpropagation during training. This parametrization keeps the number of trainable parameters low, is appropriate for gradient-based schemes, and adapts to different learning tasks. This is the first non-linear model for activation functions that relies on a signal processing perspective, providing high flexibility and expressiveness to the network. We contribute insights into the explainability of the network at convergence by recovering the concept of bump, that is, the response of each activation function in the output space. Finally, through exhaustive experiments we show that the model can adapt to classification and regression tasks. ENN outperforms state-of-the-art benchmarks, providing up to a 40% gap in accuracy in some scenarios. Neural networks, adaptive activation functions, discrete cosine transform, explainable machine learning. § INTRODUCTION Function approximation is a fundamental problem across many domains, such as data analysis, control systems and communications. When the explicit function is not available but input-output data pairs are, the function can be uncovered by minimizing a criterion loss in a supervised setting. The problem increases in complexity when the function is non-linear, and many signal processing techniques have been developed for this case: for instance, least squares <cit.>, orthogonal function approximation <cit.>, kernel methods <cit.> and neural networks <cit.>, among others. The last decades have seen an unprecedented growth in the development of artificial neural networks for function approximation due to their empirical success. The inception of neural networks as universal approximators boosted their development across many fields and applications, with different architectures built according to the task and the data types to handle. Nevertheless, the expressiveness of a neural network is tied to its non-linear activation function, which is usually assumed fixed. An overlooked field of research is adaptive activation functions (AAF), where not only the weights of the neural network are trained, but the non-linearities too <cit.>. In our previous work <cit.> we introduced the Discrete Cosine Transform (DCT) to approximate a univariate non-linear function in a joint communication and computing setting. Further, in <cit.> we show how a gradient-based algorithm can be used to tune the DCT coefficients to approximate a function in a supervised setting. However, extending these results to multivariate functions is not trivial: the number of parameters required to approximate the function increases exponentially with the number of variables, and their corresponding indexes are unknown when the explicit function is not available.
This is, it is cumbersome how the top relevant coefficients can be learnt in a supervised fashion using labeled data. In this work we propose to extend the capabilities of the DCT with a novel neural network architecture that integrates the DCT to represent and adapt the activation functions. We call this architecture Expressive Neural Network (ENN). We exploit the fact that a 2-layer neural network can theoretically represent any function and expand the representation capabilities of the network by adapting the activation functions at each neuron. The advantage of approximating an univariate function with the DCT is twofold: a small number of coefficients is required due to its high energy compaction, and the approximation error is easily controlled by the magnitude of the disregarded coefficients. In this work we also show that the DCT coefficients can be learnt using backpropagation in a supervised fashion and in the same pass as the standard network linear weights. In this way, the architecture is no different from a standard feed-forward neural network with fixed activation functions. From the learning perspective, using the DCT to model the activation functions brings the following benefits: * Network size: The number of parameters in the network grows linear with the number of neurons, which is small due to energy compaction of DCT. * Backpropagation: The algorithm can be implemented because the DCT coefficients are real and ordered in decreasing magnitude. Besides, there exist analytical closed-form solutions for backpropagation. * Gradient behavior: The basis functions (cosines) are real and bounded, which prevents exploding gradients. Likewise, since the Fourier representation creates a periodic function that does not saturate, it also prevents vanishing gradients. * Task adaptability: The output non-linearity is automatically adapted depending on the task (e.g., classification or regression). While we provide a general formulation for multivariate real functions, we constraint the analysis to bivariate functions. This allows to visualize the results and intuitively understand how the network is adapting the activation functions. In this respect, we recover the concept of bump, which is the non-linear enclosure that each activation function generates in the output space <cit.>. The output space corresponds to a weighted sum of all bumps generated at the hidden layer. This concept allows to gain insights in how the network decides to exploit the periodic nature of the DCT model and create the boundaries for classification problems. In this work we focus on two general problems of function approximation, namely classification and regression. In the former there are two hypothesis associated to a function, this is, ℋ_0 when f(x_1,x_2)<0 and ℋ_1 otherwise. In regression the goal is to approximate the function f(x_1,x_2). The main contributions of this paper are described in the following: * We define ENN, a novel neural network architecture where the non-linear activation functions are parameterized by the DCT. This allows to adapt DCT coefficients in a supervised fashion and learn specific non-linearities according to the task. This results in a highly flexible and expressive architecture. * We provide insights in the field of explainable machine learning (XML). We recover the concept of bump, which allows to interpret how the non-linearities are adapted and what is the output response of the neural network. 
Furthermore, the concept allows to intuitively control several parameters (e.g., weight initialization). * We develop analytical closed-form expressions to adapt the non-linearities with backpropagation. While we choose the Least Mean Squares (LMS) algorithm to update the network parameters, the architecture remains a feed-forward neural network and any alternative algorithm can be used. * We provide extensive experiments in both classification and regression setups for which ENN outperforms all the benchmarks. In classification tasks, ENN outperforms fixed activation functions up to 40% in accuracy. With that we show how the expressiveness of network highly depends on the activation function, without the need of increasing the size of the network. * With respect to XML, we show how the DCT model helps to dimension the network width, this is, the number of hidden neurons. Particularly, the network converges to duplicated activation functions with opposed linear weights in the output layer. This is, the network cancels out the information coming from several branches when the task does not require that much expressiveness. The remainder of this paper is organized as follows. In Section II, we present a literature review on neural networks for function approximation. The Fourier models for non-linear representation are presented in Section III. We present ENN in Section IV and propose the learning procedure for supervised tasks in Section V. The simulation results are shown in Section VI and we conclude the paper in Section VII. Notation: Lowercase and uppercase bold symbols correspond to vectors and matrices, respectively; 𝐬[m] corresponds to the m-th entry of vector 𝐬; ℝ stands for the set of real numbers and ∇ for the gradient. § LITERATURE REVIEW The universal approximation theorem is a well-known result in mathematics stating that a 2-layer neural network can represent any continuous function with an arbitrary number of neurons <cit.>. The theory behind neural networks has been further developed, providing bounds on the number of required neurons. In a different line of research, the Kolmogorov-Arnold (KA) representation theorem shows how a multivariate function can be represented by functions of only univariate functions <cit.>: f(x_1,…,x_n) = ∑_i=1^2n+1Φ_i( ∑_j=1^nϕ_ij(x_j) ), where Φ_i and ϕ_j are termed the outer and inner functions, respectively. This result seems to be tightly connected to a 2-layer neural network, since the inner functions correspond to the hidden layer transformation and the outer functions to the output neuron. What is more, the inner functions do not depend on the function f to implement, which resembles the activation functions in neural network architectures. However, the formulation is not exactly identical: the KA representation requires n inner functions for each input variable, while a neural network implements a unique function for each linear combination of inputs. In this way, although there is extensive literature motivating the development of neural networks with the KA theorem <cit.>, there are still many gaps to be resolved. What is more, the theorem is not constructive and the functions in (<ref>) are highly non-linear. Despite its success in many applications, neural networks undergo several shortcomings that limit the interpretability of the results: optimization algorithms are easily trapped in local minima, convergence heavily depends on initialization and fail to converge when high non-linearities exist. 
Usually, each neuron in feedforward neural networks implements a linear combination and a non-linear mapping. The former is usually trained, while the latter remains fixed. The non-linear activation function allows to generate non-linear mappings, which increases the expressiveness of the network. A common choice is the sigmoid function, although it exhibits well-known issues in its implementation: since the sigmoid saturates at large magnitudes the propagated gradients vanish, slowing down the learning process. The rectified linear unit (ReLU) replaced the sigmoid activation function because it does not suffer from vanishing gradients and it is computationally efficient, which results in faster convergence. Despite its popularity, ReLU also experience several weaknesses that hinder the learning capacity of the architecture: the corresponding neuron can become dead when the output remains negative for a long time; likewise, the output is unbounded, which may produce the opposite effect of exploding gradient, affecting the stability of the network. Many variations and novel activation functions have been designed to enhance the performance of neural networks, although this highly depends on the application and the statistics of the data. In this respect, few authors have tackled the problem of adaptive activation functions (AAF), in which the non-linear mapping is also trained to enhance the expressiveness of the network. In <cit.> the saturation level and slope of an S-shaped activation function can be tuned independently, which increases the expressiveness with respect to the sigmoid function. In a different vein, several authors proposed to approximate an arbitrary activation function using cubic splines <cit.>. Later on, some authors introduced an activation function as a weighted sum of basis, such as sine, Gaussian or sigmoid <cit.>. More recently, the authors in <cit.> propose an asymmetric S-shaped activation function, but only for regression tasks. The issues derived from these perspective is that they either constraint the geometry of the activation function, or the parametrization is too complex. In this work we empirically prove that having pre-specified activation functions result in suboptimal performance. In the context of deep learning, several AAFs have been designed as well. In <cit.> the authors propose a gradient adaptive piecewise linear activation function as a sum of hinge-shaped functions. Although they theoretically prove that any piecewise linear function can be expressed as so, this constraints the function to be linear at both extremes. In other words, it does not prevent the neural network from exploding gradients. In <cit.> an S-shaped rectified linear activation function is proposed. While this model can learn non-linear and non-convex representations, it is not smooth and still constraints the activation to be defined in a piece-wise fashion. The authors in <cit.> develop a piecewise polynomial form which can approximate any continuous function. However, the exponential nature of the polynomials may increase the dynamic range of the parameters, which is a non-desirable property for gradient-based procedures. In <cit.>, the ReLU is substituted by an S-shaped activation, which slightly outperforms the ReLU and its variants in different deep learning tasks. In <cit.> different authors propose to model the activation as a sum of basis functions. In the following section we propose a Fourier-based parametrization for non-linear functions. 
This is the first approach to model AAFs from a signal processing perspective. The proposed model does not constraint the shape of the activation function and keeps the number of trainable parameters low while circumventing the shortcomings of the previously proposed non-linear models. § FOURIER MODELS FOR NON-LINEAR FUNCTIONS Consider a scalar univariate function f(x) to be approximated over the input variable range x∈[-1,1]. One of the most popular approximations for non-linear systems is the Volterra model. Given the function representation by the inverse Fourier transform, f(x) = 1/2π∫_-∞^∞ F(w)e^jwx dw, where w is the frequency variable and F(w) are the Fourier coefficients, the Volterra model is obtained by using the Taylor series expansion of the exponential family: f(x) = 1/2π∫_-∞^∞ F(w)(∑_n=0^∞(jw)^n/n!x^n) dw = ∑_n=0^∞( 1/2π1/n!∫_-∞^∞ F(w)(jw)^n dw ) x^n = ∑_n=0^∞ c_nx^n The building blocks of (<ref>) are power functions of the input variable, which generate a polynomial approximation. Notice that the coefficients c_n correspond to the n-th derivative at the origin divided by the factorial of the index. Volterra has not been widely used in practice because the exponential nature of the basis functions heavily increase the dynamic range, even when the input is bounded. Moreover, outside the dynamic range the approximation is unbounded and cannot be controlled. This happens because Volterra is a Taylor approximation and it presents high accuracy near the origin only. All these concerns make this approximation not suitable for gradient-based learning algorithms. One possible approach to solve these issues is the representation using a finite number of terms in the Fourier approximation in (<ref>), which corresponds to the Discrete Fourier Transform (DFT). Outside the dynamic range of x the function is periodically extended, generating discontinuities at the border. As a result, the number of coefficients required to approximate the function is highly sensitive to these discontinuities. In fact the oscillatory behavior around the discontinuity has an amplitude proportional to the number of continuous derivatives of f(x). To prevent this phenomenon the function can be extended with even symmetry and, then, periodically extended. This smooths out the edges and its corresponding derivatives, reducing the number of required coefficients with respect to the DFT. This is known as the Discrete Cosine Transform (DCT), which has the following expression: f(x) ≈∑_q=1^Q g_qF_q cos(π (q-1)(2z+1)/2N), with z=N/2(1+x), g_1=1/√(N) and g_q=√(2/N) otherwise. The F_q∈ℝ are termed the DCT coefficients. Notice that for functions with odd symmetry, only the odd coefficients are retained. In general, the quality of approximation is more than sufficient for Q=12 (i.e., 6 coefficients in odd functions). For the sake of brevity, the following definition will be used when needed: cos_i(x) = cos( π/2N(2i-1)(N(x+1)+1 ) ) In <cit.> we propose an adaptive design in which the DCT coefficients of (<ref>) are tuned using the LMS algorithm in a supervised setting to approximate an univariate function. There are plenty of advantages in using the DCT representation: The required coefficients to provide the same quality are fewer and these are real and ordered in decreasing magnitude. Since the basis functions are orthogonal, the approximation error can be easily controlled by the magnitude of the disregarded coefficients, and it simplifies the convergence of the learning procedure. 
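As a concrete illustration of this energy compaction, the short sketch below (our own example, not code from the paper) samples a generic S-shaped non-linearity on [-1,1], computes its DCT coefficients with SciPy, and reconstructs it from only the first Q coefficients using the cosine expansion above. The test function tanh(3x), the choice N=64, and the helper names dct_coefficients and dct_expansion are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def dct_coefficients(func, N=64):
    """Orthonormal DCT-II coefficients F_q of func sampled at x_n = 2n/N - 1 (i.e., z = n)."""
    x = 2.0 * np.arange(N) / N - 1.0
    return dct(func(x), norm="ortho")

def dct_expansion(F, x, Q=12):
    """Evaluate sum_{q=1}^{Q} g_q F_q cos(pi (q-1)(2z+1)/(2N)) with z = N/2*(1+x)."""
    N = len(F)
    z = N / 2.0 * (1.0 + np.asarray(x, dtype=float))
    q = np.arange(Q)                                   # q-1 in the text's indexing
    g = np.full(Q, np.sqrt(2.0 / N))
    g[0] = np.sqrt(1.0 / N)
    basis = np.cos(np.pi * q * (2.0 * z[..., None] + 1.0) / (2.0 * N))
    return basis @ (g * F[:Q])

sigma = lambda t: np.tanh(3.0 * t)          # a generic S-shaped activation (illustrative)
F = dct_coefficients(sigma, N=64)
x = np.linspace(-1.0, 1.0 - 2.0 / 64, 200)  # evaluate inside the sampled interval
for Q in (2, 6, 12):
    err = np.max(np.abs(dct_expansion(F, x, Q) - sigma(x)))
    print(f"Q = {Q:2d}   max reconstruction error = {err:.4f}")
```

Increasing Q makes the truncated expansion track the target more closely, while the discarded coefficients bound the residual error, which is the property exploited throughout this section.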
Furthermore, see that the index appears in the phase of (<ref>), so the approximation is real and bounded, even when the input exceeds the dynamic range. All these features make the DCT an appropriate function approximation, whose coefficients can be learnt by a gradient-based rule. §.§ Extension to several inputs with the DCT The extension of the DCT representation to multiple variables is not trivial. Consider a bivariate function, that could be parameterized by a 2-dimensional (2D) DCT as f(x_1,x_2) = ∑_n=1^N∑_m=1^N F_nmcos_n(x_1)cos_m(x_2) ≈∑_n=1^Q∑_m=1^Q F_nmcos_n(x_1)cos_m(x_2) Note that this architecture has the following drawbacks: In general the number of coefficients grows exponentially with the number of inputs M_0 as Q^M_0, along with the complexity of the DCT in several dimensions; while the coefficients in the DCT are ordered in decreasing magnitude, the structure of the indexes is broken in the 2D-DCT. This is, the location (n,m) of the relevant coefficients changes with the function of interest; this implies that, unless the function is known a priori, the indexes (n,m) are unknown and a supervised learning procedure is hard to implement. This happens for instance in classification problems, where the function is unknown and to be discovered. In the following we will present ENN, a neural network architecture integrating the DCT in a single dimension and whose coefficients can be trained in a supervised fashion. § DCT-ADAPTIVE ACTIVATION FUNCTIONS A perceptron (or neuron) is a non-linear processor involving a weighted sum and a non-linear function. It is described by the following expressions: z = a_0 + ∑_i=1^M a_ix_i=𝐚^T[1 𝐱^T]^T z = N/2(z+1) ŷ = σ(z)≈∑_q=1^Q/2 F_qcos(π (2q-1)(2z+1)/2N) =∑_q=1^Q/2 F_qcos_q(z) In (<ref>), 𝐱 contains M_0 inputs and a 1 is appended for the bias term a_0. The normalization in (<ref>) is needed to map the input to [0,N], assuming the input is confined to [-1,1]. Nevertheless, as it will be seen later, the first linear transformation may map z to a different range, providing expressiveness to the network. Finally, the non-linear activation function σ(·) is approximated by the DCT with Q/2 coefficients. Without loss of generality, g_q is assumed to be integrated in the DCT coefficient F_q. In (<ref>) we assume the non-linearity to have odd symmetry, so that only the odd coefficients are retained. As it will be explained later on in Sec. <ref> and shown empirically in Sec. <ref>, this does not prevent the network from learning only odd activation functions. A total of M_0+Q/2+1 parameters per perceptron are to be trained, which only represents an increment of Q/2 coefficients with respect to a standard perceptron with a fixed activation functions. As mentioned in the previous section, Q is small due to the energy compaction property of the DCT (e.g., around Q/2=6 coefficients). Fig. <ref> shows the block diagram of the perceptron. The normalization is not shown and assumed to be a part of the non-linear transformation σ(·). [Linear discriminant] Assume M_0=2, we want to discriminate when x_1>x_2. This can be framed as a classification problem, for which we can take 𝐚=a[0 1 -1]^T, where a is a positive constant. Then, any odd monotone increasing function will discriminate the two hypothesis. For the sake of simplicity, take a=1 and the non-linearity approximated with only one coefficient (i.e., F_1=-1). 
This results in z = x_1-x_2 z = N/2(x_1-x_2+1) ŷ = -cos(π (2z+1)/2N)≈sin(π/2(x_1-x_2)) The solution in (<ref>) provides a soft-decision, whereas a hard-decision would take the sign, i.e., sign(ŷ). While there are infinite solutions for this example, the proposed scheme only increases the complexity by one extra coefficient (i.e., F_1). Implementing a linear σ(·) would require many coefficients and provide no further benefit in terms of performance. In Example <ref> there are infinite minimums, all optimal. Nevertheless, in more complex problems (e.g., high-order discriminants) the expressiveness of a single perceptron is not enough. In light of the universal approximation theorem, in the following we will increase the capabilities of the network by including a hidden layer of several neurons. This architecture, termed ENN, will be fully-adaptive as both the linear weights and the activation functions will be trained in a supervised fashion. §.§ Expressive Neural Network (ENN) As a single layer perceptron limits the capabilities of the network, we will include another processing layer to increase its expressiveness. For a general multi-layer perceptron of L layers, the following expressions show the perceptron signals at the l-th layer: 𝐳_l = 𝐀_l^T[1 𝐬_l-1^T]^T 𝐳_l = N/2(𝐳_l+1) 𝐬_l = σ_l,m (𝐳_l) for l=1,…,L. The first input and last output correspond to 𝐬_0=𝐱 and ŷ=𝐬_L, respectively. The matrix of linear weights is 𝐀_l=[𝐚_l^(1) … 𝐚_l^(M_l)] and M_l stands for the number of perceptrons at layer l, with M_0 being the number of inputs. The operations in (<ref>) and (<ref>) are performed element-wise. The AAF in the ENN, this is, the m-th element of (<ref>) is computed as 𝐬_l[m] = ∑_q=1^Q/2 F_q,l^(m)cos_q(𝐳_l[m]), where F_q,l^(m) corresponds to the q-th coefficient of the m-th perceptron at the l-th layer. As explicitly shown in (<ref>) and (<ref>) the activation function is not necessarily the same at each neuron, although the number of DCT coefficients Q is kept constant for the whole network. The last activation function could be substituted by an step function in the classification setup and a linear for regression. However, this will be maintained so that the network can self-adapt depending on the nature of the problem. As opposed to the M_0-dimensional DCT in Sec. <ref>, the proposed architecture does not require implementing the DCT in M_0 dimensions and can be trained in a supervised fashion. Furthermore, the number of coefficients grows linearly as M_0Q. Throughout the rest of the paper we assume a two input variable, namely M_0=2, and L=2 layers. These layers are sequentially termed hidden and output layer. Fig. <ref> shows the architecture of the two-layer perceptron with M_1 neurons in the hidden layer and expression (<ref>) shows the input-output relationship. §.§ Expressiveness of periodic activation functions As it will be shown in the experimental results, the expressiveness of the DCT comes from its periodic nature. Fig. <ref> shows the sigmoid function and the corresponding DCT representation in the [-1,1] range using Q/2=6 coefficients. While the sigmoid function saturates outside the range, the DCT approximation offers a periodic infinite non-linearity. This offers more capacity to the activation function, as the network may choose to work at an increasing range (e.g., [-1,0]), decreasing range (e.g., [-3,-1]), bump range (e.g., [-1,3]), valley range (e.g., [-3,1]), or even with several periods simultaneously (e.g., [-5,3]). 
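To make the preceding architecture concrete, the following minimal NumPy sketch implements the forward pass of the two-layer ENN: the linear combination with bias, the mapping to [0,N], and the per-neuron DCT-parameterized activation with Q/2 odd coefficients. The class structure, the value N=64, the random seeds, and the initialization scales are our own illustrative choices and do not reproduce the initialization scheme used in the experiments.

```python
import numpy as np

def cos_basis(z_bar, Q, N=64):
    """Odd DCT basis cos(pi (2q-1)(2*z_bar+1)/(2N)) for q = 1..Q/2, per neuron."""
    q = np.arange(1, Q // 2 + 1)
    return np.cos(np.pi * (2 * q - 1) * (2.0 * z_bar[..., None] + 1.0) / (2.0 * N))

class ENNLayer:
    """One ENN layer: linear combination plus per-neuron DCT-parameterized activation."""
    def __init__(self, n_in, n_out, Q=12, N=64, seed=0):
        rng = np.random.default_rng(seed)
        self.N, self.Q = N, Q
        self.A = rng.uniform(-0.5, 0.5, size=(n_in + 1, n_out))   # row 0 holds the biases
        self.F = rng.normal(0.0, 0.1, size=(n_out, Q // 2))        # trainable DCT coefficients

    def forward(self, s_prev):
        s = np.hstack([np.ones((s_prev.shape[0], 1)), s_prev])
        z = s @ self.A                         # linear combination with bias
        z_bar = self.N / 2.0 * (z + 1.0)       # map to [0, N]
        # each neuron applies its own activation: sum_q F_q * cos_q(z_bar)
        return np.einsum("bmq,mq->bm", cos_basis(z_bar, self.Q, self.N), self.F)

# a 2 -> 6 -> 1 ENN evaluated on a small random batch
hidden, output = ENNLayer(2, 6, seed=1), ENNLayer(6, 1, seed=2)
x = np.random.default_rng(3).uniform(-1.0, 1.0, size=(4, 2))
print(output.forward(hidden.forward(x)).ravel())
```

Only the forward computation is shown here; training adapts both the linear weights A and the coefficients F, as described in the next section.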
While imposing odd symmetry in the DCT representation allows to have a continuous derivative, it does not constraint the resulting activation function to be odd. This Fourier model for non-linearities is the first one to provide such flexibility to activation functions, which is not possible with a fixed non-linearity not implemented with the DCT. To understand how the non-linear activation function maps the input to the output space, we use the concept of bump. Consider the perceptron in Fig. <ref> with M_0=2. Notice that all input pairs satisfying the following equality are mapped to the same non-linear output σ(c): a_0 + a_1x_1 + a_2x_2=c, where c is a constant value. Fig. <ref> shows how expression (<ref>) corresponds to a line in the input space. Depending on the linear weights, only a certain range of the non-linearity is covered (or multiple ranges). In Fig. <ref> it shown the mapping σ(c). It also displays to what extent the function is used between dashed lines, this is, for a minimum and maximum c. Fig. <ref> shows the generated bump, by evaluating all the input data pairs through the linear transformation and the non-linear mapping. § SUPERVISED LEARNING §.§ Backpropagation rules Consider a dataset 𝒟, consisting of (𝐱_i,y_i)∈𝒟, where the former is a M_0-sized vector training sample and the latter is the associated reference (i.e., label in classification, or function value in neuromorphic computing). Given y_i and ŷ_̂î, the error can be computed and propagated throughout the network to adjust the learnable parameters. We assume the loss to be the mean squared error (MSE), ε^2=(y_i-ŷ_i(𝐱_i))^2, for both classification and regression problems. Notice in (<ref>) we explicitly write the output as a function of the input data. Given a learnable parameter w, the chosen algorithm to update it is LMS. The update rule at a given iteration corresponds to w ← w - μ∇ε =w - με(- ∂ỹ/∂ w), this is, the parameter is updated by the product of the error by the gradient of the output with respect to the parameter. The hyperparameter μ is the step size, which controls the convergence speed of LMS. We choose the simplest algorithm to learn the parameters in the neural network because the focus of this work is on the architecture, not on the algorithm. In this respect, the experimental results show that even using the instantaneous gradient, LMS converges to a minimum and outperforms state-of-the-art models. Thus, the LMS may be replaced by any other algorithm to speed convergence, which is out of the scope of this work. Due to the same reason, we do not include the iteration index in the parameter updates. For the 2-layer architecture in Fig. <ref>, there are 2 set of parameters per layer that need to be updated, namely, the DCT coefficients and the weights from the linear transformation. Starting at the output, the DCT coefficients of the layer l=2 are updated as F_m,2← F_m,2 + με∂ỹ/∂ F_m,2= F_m,2 + 4α/Qεcos_m(z_2), for m=1,…,Q/2. The superscript in the parameter is omitted because there is only one perceptron at the output. In general, the step size is modeled as twice the mismatch α divided by the power of the corresponding input. Since the Q/2 cosines used to approximate the non-linearity are orthonormal, the total power is Q/2. This exhibits another advantage of this parametrization as the power remains always constant. 
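As a small illustration of this update, the fragment below (a sketch under the same conventions as the forward-pass example above) applies one instantaneous LMS step to the output-layer DCT coefficients, with the step size written as 4α/Q in line with the 2α-over-input-power normalization just described. The function name and the numerical values in the toy call are placeholders.

```python
import numpy as np

def lms_step_output_coeffs(F2, z2_bar, y, y_hat, alpha=1e-4, N=64):
    """One instantaneous LMS update of the output-layer DCT coefficients:
    F_{m,2} <- F_{m,2} + (4*alpha/Q) * eps * cos_m(z2_bar), with eps = y - y_hat."""
    Q = 2 * len(F2)                                   # F2 holds the Q/2 odd coefficients
    eps = y - y_hat
    m = np.arange(1, Q // 2 + 1)
    cos_m = np.cos(np.pi * (2 * m - 1) * (2.0 * z2_bar + 1.0) / (2.0 * N))
    return F2 + (4.0 * alpha / Q) * eps * cos_m

# toy usage with made-up scalars (illustrative only)
F2 = np.full(6, 0.1)              # Q/2 = 6 output-layer coefficients
print(lms_step_output_coeffs(F2, z2_bar=10.0, y=1.0, y_hat=0.3))
```

The remaining parameters of the network follow the same pattern, each multiplying the error by the corresponding partial derivative of the output, as detailed next.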
In order to update the linear weights from the output layer, the backpropagation procedure allows to propagate the error across the network with respect to previously computed derivatives, which makes it a very efficient algorithm. Therefore, these are updates as a_2[k]← a_2[k] + με∂ỹ/∂ a_2[k]= a_2[k] + με∂ỹ/∂ z_2∂z̃_̃2̃/∂ a_2[k] = a_2[k] - 2α/P_1επ/2s_1[k]∑_m=1^Q/2F_m,2(2m-1)cos_m(z_2), for k=0,…,M_1, and P_1=𝐬_1^T𝐬_1. The superscript has also been suppressed in this case. Accordingly, the DCT coefficients in the first layer are updated as F_q,1^(k)← F_q,1^(k) + με∂ỹ/∂ F_q,1^(k)= F_q,1^(k) + με∂ỹ/∂ z_2∂ z_2/∂ s_1[k]∂ s_1[k]/∂ F_q,1^(k) = F_q,1^(k) - 4α/Qεπ/2 a_2[k]cos_q(z_1[k])∑_m=1^Q/2F_m,2(2m-1)sin_m(z_2), for q=1,…,Q/2 and k=1,…,M_1. And the linear combination of the first layer is updates as a_1^(k)[m]← a_1^(k)[m] + με∂ỹ/∂ a_1^(k)[m] = a_1^(k)[m] + με∂ỹ/∂ z_2∂ z_2/∂ s_1[k]∂ s_1[k]/∂ z_1[k]∂ z_1[k]/∂ a_1^(k)[m] = a_1^(k)[m] + 2α/P_0επ^2/4 a_2[k]s_0[m] ∑_p=1^Q/2F_p,2(2p-1)sin_p(z_2) ∑_q=1^Q/2F_q,1^ (k)(2q-1)sin_q(z_1[k]), for m=1,…,M_0, k=1,…,M_1, and P_0=𝐬_0^T𝐬_0. Without loss of generality, s_0[0]=s_1[0]=1, which correspond to the bias terms in the linear combination at each layer. Notice that in all the update rules, the derivative exists and inherits the periodic nature of the cosine transform. To enhance the stability of the convergence, the power of each input, namely P_0 and P_1, can be computed with a dampening effect. For instance, P_0 ←β P_o + (1-β) 𝐬_0^T𝐬_0 at each iteration. The α parameter is set smaller in the output layer, which intuitively reduces the propagation of artifacts to previous layers during training. §.§ Initialization and Training Procedure Instead of learning all the parameters from scratch, we devise a sequence of training stages to speed up the convergence of ENN. For both classification and regression problems, the training procedure is split into two phases: * Phase 1: train the linear weights with fixed sigmoid activation functions. This is well-known to converge for a two-layer neural network. The difference with respect to the literature is that the activation function is defined using the DCT expression in (<ref>), this is, the function is periodic. In the hidden layer, the linear weights are initialized to generate bump diversity. These weights can be interpreted as the orientation of the activation functions in the output space. See Fig. <ref> with an example for M_1=4, with linear weights corresponding to [0,0,1], [0,1,0], [0,1,-1] and [0,-1,1] and a logarithmic compander approximated with Q=12 coefficients. Finally, the output layer is initialized with linear weights in a uniform distribution in [-0.5,0.5]. This allows to constraint the dynamics and show that the convergence does not depend on the initialization. * Phase 2: using the linear weights from Phase 1 as the initialization, train both the linear weights and the activation functions. § EXPERIMENTAL RESULTS In the following we will test the ENN for both binary classification and regression problems. As benchmarks, we will test also the following models, in which they all have the same architecture and differ in the activation functions: * ReLU: it uses a fixed non-trainable ReLU, this is, σ(z)=max{0,z}. * Sigm: it uses a fixed non-trainable sigmoid activation function. Notice that the output saturates for an input of large magnitude. * F-DCT: it uses a fixed non-trainable sigmoid activation function, but modeled with the DCT. Thus, the function does not saturate and it is periodic. 
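For reference, the three benchmark non-linearities can be written as simple Python functions (our own illustrative sketch); the F-DCT variant reuses the cosine basis and fitted sigmoid coefficients from the earlier sketch, so it is periodic rather than saturating.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)            # sigma(z) = max{0, z}

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))      # saturates for inputs of large magnitude

def f_dct(z, F, cos_basis):
    # Fixed (non-trainable) DCT model of a sigmoid: periodic, never saturates.
    return sum(c * cos_basis(z, q) for q, c in enumerate(F, start=1))
```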
Although not an odd function, we include the ReLU because it is the standard activation function in current neural network architectures. The Sigm model is also included to compare it with F-DCT and assess the gains of a periodic activation function. All architectures are built with M_1=6 neurons in the hidden layer, and ENN with Q=12 parameters in all the non-linearities. However, recall that only Q/2 coefficients are different from zero because we impose odd symmetry. Regarding the LMS setup, for the non-linearities we set α=10^-3 in the hidden layer and α=10^-4 in the output layer. For the linear weights we set α=5·10^-5 in the hidden layer and α=5·10^-3 in the output layer. We keep the step-size constant for all neurons in the same layer. The dampening parameter is set to β=0.999 for both layers. For both classification and regression problems, a synthetic dataset is generated. All samples come from uniform distributions in the [-1,1] range for each input variable. The train and test sets contain 400.000 and 50.000 samples, respectively. For comparison fairness all the models are trained with the same sequences in all the setups. Notice that the output activation functions in ReLU and Sigm has to be specifically selected to be either sigmoid in classification or linear in regression. Conversely, in ENN it will be automatically adapted to approach the required function, which is an advantage of this architecture with respect to the benchmarks. §.§ Classification Table <ref> shows different classification problems that have been evaluated. It displays the ideal decision map along with the order of the discriminant. The metric chosen to compare the different models is the test accuracy, this is, the percentage of correctly classified samples in the test set. As expected, the complexity of the problem increases with the order of the discriminant. As seen in Example <ref>, a single neuron is sufficient to implement a linear discriminant. This goes to show that all models are capable of producing an almost perfect boundary. With no surprise, for quadratic and cubic discriminants all models are capable of finding the appropriate boundary. When moving to high-order classification problems is when the non-adaptive models are not capable of providing a satisfactory solution. Conversely, the ENN model manages to find a local minimum with high accuracy and with very consistent results for a wide variety of problems. Fig. <ref> shows the AAFs for the 8th classification problem. Notice that only two of out the six activation functions have been adapted, namely Fig. <ref> and Fig. <ref>. The corresponding bumps are shown in Fig. <ref>, where the different orientations are exploited to generate diversity and, ultimately, non-linear boundaries. Another advantage of ENN is that it allows to model the network width, this is, the required number of hidden neurons. The bumps in Fig. <ref> and <ref> are identical, while the corresponding linear weights in the output layer have the same magnitude and opposite sign. This means that the network neglects the information that comes from these two branches. This also happens for bumps in Fig. <ref> and <ref>. In conclusion, the network only needs two neurons to generate the non-linear boundary. However this is not the case for the non-adaptive models, which have a very small accuracy even with 6 neurons. ENN exploits the periodic nature of the Fourier representation to generate expressive non-linearities. Fig. 
<ref> shows the AAFs for the 7th problem, along with the corresponding bumps in Fig. <ref>. As expected, the architecture exploits the different orientations and periodic nature of the DCT. Notice that there is almost no drop in accuracy between the ring and the face classification problem, although the latter is more complex. This is due to the expressiveness of the DCT, which allows to generate several peaks per bump. In other words, by allowing the activation to be non-monotone it can be used to approximate the ring and both circles still with 6 neurons. Fig. <ref> shows the different decision boundaries for the 7th problem. Despite the accuracy of the non-adaptive models in Table <ref> being around 80%, the quality is very poor. Only the ENN boundary resembles the ideal map. The output activation function converges to the same configuration for all classification problems and it is shown in Fig. <ref>. As anticipated theoretically, the learnt AAF approaches a step function. The fact of constraining the model to Q=12 coefficients makes it infeasible to generate an ideal step function. Nevertheless, this solution is steeper than a standard sigmoid function, which reduces the error at data close to the boundary and increases the accuracy. §.§ Regression The network can also be trained for regression problems, this is, to approximate a function. Particularly, we compute bivariate scalar functions. Table <ref> shows the MSE achieved by the four different models for three different experiments. Fig. <ref> shows the different bumps for the regression problem (x_1^2+x_2^2)/2. Notice that this time the network decides to work at different ranges of the sigmoid function. Particularly, Fig. <ref> corresponds to a symmetric extension of the function, while Fig. <ref> is intrinsically a bump. As expected, the output activation is adapted to the learning task and different from classification. As seen in Fig. <ref> the activation function approaches a linear function (in [-0.25, 0.25]) for all regression problems. Notice that in a non-adaptive setting one needs to choose the last non-linearity according to the task. This represents an improvement with respect to non-adaptive architectures as it is general enough to encompass different tasks. §.§ Effect of the number of coefficients Increasing the number of coefficients Q in the DCT representation theoretically provides more expressiveness to the network. However, in practice, increasing the number of trainable parameters hinders the convergence of the algorithm because there are more local minima. Fig. <ref> shows the map for the 7th classification problem trained with Q={10,12,20} coefficients. As expected, the accuracy improves from 10 to 12 coefficients, but it does not improve beyond that, probably because the algorithm gets trapped at a local minimum. To prevent this from happening, we first train the network with Q=12 and, once it converges, we allow the network to train up to Q=20 coefficients. We term this procedure annealing because we slowly adapt the optimization space to avoid suboptimal local minima. Fig. <ref> shows the output AAF for the cases Fig. <ref> and <ref>. Increasing the number of coefficients allows to create a steeper transition, reducing the number of errors around the boundary. Doing so increases the Gibbs effect, although it produces no harm to the learning process because those samples are far from the boundary and always correctly classified as either 1 or -1. 
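A minimal sketch of this annealing schedule, under our reading of the procedure (the training function below is a hypothetical placeholder, not part of the original code):

```python
import numpy as np

def anneal_coefficients(F, q_new):
    """Zero-pad a trained coefficient vector (length Q/2) to q_new/2 entries,
    so previously learnt harmonics are kept and the new ones start inactive."""
    padded = np.zeros(q_new // 2)
    padded[: len(F)] = F
    return padded

# Hypothetical two-stage run: `train_enn` stands in for the supervised training loop.
# F12 = train_enn(Q=12)                                      # converge with 12 coefficients first
# F20 = train_enn(Q=20, init=anneal_coefficients(F12, 20))   # then refine with 20
```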
§ CONCLUSIONS In this paper we have presented ENN, a novel neural network architecture with adaptive activation functions. From a signal processing perspective, we use the DCT to design the non-linear functions, whose coefficients can be trained using backpropagation. The formulation is general enough to encompass different learning tasks; specifically, ENN is able to adapt the activation functions for both classification and regression problems. A two-step training procedure is also proposed, which speeds up convergence. We provide insights into the interpretability of the network by recovering the concept of the bump, that is, the response of each neuron in the output space. Through extensive experiments we determine that the expressiveness of a neural network depends strongly on the activation function. In particular, the key strength of ENN is the periodic nature of the DCT, which provides high accuracy for a wide range of non-linear classification problems, in some cases outperforming state-of-the-art fixed activation functions by up to 40% in accuracy.
http://arxiv.org/abs/2307.02731v1
20230706023104
Visualization Supporting Talent Portrait Generation:an Empirical Study
[ "Yuqing Fan", "Shenghui Cheng" ]
stat.AP
[ "stat.AP" ]
Visualization Supporting Talent Portrait Generation: an Empirical Study Yuqing Fan^1,2, Shenghui Cheng^1,2 ^1 Research Center for the Industries of the Future, Westlake University,Hangzhou,China ^2 School of Engineering,Westlake University,Hangzhou,China, ================================================================================================================================================================================================= In today’s era of scientific and technological advancements, the importance of talent resources is increasingly highlighted. This article will attempt to summarize the academic trajectories and successes of numerous scientists from both past and present, aiming to reproduce the correlation between scientists’ personal development and their academic output. Firstly, this article analyzes the life trajectories of researchers, visualizing their research accomplishments, collaborative partners, and research inheritance, and analyze based on the results. § INTRODUCTION The development of science is not only about the constant updating of scientific theories and technological means, but also the process of inheritance and development of scientific knowledge, scientific traditions, and scientific culture among generations of scientists. Scientists at the forefront of the world often have distinct teacher-student relationships. <cit.> once said in "General History of Chinese Academics": "The history of academia is derived from academia. Therefore, the nature of academia determines the nature of its history." With the emergence of talent shortages, brain drain, and structural contradictions in employment, the use of visual methods for human resource management has become increasingly popular in describing talent development. The "14th Five Year Plan" Big data Industry Development Plan proposes to build a prosperous and orderly industrial ecology and improve the level of Big data public services such as talent training. Many local governments, enterprises, and institutions also use visual and quantitative features to search for talents that meet their needs. In this article, we suggest combining visualization methods with researchers' career trajectories to analyze their careers from three perspectives: (1) research projects led by the researchers themselves, (2) collaborative projects in which the researchers participate, and (3) teacher-student relationships between researchers. In addition, this article will quantitatively study the output of researchers through longitudinal analysis and use it to predict the future development of researchers. § RESEARCH STATUS Currently, there is no paper in domestic and foreign research that uses visualization techniques to evaluate scientists themselves and quantitatively analyze their achievements at the same time. However, individual topics have been studied. For example, <cit.> evaluate the extent of researchers' involvement in various fields through an evaluation graph with eight dimensions, including social sciences, natural sciences, and engineering. Then they use the t-sne visualization method to evaluate the similarity between different researchers' studies. <cit.> integrate scientists in the field of geology with scientific discoveries, terms, and concepts, helping future geologists learn and research through spatial data and process diagrams. <cit.> use the Gartner model, business process diagrams, and T-type evaluation diagrams to evaluate the capabilities and research status of foreign data scientists. 
In terms of academic inheritance research, various departments of universities in the West have commemorative webpages for the school's inheritance. For example, the University of Manchester commemorates the school's history in the school history museum, including the research process and achievements of 25 Nobel Prize winners since the Victorian era, as well as the research achievements of famous researchers in the school's history. European higher education organizations also document a large number of great achievements and inventions, as well as famous researchers and their lineage, in famous universities in Europe since the establishment of the University of Bologna in 1088. There are also studies on academic inheritance topics in China. <cit.> discusses the academic inheritance of Chinese scientists from the perspectives of learning Western modern science and rediscovering the traditional academic values of China. Qualitative methods have become a popular method in recent years to evaluate the performance of candidates when applying. At present, the most popular method in China is the study of competency models. It is generally believed that competence is a personal condition and behavioral characteristic that directly affects work performance, referring to the deep-seated characteristics of individuals who can distinguish outstanding achievements from ordinary people in a certain job<cit.>. For example, <cit.> proposed a human resource performance management system based on personal competence. Every employee must possess corresponding abilities to achieve their job performance goals. In addition, <cit.>, <cit.>, and <cit.> have all discussed the practice and application of competency models from different perspectives. In addition to the competency model, <cit.> classified the current domestic intellectuals into six categories: basic research talents, applied research talents, technology development talents, achievement transformation talents, technology management service talents, and experimental technology talents. They also created a user tag based on all published journal, conference, and patent results, to evaluate a talent from different perspectives. Of course, there are also many domestic studies scoring talents in the form of Indexation. For example, Qiu Junping and Miao Wenting used the Hirsch index to evaluate researchers in the field of library and information science <cit.>. In addition to the h index, the Ht index proposed by <cit.>, the Prathap index proposed by <cit.>, and the RAC index proposed by <cit.> all try to evaluate talents in a quantitative way. § FRAMEWORK OF DATA VISUALIZATION Peter F. Drucker, the father of modern management, has the following opinion on Talent management: "Only measured talents can be effectively managed." Because of the small number of employees, the traditional Talent management system often relies on qualitative methods. However, as companies expand, people increasingly need more complex management methods. The commonly used quantitative analysis methods in human resources include comparative analysis, attribute analysis, and graphic analysis. Comparative analysis and comparison of data to demonstrate differences between different data groups; Attribute analysis focuses on studying the changes between things, tending towards traditional quantitative analysis; Graphic analysis utilizes graphics as a means to provide readers with a better reading experience<cit.>. 
In the past, human resource management in universities mainly focused on managing the current situation of human resources, lacking foresight. Currently, due to limited information, human resource services have become increasingly passive. <cit.> believe that there are five main problems in current human resource management in universities: the inability to achieve precise matching between positions and in-service personnel, as well as potential recruitment talents <cit.>; The quantification level of performance evaluation indicators is not high <cit.>; The information is limited and cannot be predicted; The phenomenon of brain drain is widespread <cit.>; Human resource management is too rigid to meet individual needs <cit.>. Therefore, this article proposes a framework map for analyzing data from researchers. For any government agency or enterprise or institution that wants to recruit, visual charts can be created, compared, and evaluated through the above methods. §.§ Data collection and clssification There are a large number of platforms at home and abroad that provide scientific research data for researchers, especially journals and related data. Many researchers themselves create personal interfaces or provide relevant personal information on school platforms. In addition, platforms such as CNKI, Google Academic, ResearchGate, and Web of Science all provide specific data on the authors, citations, and downloads of journal and conference papers. In the process of data collection, a common situation is that the publication time of the paper on the personal interface does not match the data platform. This is due to the fact that the publication process of the paper spans years, resulting in different information between the two. In this case, the publication time of the personal interface should prevail. The detailed framework for data collection is shown in Figure <ref>. There are also various ways to collect data on academic inheritance relationships: firstly, if the researcher's field is mathematics or computer science, they can be directly queried from websites such as Mathematical Genealogy Engineering. However, these websites only provide data from researchers related to the subject. If the research object does not belong to these fields, it should be collected through methods such as the researcher's personal interface, the school's paper database, and the departmental interface. After collecting the data, the data should be classified and all articles of the research object should be classified as research projects led by the researcher themselves and research projects participated by the researcher. This article believes that in most cases, if the first author of the article is the research subject, or if the corresponding author is the research subject and the first author is the school of the research subject, then the research is a research project led by the researcher himself, and vice versa, it is a scientific research project participated by the researcher. §.§ Framework for analysis of Journals The purpose of establishing a literature analysis framework in this article is to reflect the changes in the quality of papers while reflecting the number of papers published each year. Therefore, this article adopts a columnar line overlay chart to simultaneously include two sets of data. 
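As an illustration of the columnar-line overlay chart described above, the following matplotlib sketch (with made-up placeholder numbers, not data from this study) draws the yearly paper count as bars and the average impact factor as a line on a secondary axis.

```python
import matplotlib.pyplot as plt

# Illustrative placeholder values only; real counts come from the collected data.
years = [2018, 2019, 2020, 2021, 2022]
papers_per_year = [4, 6, 5, 6, 7]
avg_impact_factor = [2.3, 2.5, 2.8, 3.1, 3.6]

fig, ax_bars = plt.subplots(figsize=(6, 3))
ax_bars.bar(years, papers_per_year, color="lightsteelblue", label="papers per year")
ax_bars.set_ylabel("number of papers")

ax_line = ax_bars.twinx()   # overlay the impact-factor trend on a second axis
ax_line.plot(years, avg_impact_factor, color="darkred", marker="o",
             label="average impact factor")
ax_line.set_ylabel("average impact factor")

ax_bars.legend(loc="upper left")
plt.tight_layout()
plt.show()
```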
The innovation of this article lies in the addition of the label of more influential papers, taking into account data dimensionality reduction <cit.>, to demonstrate the proportion of outstanding papers by researchers in the total number of papers per year. Due to the different number of factors in journals of different disciplines, the proportion of high factor papers varies among disciplines. Taking statistics as an example, journals with factors ranging from 4 to 5 are usually considered influential. Therefore, excellent papers can include papers published in journals with high impact factors, as well as papers with high citations. However, overall, the proportion of excellent papers should be around 10-20%, and should not exceed 20%. In the future, the improvement direction of the literature analysis framework is to consider more elements of high-dimensional analysis <cit.>. §.§ Framework for analysis of co-authors The collaborators of researchers include all research projects that involve researchers, and this article analyzes these projects in two different ways. Firstly, this article analyzed the working institutions of all participants in these projects and conducted spatial data analysis on these institutions. Subsequently, the article analyzed the cooperation between participants in each project, except for the research subjects, and found the frequency of academic cooperation from two aspects. In the process of visualization, we should focus on the ethnic level of academic cooperation, and reflect the number of cooperation in the form of thick and thin lines. In the future, collaborators can also conduct clustering analysis of data texts <cit.> or multi factor analysis using different colors for labeling <cit.>. §.§ Framework for analysis of academic inheritance Academic inheritance analysis also includes two different visualization methods. The first part shows the Doctoral advisor of the research goal and the upward inheritance. In the process of visualization, it is necessary to collect the birth and death dates of each researcher on the inheritance tree, so as to build a complete Gantt chart. The second section displays the graduation destinations of all doctoral students under the guidance of the researchers, and displays them through spatial data. It should be noted that the visualization process should include students heading to work in the industry, and each student's position should be processed according to the latest position. §.§ Dealing with missing data Missing data is a very common phenomenon in data processing. Generally speaking, when there is a missing data, the most common approach is the data filling method, which converts a dataset with missing data into a complete dataset by filling in the mean, mode, or classification values such as 1 and -1 <cit.>. Within the framework of this article, common missing data values include: impact factors of journals that cannot be queried, graduates that cannot be queried, and the biographies of ancient scientists. Due to the importance of these values, the use of data filling method is not reasonable. Generally speaking, as long as the total missing rate of each data item is not higher than 5%, only complete data can be analyzed <cit.>. § DATA VISUALIZATION This section will introduce our visualization works to describe the efforts of scientists, we will estimate a faculty's work with 3 different perspectives: 1) Journals published in recent years, 2) Collaborators in the world and 3) Academic Inheritance. 
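The collaboration-frequency part of the co-author framework described above can be sketched in the same spirit with networkx; the names and counts below are placeholders, and edge thickness encodes how often two researchers co-author, as proposed in the framework.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Placeholder collaboration counts: (researcher_a, researcher_b, joint papers).
collaborations = [
    ("A", "B", 5), ("A", "C", 2), ("B", "C", 3), ("C", "D", 1), ("A", "D", 4),
]

G = nx.Graph()
for a, b, n in collaborations:
    G.add_edge(a, b, weight=n)

pos = nx.spring_layout(G, seed=0)
widths = [G[u][v]["weight"] for u, v in G.edges()]   # thicker line = more joint papers
nx.draw_networkx_nodes(G, pos, node_color="lightsteelblue")
nx.draw_networkx_labels(G, pos)
nx.draw_networkx_edges(G, pos, width=widths)
plt.axis("off")
plt.show()
```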
§.§ Analysis of Journals As shown in Figure <ref>, it displays all the papers published by Professor Richard Cook from the Department of Actuarial Science and Statistics at the University of Waterloo since 2009. The figure presents several significant papers published by Professor Cook, which are either published in high-impact factor journals or have gained a large amount of application. From the graph, it is not difficult to see that the number of articles published by Professor Cook has been around 5 to 6 per year since 2009. Among them, the number of publications clearly shows differences with troughs and peaks. The years 2011-2012 and 2016-2017 had a lower number of articles published. However, the average impact factor of the published articles has shown a gradual increase. It has remained stable at around 2.5 in the early years and has been increasing in recent years, reaching a peak of 4 in one year. This is related to the fact that journals in the field of statistics generally have lower impact factors. As a comparison, we selected Professor Richard Cook's doctoral graduate, Liqun Diao, for comparison. Figure <ref> shows the publication status of Liqun Diao's papers in the past six years. It is not difficult to see that as an assistant professor, Liqun Diao's number of published papers, more influential articles, and impact factors remain around 2-3. However, the data in recent years has also shown an upward trend. §.§ Analysis of co-authors As shown in Figure <ref>, it displays the other researchers that Professor Richard Cook from the Department of Actuarial Science and Statistics at the University of Waterloo has collaborated with since 2009. It is evident that as a professor at a Canadian university, Professor Cook's main collaborators come from schools and institutions in both Canada and the United States, with a small number also from schools and institutions in France. The next figure <ref> presents the exact people co-authored with Richard Cook. The result obvious reveals the distribution of members, the downer part of members are his students and colleagues within the University of Waterloo, the left hand part of the graph are another academic team from Aix-Marseille Univesity in France, and the upper right hand part reveals clear relation of several Chinese academics working in US universities. §.§ Analysis of academic inheritance Currently, there are multiple academic projects dedicated to reproducing the history of academic inheritance and tracing the lineage between scholars. For instance, the "Mathematics Genealogy Project" has organized the genealogy of scientists, primarily mathematicians, from ancient times to the present. This article aims to reproduce the connections between scholars from a visual perspective based on the information primarily sourced from the Mathematics Genealogy Project. As shown in Figure <ref>, an analysis of academic inheritance is conducted using Professor Ari Kauffman from Stony Brook University as an example. It is worth noting that the list includes many well-known names, such as Newton and Galileo. In fact, this is a common phenomenon discovered by the genealogy project, whereby after multiple tracing, one's academic lineage can be traced back to familiar names like Leibniz and Newton. In addition to analyzing academic heritage, this article also proposes a visual method for analyzing students' whereabouts. 
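The inheritance Gantt chart proposed in the framework section can be sketched as follows; the advisor chain and the years below are placeholders, not the actual lineage discussed above.

```python
import matplotlib.pyplot as plt

# Placeholder advisor chain: (name, birth year, death year or None if living).
lineage = [
    ("Advisor k",   1940, None),
    ("Advisor k-1", 1900, 1975),
    ("Advisor k-2", 1860, 1935),
]

fig, ax = plt.subplots(figsize=(6, 2))
for row, (name, born, died) in enumerate(lineage):
    end = died if died is not None else 2023      # living researchers run to the present
    ax.barh(row, end - born, left=born, color="steelblue")
    ax.text(born, row, f" {name}", va="center", color="white")
ax.set_yticks([])
ax.set_xlabel("year")
plt.tight_layout()
plt.show()
```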
Figure <ref> shows the whereabouts of 17 doctoral graduates supervised by Professor David Andrews Francis in the Department of Statistics at the University of Toronto. The figure shows the destinations chosen by the graduates, most of whom choose to teach at schools in the United States and Canada. Due to Professor Andrews' collaboration with British schools, some graduates have also chosen to teach at universities in the UK, while others have joined industry. § CONCLUSION This article presents a visual analysis of the research achievements of existing researchers and draws the following conclusions. Firstly, visual analysis of researchers is very important: by converting a large amount of data and information into charts, it can help universities, businesses, or governments understand various aspects of researchers more intuitively, including academic achievements, recent academic status, international cooperation, and student distribution. Secondly, as young researchers gradually mature and publish more results, universities should also make predictions about their future in order to establish a preliminary understanding and plan more wisely for the development of both schools and researchers. In the future, with the development of the Metaverse, Web 3.0 and other technologies <cit.>, the visual analysis of scientific researchers will display their research level and achievements even more intuitively and accurately.
http://arxiv.org/abs/2307.02243v1
20230705123529
Power-up! What Can Generative Models Do for Human Computation Workflows?
[ "Garrett Allen", "Gaole He", "Ujwal Gadiraju" ]
cs.HC
[ "cs.HC", "cs.AI" ]
[email protected] 0000-0003-4449-1510 [email protected] 0000-0002-8152-4791 0000-0002-6189-6539 [email protected] Delft University of Technology Delft Zuid-Holland Netherlands We are amidst an explosion of artificial intelligence research, particularly around large language models (LLMs). These models have a range of applications across domains like medicine, finance, commonsense knowledge graphs, and crowdsourcing. Investigation into LLMs as part of crowdsourcing workflows remains an under-explored space. The crowdsourcing research community has produced a body of work investigating workflows and methods for managing complex tasks using hybrid human-AI methods. Within crowdsourcing, the role of LLMs can be envisioned as akin to a cog in a larger wheel of workflows. From an empirical standpoint, little is currently understood about how LLMs can improve the effectiveness of crowdsourcing workflows and how such workflows can be evaluated. In this work, we present a vision for exploring this gap from the perspectives of various stakeholders involved in the crowdsourcing paradigm — the task requesters, crowd workers, platforms, and end-users. We identify junctures in typical crowdsourcing workflows at which the introduction of LLMs can play a beneficial role and propose means to augment existing design patterns for crowd work. Power-up! What Can Generative Models Do for Human Computation Workflows? Ujwal Gadiraju August 1, 2023 ======================================================================== § INTRODUCTION AND BACKGROUND Artificial intelligence (AI) research is being reinvigorated with current advances in large language models (LLMs). Since their inception, LLMs have increased in size, effectiveness, and applications. For instance, BERT <cit.>, initially trained for masked language prediction, has been applied to other domains such as neural ranking <cit.> and document classification <cit.>. OpenAI's[<https://www.openai.com/>] GPT family of models have been used in language tasks including goal-oriented dialogue <cit.>, patent claim generation <cit.>, and story generation <cit.>. The most recent GPT variant, ChatGPT <cit.>, has seen an explosive growth in popularity, indicating the potential for a promising future where LLMs are deployed as work assistants. Due to such powerful generative capability, more researchers have started exploring generative LLMs in work assistant roles. For example, powerful generative LLMs have shown human-comparable writing skills in story generation <cit.> and scientific writing <cit.>. LLMs have also exhibited promising assistive capability in complex tasks like coding <cit.>, drug discovery <cit.>, and question generation for education needs <cit.>. The common thread running through all variations in LLMs is the need of high quality data for training and evaluation. Crowdsourcing has been widely adopted in machine learning practice to obtain high-quality annotations by relying on human intelligence <cit.>. Crowdsourcing is a paradigm in which researchers or other stakeholders request the participation of a distributed crowd of individuals, who can contribute with their knowledge, expertise, and experience <cit.>. Such individuals, called crowd workers, are asked to complete a variety of tasks in return for monetary or other forms of compensation. Tasks are often decomposed into smaller atomic units and can vary in their purpose, including labelling images, editing text, or finding information on specific topics <cit.>. 
Tasks can be standalone, or organized as a series of smaller sub-tasks, depending on their overall complexity and the design choices made by requesters. More complex problems, such as software engineering or system design problems, require task workflows. Crowdsourcing workflows are distinct patterns that manage how large-scale problems are decomposed into smaller tasks to be completed by workers. The crowd-powered word processor Soylent applies the Find-Fix-Verify workflow to produce high-quality text by separating tasks into generating and reviewing text <cit.>. The Iterate-and-Vote workflow has been deployed in creating image descriptions, where workers are asked to write descriptions of images to assist those who are blind <cit.>. Subsequent voting tasks are used to decide on the optimal description. <cit.> introduce CrowdMR, which combines the Map-Reduce workflow with crowdsourcing to facilitate the solving of problems that require both human and machine intelligence, i.e., “AI-Hard" problems <cit.>. With CrowdForge, <cit.> provide a framework for crowdsourcing to support complex and interdependent tasks. The authors follow up with the tool CrowdWeaver <cit.> for managing complex workflows, supporting such needs as data sharing between tasks and providing monitoring tools and real-time task adjustment capability. Taking a more holistic look at workflows, <cit.> investigate the relationship between the need for adaptation and complex workflows within crowdsourcing, finding that the current state of crowdsourcing processes are inadequate for providing the necessary adaptation that complex workflows require. Within crowdsourcing, the role of LLMs can be envisioned as akin to a cog in a larger workflow. Typically, LLMs are used for supporting individual writing or classification tasks within a workflow, as previous examples expressed. Researchers are also exploring the application of LLMs in assisting crowd workers. <cit.> combine the generative power of GPT-3 and the evaluative power of humans to create a new natural language inference dataset that produces more effective models when used as a training set. In a similar vein, <cit.> introduce a “Generative Annotation Assistant" to help in the production of dynamic adversarial data collection, significantly improving the rate of collection. These works measure the effectiveness of the models and the individual tasks, yet there remains an open gap regarding the understanding of how LLMs improve the effectiveness of crowdsourcing workflows and how such workflows can be evaluated. In this work, we present a vision for exploring the gap from the stakeholders' perspectives, e.g., task requesters, crowd workers, and end-users. In so doing, we highlight the junctures of crowdsourcing workflows at which introducing LLMs can be beneficial. We also propose means to augment existing design patterns for crowd work. § INCORPORATING LARGE LANGUAGE MODELS IN CROWDSOURCING WORKFLOWS As LLMs are pre-trained on large text corpora, they show great capability in understanding context-specific semantics. When further fine-tuned for specific uses with additional, smaller datasets, highly effective and domain-targeted models can be produced. Additionally, some LLMs (e.g., BART <cit.>, GPT-3 <cit.>) are also good at generating responses to input queries, which can be fluent, human-like, and even professional. 
As it stands, LLMs have been effectively deployed within multiple domains such as medicine <cit.>, finance <cit.>, and others requiring commonsense reasoning <cit.>. As such, LLMs are an opportune and potentially very useful tool to use within crowdsourcing where domain knowledge may not always be available. While LLMs are effective in many ways, they are far from being perfect and come with drawbacks. Due to their black box neural backbone, LLMs suffer from a lack of transparency, which leads to difficulty in explaining how they achieve the performances they do <cit.>. Such opacity also makes it difficult to track the factual error of LLMs, which inhibits the potential for improving the models <cit.>. Further, language models are known to capture several different forms of biases <cit.>. Most existing LLMs tend to perform poorly on tasks that require commonsense knowledge <cit.>, which is a common practice for children. Last but not least, current language models achieve poor compositional generalization <cit.>, which is required for solving complex tasks. Noticeably, LLMs fall short in aspects that humans are good at, e.g., commonsense reasoning <cit.> and complex task planning <cit.>. Putting LLMs into practice requires either addressing or working within these limitations. §.§ The Lens of Complex Crowdsourcing Workflows LLMs can easily fit into existing crowdsourcing workflows. Take the Find-Fix-Verify workflow as an example. This workflow is well-suited for writing tasks, whether it be editing, revisions, or new content. Each step is an opportunity to include LLMs for improvements in the process. Let us take the example of revising a news story. During the “Find" stage, a workers would be tasked with reading the story and finding any errors, e.g., grammar, spelling, or false statements. Once these errors are identified, a new crowd of workers is recruited for the next stage: to “Fix" the errors. We are now left with an updated draft of the news story that has fewer errors than the initial draft. Which brings us to the final stage of the workflow, “Verify", where yet another group of workers validate the work of the prior groups. In this particular example, it is fairly clear where an LLM can be swapped for the workers at each stage. A retrieval or error classification LLM can be deployed for finding the errors, a generative LLM can be used to produce repaired text, and yet another classification LLM can finish it all off as the verifier. However, not all tasks take this form, or follow this particular workflow. Adapting other workflows, i.e., Iterate-and-Vote or MapReduce, can be done in a similar manner. Even so, adaptations such as these prompt the question: Once introduced, what are the effects of LLMs within crowdsourcing workflows for each stakeholder of the crowdsourcing process? On the surface, this appears like straightforward question. Crowdsourcing has many different stakeholders involved: the requesters, the workers, and the end-users. The impact of including an LLM into workflows has the potential to affect each stakeholder in different ways. From the perspective of the requester, the monetary cost of completing tasks will be reduced as potentially fewer workers will need to be recruited. The tasks may take less time to complete which will result in further monetary savings. A reduction in time to gather data, complete tasks, and/or a reduced need for workers may have a negative impact on the income flow for workers, however. 
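Returning to the Find-Fix-Verify news-story example above, one way to picture an LLM-augmented pass over a document is the sketch below; the stage functions are hypothetical placeholders that could be backed either by crowd tasks or by retrieval, generative, and classification models, and nothing here reflects a specific platform API.

```python
from typing import Callable, List

def find_fix_verify(text: str,
                    find: Callable[[str], List[str]],
                    fix: Callable[[str, str], str],
                    verify: Callable[[str, str], bool]) -> str:
    """Generic Find-Fix-Verify pass: each stage may be a crowd task or an LLM call."""
    for error in find(text):            # Find: flag spans with possible errors
        candidate = fix(text, error)    # Fix: propose a repaired version
        if verify(text, candidate):     # Verify: accept or reject the repair
            text = candidate
    return text

# Example wiring with trivial stand-ins; an LLM- or crowd-backed function
# would replace each lambda in a real deployment.
draft = "Teh quick brown fox"
cleaned = find_fix_verify(
    draft,
    find=lambda t: ["Teh"] if "Teh" in t else [],
    fix=lambda t, e: t.replace("Teh", "The"),
    verify=lambda old, new: new != old,
)
print(cleaned)   # -> "The quick brown fox"
```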
With available tasks taking less time and there being fewer tasks, it creates the potential for crowd workers to earn less. This can be offset by adjusting incentive structures on platforms. On the other hand, the reduction in costs for requesters could lead to more tasks being posted, leading to more high-quality labels. In turn, LLMs benefit from the better labels and improve in performance as well, creating a positive cycle that benefits both crowd workers and requesters. Further work is required to gain a better understanding of the financial opportunities and risks surrounding LLMs as part of crowdsourcing workflows. Of course, there are trade-offs that come alongside any benefits. The trade-off for the requesters is a learning curve around the LLMs. Time will need to be dedicated to strategize and familiarize with the integration of LLMs in workflows. A trade-off that crowdsourcing platforms will share, accompanied by the additional cost of the development to add the LLMs to their products. An LLM must be trained before it can be appropriately used within a crowdsourcing workflow. This training, or fine-tuning, creates an overhead for either the crowdsourcing platform or the requester. While the overhead is initially a burden for most stakeholders, there will be an efficiency gain in the long term. §.§ Risk and Opportunity Further consideration is needed regarding the transparency of LLMs versus humans. When crowd workers complete tasks, such as annotation or other decision-oriented varieties, requesters have the capability of performing a follow-up with the workers to elicit reasoning for the outcomes provided. This is not a simple job for LLMs. While there exist methods for model explainability <cit.>, none have demonstrated a level of effectiveness on par with what a requester would achieve with a human-human conversation. This same lack of transparency also has the potential of confounding workflows at the worker level. For example, take a scenario where an LLM is tasked with making a prediction, and a human worker to validate the prediction of the model, and the model provides a prediction that is not in line with what the worker expects to see. In such a scenario, the worker may want to interrogate the model to gain insight into why the prediction was made. However, there is currently no such clear way for the worker to request such an explanation from the LLM. Also worth considering is the concept of accountability. Whenever a machine is introduced into a system, be it a factory, an airplane, or a crowdsourcing workflow, the question of accountability requires definition. Adding LLMs into crowdsourcing workflows raises the question of who or what is accountable if things do not go according to plan? Is the model, the requester, the platform, or the crowd workers to be held responsible for mishaps? There are many questions around the benefits, viability, risks, and harms involved with introducing LLMs into crowdsourcing workflows. These questions provide rich research opportunities for the generative AI and human computation research communities. The realm of creative crowdsourcing tasks is another place of opportunity for LLMs. Generative models can help by providing suggestions or starting points to spark brainstorming or idea generation sessions. Alternatively, classification LLMs can be used to consolidate the ideas produced. For tasks that are more engineering or design focused, LLMs may be able to serve as “rubber duck" sounding boards. 
LLMs may also provide performance boosts in areas such as content creation, music composition, or protein discovery. The possibilities of how LLMs can be included in crowdsourcing are vast, yet the viability of these use cases warrants further investigation.
http://arxiv.org/abs/2307.03339v1
20230707004619
Open-Vocabulary Object Detection via Scene Graph Discovery
[ "Hengcan Shi", "Munawar Hayat", "Jianfei Cai" ]
cs.CV
[ "cs.CV" ]
Department of Data Science & AI, Monash University Australia [email protected] Department of Data Science & AI, Monash University Australia [email protected] Department of Data Science & AI, Monash University Australia [email protected] In recent years, open-vocabulary (OV) object detection has attracted increasing research attention. Unlike traditional detection, which only recognizes fixed-category objects, OV detection aims to detect objects in an open category set. Previous works often leverage vision-language (VL) training data (e.g., referring grounding data) to recognize OV objects. However, they only use pairs of nouns and individual objects in VL data, while these data usually contain much more information, such as scene graphs, which are also crucial for OV detection. In this paper, we propose a novel Scene-Graph-Based Discovery Network (SGDN) that exploits scene graph cues for OV detection. Firstly, a scene-graph-based decoder (SGDecoder) including sparse scene-graph-guided attention (SSGA) is presented. It captures scene graphs and leverages them to discover OV objects. Secondly, we propose scene-graph-based prediction (SGPred), where we build a scene-graph-based offset regression (SGOR) mechanism to enable mutual enhancement between scene graph extraction and object localization. Thirdly, we design a cross-modal learning mechanism in SGPred. It takes scene graphs as bridges to improve the consistency between cross-modal embeddings for OV object classification. Experiments on COCO and LVIS demonstrate the effectiveness of our approach. Moreover, we show the ability of our model for OV scene graph detection, while previous OV scene graph generation methods cannot tackle this task. Open-Vocabulary Object Detection via Scene Graph Discovery Jianfei Cai August 1, 2023 ========================================================== § INTRODUCTION Object detection is an important and fundamental problem in computer vision, which serves as a crucial step for many higher-level tasks, such as scene understanding <cit.>, image captioning <cit.> and cross-modal retrieval <cit.>. Traditional object detection expects to classify and localize objects in a fixed category set, as shown in Fig. <ref>(a). Consequently, users have to continually retrain the model to fit different real-world applications, because different applications normally involve varying category sets. Hence, open-vocabulary (OV) object detection <cit.> has attracted increasing attention in recent years, where the model is trained to recognize an open set of object categories and thus can be directly used for diverse applications. However, object detection training data only contain objects of limited categories, and thus the key challenges in OV detection are how to discover, classify and localize unseen objects. For object discovery, unseen objects may be treated as `background' by detection networks, and thus no proposal is generated for them. Similarly, without corresponding training data, detection networks are also hard to accurately localize and classify unseen objects. OV detection methods usually tackle these problems by introducing vision-language (VL) information, as illustrated in Fig. <ref>(b), because language involves various objects. The existing works use three types of methods to incorporate VL information. 
The first is pre-training-based methods <cit.>, which distill knowledge (e.g., feature spaces) from pre-trained VL models to discover and classify OV objects, and leverage fixed-set detection data to train modules for object localization. Nevertheless, pre-trained models reduce the flexibility of these methods. They have to encode their features into the pre-trained feature space and cannot flexibly adjust them. The second type is weakly-supervised methods <cit.>, which first generate pseudo OV detection labels from image-level VL data, and then train detection networks with these pseudo labels. Nonetheless, these methods suffer from the problem of inevitable noises in pseudo labels. The third type <cit.> reformulates object detection as referring grounding problem, and thus can leverage referring grounding training data to simultaneously enable OV object discovery, classification and localization. Such approaches avoid the noise in weakly-supervised methods and are more flexible than pre-training-based methods. Despite significant progress made by these methods, they only extract object names in referring expressions, but ignore other rich language information. As shown in Fig. <ref>(b), language expressions usually contain not only object names but a mass of object relations (e.g., `near' and `under'), which are also important cues for OV object detection. Firstly, unseen objects can be better discovered by relation cues. For example, when a network finds an `under train' relation in the image, there might be an object under the train, even if the network has not seen this object before. Secondly, relations can also improve object classification accuracy. For instance, the object under trains is probably `track'. Thirdly, as many relations describe object positions, such as `under' and `in front of ', they are helpful for object localization. Based on these observations, we propose a novel Scene-Graph-Based Discovery Network (SGDN) to exploit object relations for OV object detection. Specifically, we first present a scene-graph-based transformer decoder (SGDecoder) to model both objects and their relations, i.e., scene graphs. In SGDecoder, a sparse scene-graph-guided attention (SSGA) module is designed to embed scene graph cues into object representations for OV object discovery, localization and classification. Based on these representations, scene-graph-based prediction (SGPred) is proposed to predict OV detection results, including bounding boxes and categories. For bounding box prediction, we build a scene-graph-based offset regression (SGOR) mechanism, where object localization and scene graph modeling are mutually boosted. For classification, we present a cross-modal learning method that leverages scene graphs to improve the consistency between cross-modal object embeddings. SGPred also generates relation predictions to better learn relation information. Our major contributions can be summarized as follows. * We propose a novel scene-graph-based OV object detection network, SGDN. To the best of our knowledge, this is the first work that exploits scene graph cues for OV object detection. * We present SGDecoder including an SSGA module to model scene graphs for OV object discovery, localization and classification. An SGPred method with SGOR and cross-modal learning mechanisms are designed to improve OV predictions based on scene graph cues. * Our SGDN outperforms many previous state-of-the-art methods on two common OV detection datasets, COCO and LVIS. 
Meanwhile, SGDN can also generate OV scene graph detection results, while previous OV scene graph generation methods cannot. § RELATED WORK Open-Vocabulary Object Detection. The existing OV detection works can be generally categorized into three types. The first is pre-training-based OV detection. They are trained with fixed-set detection data to localize objects while incorporating VL pre-training models to recognize OV objects. OVR-CNN <cit.> uses image caption data to train a model as the pre-training. Many other methods <cit.> employ off-the-shelf pre-trained models, such as CLIP <cit.>. F-VLM <cit.> adds classification and localization heads to CLIP, and fine-tunes the model on detection data. ViLD <cit.> and OV-DETR <cit.> distill knowledge from CLIP to detection networks to generate OV detection results. HierKD <cit.> extracts multi-scale features for better OV detection. DetPro <cit.> incorporates prompt learning to boost performance, and Promptdet <cit.> further enhances the prompt learning to region level. These methods successfully leverage VL pre-training to improve their OV recognition ability. Nevertheless, the flexibility of these methods is limited by pre-training models, because they have to encode their features into the pre-trained features space and cannot flexibly adjust them. The second type is weakly-supervised approaches. They leverage large-scale image-level supervisions (such as image classification and image caption data) to train detection models, to address the issue of lacking OV dense annotations. To train an OV detection model, Detic <cit.> extracts pseudo bounding boxes from image classification datasets with up to 21K categories. Gao et al. <cit.>, RegionCLIP <cit.>, and VL-PLM <cit.> generate pseudo bounding boxes from CLIP by class activation map (CAM) or pre-trained RPN <cit.>. Rasheed et al. <cit.> first generate object detection results and then restore image-level results from these object-level predictions. In this way, they can use image-level supervision to train the model. VLDet <cit.> uses image-caption supervision by aligning each noun in the caption to object proposals, where object proposals are extracted by pre-trained detectors. However, there are inevitable noises during weakly-supervised training, which limits the performance. The third type, grounding-based works, points out the high similarity between OV detection and referring grounding, and uses grounding frameworks to tackle OV detection. Since grounding training data includes bounding box annotations for diverse objects, FindIt <cit.> combines referring comprehension and object detection data to train a grounding model, which shows good OV detection ability. X-DETR <cit.> reformulates detection and grounding as an instance-text alignment problem, and designs an alignment network for both tasks. GLIP <cit.> enhances the VL interaction in the alignment framework, and also extracts millions of pseudo grounding labels from image caption data to boost the training. GLIPv2 <cit.> extends GLIP for more tasks, such as image captioning and visual question answering. Nevertheless, they ignore object relation information in referring expressions, which are also important for OV detection. Unlike them, we exploit object relations to improve OV object discovery, classification and localization. Object Detection and Scene Graph. Early scene graph generation methods <cit.> employ pre-trained object detectors to extract bounding boxes for relation prediction. They do not optimize object detectors. 
Recent works <cit.> simultaneously optimize object detection and relation predictions. These scene graph approaches are foundations of our work. However, they more focus on relation prediction from object detection cues, rather than exploring relation cues for object detection. Several works <cit.> leverage scene graph cues for object detection and referring grounding. SIN <cit.> implicitly models object relations without any relation supervision for fixed-set object detection. SGMN <cit.> and vtGraphNet <cit.> disassemble complex referring expressions into scene graphs, and thus simplify the reasoning for referring grounding. Different from them, we exploit scene graphs for OV object detection. We propose modules that leverage scene graph cues to discover, classify and localize OV objects. Some works <cit.> study the OV scene graph generation problem. Zhong et al. <cit.> design a weakly-supervised method and leverage image caption data to capture OV knowledge. SVRP <cit.> uses VL data to pre-train a model for OV relation recognition, and then designs a prompt to predict relations between two objects. However, these methods also focus on relation prediction and have no mechanism for OV object detection. As a result, they can only tackle the OV predicate classification and scene graph classification tasks, while cannot generate OV scene graph detection results. Compared with them, our model aims at OV detection with relation cues, and is able to deal with the OV scene graph detection problem. § PROPOSED METHOD §.§ Problem Definition and Method Overview The inputs of OV object detection are an image and a number of text, as shown in Fig. <ref>(b). During inference, the text are usually C candidate object categories. OV detection networks output object proposals (bounding boxes) from the image, and determine the category of each object by matching the object embedding with C candidate category embeddings. During training, the text can be any language description corresponding to objects in the image. By learning with these VL data, OV detection networks are able to recognize various objects. Previous works <cit.> only use nouns (i.e., individual objects) from language descriptions, while ignoring other useful information such as object relations. Objects and relations can compose scene graphs, which provide important cues for OV object discovery, classification and localization. A scene graph triplet is formed as `subject-predicate(relation)-object', where `subject' and `object' are two objects and `predicate' is the relation between them. In this paper, we propose an SGDN that exploits scene graphs for OV object detection. Our SGDN consists of three components as illustrated in Fig. <ref>. The first is Feature Encoding that extracts the embeddings of the input image and text. The second is SGDecoder to generate embeddings of objects and relations. In SGDecoder, we propose an SSGA module to enrich object embeddings by scene graph information to improve OV object discovery, classification and localization. The final component is SGPred that generates object bounding boxes and categories, as well as relation categories. In SGPred, an SGOR (Fig. <ref>) is used to mutually refine scene graph extraction and bounding boxes prediction. We also propose cross-modal learning, which takes scene graphs as bridges to enhance the consistency between cross-modal embeddings. Next, we introduce each module in detail. §.§ Feature Encoding Image encoder. 
We leverage the common transformer-based architecture <cit.> for object detection, where the image encoder includes a backbone encoder (e.g., ResNet <cit.>) and a transformer encoder <cit.>. The output of our encoder is an image feature map 𝐕∈ℝ^N_v× D_v, where N_v is the number of image patches, and D_v is the dimension. Text encoder. We extract textual embeddings by a pre-trained text encoder (e.g., BERT <cit.> or RoBERTa <cit.>). Since we generate both object and relation predictions, our input text contains two parts: object categories and relation categories. During inference, there are C+1 object categories, including C candidate object categories in the target application and an additional `no object' category to recognize false proposals. The text encoder generates a feature map 𝐅^o∈ℝ^(C+1) × D for object categories, in which each feature vector encodes an object category, and D is the feature dimension. Similarly, we have M+1 relation categories, i.e., M candidate relation categories in the target application and an additional `no relation' category. 𝐅^p∈ℝ^(M+1) × D represents the output relation category feature map. Note that if an object or relation category contains multiple words, our text encoder can generate one feature vector of the entire category. During training, we first use language parsing tools <cit.> to extract nouns and relations of nouns from the language expression. Then, we take all nouns in this expression as the candidate object categories, and also add the `no object' category. All relations in this expression as well as the `no relation' category are treated as our relation categories. Our method leverages SGDecoder to model scene graphs, and candidate relation categories are only used for relation prediction. If training data (e.g., fixed-set detection data) or target applications only require object detection results, our model can avoid these relation inputs and skip the relation prediction and cross-modal learning parts in SGPred. §.§ Scene-Graph-Based Decoder After feature encoding, we build an SGDecoder to extract object and predicate embeddings. The inputs of our decoder are N object tokens {𝐨_n∈ℝ^D}_n=1,...N and a predicate token 𝐩∈ℝ^D, where D is the dimension of each token. Each object token 𝐨_i represents an object in the image. As investigated in previous scene graph generation works <cit.>, the relation and scene graph between the i-th and j-th objects can be represented by the concatenation of the `subject' embedding 𝐨_i, the predicate embedding 𝐩 as well as the 'object' embedding 𝐨_j; and the predicate embedding 𝐩 can be shared for all object pairs. Therefore, we only use one relation token to capture object relations. Our decoder generates object and predicate embeddings from these N+1 object and relation tokens and the image feature 𝐕. Self-attention. The decoder contains L blocks and each block includes a self-attention, an SSGA module and a cross-attention. The self-attention models long-range dependencies among object and predicate tokens, and updates these tokens as follows: {𝐨^sl_1, ..., 𝐨^sl_N, 𝐩^sl} = Attn( query: {𝐨_1, ..., 𝐨_N, 𝐩}, key: {𝐨_1, ..., 𝐨_N, 𝐩}, value: {𝐨_1, ..., 𝐨_N, 𝐩}) where Attn(·) is a multi-head attention model <cit.>. We take our object and predicate tokens {𝐨_1, ..., 𝐨_N, 𝐩} as queries, keys and values in this attention model. {𝐨^sl_1, ..., 𝐨^sl_N, 𝐩^sl} are the updated object and predicate tokens, where long-range dependencies are embedded, and every token is also D-dimension. The SSGA module. 
We propose an SSGA module that further embeds scene graph information into object tokens to improve OV object discovery, classification and localization. Specifically, as shown in Fig. <ref>, we first generate scene graph embeddings: 𝐠_i,j = [𝐨^sl_i, 𝐛_i, 𝐩^sl, 𝐨^sl_j, 𝐛_j], i,j = 1, ..., N and i ≠ j where [·] means token concatenation. 𝐛_i, 𝐛_j∈ℝ^4 are the bounding boxes of the i-th and j-th objects, respectively. We integrate bounding box information to generate more powerful scene graph embeddings. The details of the bounding boxes will be introduced in Sec. <ref>. The scene graph embedding 𝐠_i,j∈ℝ^3D+8 encodes the information of the i-th and j-th objects as well as their relation. All scene graph embeddings compose a scene graph matrix 𝐆∈ℝ^N(N-1) × (3D+8). Our SSGA module then leverages an attention model to embed scene graphs into object tokens as {𝐨^sg_1, ..., 𝐨^sg_N} = SAttn( query: {𝐨^sl_1, ..., 𝐨^sl_N}, key: 𝐆, value: 𝐆) where we take object tokens as queries, and treat scene graph embeddings as keys and values. SAttn(·) denotes a sparse attention model. To reduce computational costs, we only calculate attention between each object token and its related scene graph embeddings, i.e., those in which this object acts as the `subject' or `object'. For example, for the n-th object token, we only compute attention between 𝐨^sl_n and {𝐠_n,j, 𝐠_j,n}_j=1,...,N and j ≠ n. The attention model incorporates scene graph guidance into object tokens and updates the object tokens into {𝐨^sg_n∈ℝ^D}_n=1,...N. In this way, even though our model has not seen some objects in the training data, it can discover them from scene graph cues. Moreover, these scene graph cues are also helpful for object classification and localization. Cross-attention. The cross-attention takes object and predicate tokens as queries, while using the image feature map 𝐕 as keys and values to integrate visual information into these tokens: {𝐨^cr_1, ..., 𝐨^cr_N, 𝐩^cr} = Attn( query: {𝐨^sg_1, ..., 𝐨^sg_N, 𝐩^sl}, key: 𝐕, value: 𝐕). The output embeddings {𝐨^cr_1, ..., 𝐨^cr_N, 𝐩^cr} of each decoder block embed image, object and scene graph information. All embeddings are D-dimensional vectors and can be further refined by the next decoder block. §.§ Scene-Graph-Based Prediction Based on our object and predicate embeddings {𝐨^cr_1, ..., 𝐨^cr_N, 𝐩^cr}, three types of results can be predicted, i.e., object bounding boxes and categories, as well as relation categories. Object bounding box prediction. Let 𝐛_n = [x_n, y_n, w_n, h_n] (n=1,...,N) represent the bounding box for the n-th object. x_n and y_n are the coordinates of the center point of the box, and w_n and h_n are the width and height of the bounding box. We can use MLPs (multi-layer perceptrons) to predict object bounding boxes {𝐛_1,...,𝐛_N} from object embeddings {𝐨^cr_1,...,𝐨^cr_N}. SGOR. Inspired by prior fixed-set object detection works <cit.>, we build an SGOR mechanism. On the one hand, iterative offset regression generates more accurate bounding boxes than one-step prediction <cit.>. More importantly, we leverage SGOR to allow mutual enhancement between scene graphs and object localization. Concretely, we first initialize N object bounding boxes before the decoding in Sec. <ref>. Here, we use the same initialization as in <cit.>, which is based on deformable attention predictions. Then, in each block in our decoder, we leverage object boxes to enhance scene graphs by Eqn. (<ref>), as sketched below.
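As an illustration, a minimal PyTorch-style sketch of the box-conditioned scene graph embeddings and the sparse attention described above is given here. The module and tensor names, the use of a standard multi-head attention layer, and the explicit per-object loop are illustrative assumptions rather than the exact implementation; in practice the sparse attention would be batched with masking.

```python
import torch
import torch.nn as nn

class SSGA(nn.Module):
    """Sketch of the sparse scene-graph-aware attention (illustrative only)."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        # project a scene graph embedding [o_i, b_i, p, o_j, b_j] of size 3D+8 to D
        self.kv_proj = nn.Linear(3 * dim + 8, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, obj, pred, boxes):
        # obj: (N, D) object tokens, pred: (D,) predicate token, boxes: (N, 4)
        N, D = obj.shape
        ob = torch.cat([obj, boxes], dim=-1)                      # [o_i, b_i]: (N, D+4)
        g = torch.cat([ob.unsqueeze(1).expand(N, N, D + 4),       # subject side
                       pred.expand(N, N, D),                      # shared predicate token
                       ob.unsqueeze(0).expand(N, N, D + 4)], -1)  # object side
        out = []
        for n in range(N):                                        # sparse: only pairs containing object n
            idx = [j for j in range(N) if j != n]
            related = torch.cat([g[n, idx], g[idx, n]], dim=0)    # (2(N-1), 3D+8)
            kv = self.kv_proj(related).unsqueeze(0)               # (1, 2(N-1), D)
            q = obj[n].view(1, 1, D)
            o_sg, _ = self.attn(q, kv, kv)
            out.append(o_sg.reshape(D))
        return torch.stack(out, dim=0)                            # updated tokens (N, D)
```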
After each block, new object embeddings {𝐨^cr_l, 1,...,𝐨^cr_l, N} are generated, where l=1,...,L denotes the l-th decoder block. Our object embeddings include scene graph cues, and we leverage them to refine bounding boxes as follows: Δ 𝐛_l,n = MLP([𝐨_l,n^cr, 𝐛_l,n]) where 𝐛_l,n∈ℝ^4 is the bounding box of the n-th object in the l-th decoder block. We concatenate the embedding 𝐨_l,n^cr and the bounding box 𝐛_l,n of this object. After that, an MLP with two linear layers and Sigmoid activation functions is used to predict the box offset Δ 𝐛_l,n∈ℝ^4. The bounding box of this object is refined as 𝐛_l+1,n = 𝐛_l,n + Δ 𝐛_l,n where 𝐛_l+1,n is the refined box and can be used as the bounding box in the next decoder block. For each object, the final bounding box prediction is the refined box 𝐛_L+1, n = 𝐛_L,n + Δ 𝐛_L,n after the last decoder block. Object category prediction. We predict object and relation categories based on the object and predicate embeddings {𝐨^cr_1, ..., 𝐨^cr_N, 𝐩^cr} in the final decoder block. Let 𝐎∈ℝ^N × D be a matrix composed of object embeddings, i.e., 𝐎 = {𝐨^cr_1, ..., 𝐨^cr_N}. To predict object categories, we first use two-layer MLPs to refine the object embeddings 𝐎 into 𝐎∈ℝ^N × D. Then, we generate a similarity matrix between object and category embeddings: 𝐒^o = 𝐎(𝐅^o)^T where 𝐒^o∈ℝ^N × (C+1) is the object category matrix. In 𝐒^o, each element s^o_n,c denotes the similarity between the n-th object and the c-th object category. For each object, we take the category with the highest similarity as its classification result, and it is a false proposal if its category is `no object'. Relation category prediction and joint learning. For relation prediction, we first leverage Eqn. (<ref>) to generate scene graph embeddings 𝐠^p_i,j from our final object embeddings {𝐨^cr_1, ..., 𝐨^cr_N}, predicate embedding 𝐩^cr and bounding boxes {𝐛_L+1, 1, ..., 𝐛_L+1, N}. These scene graph embeddings compose a scene graph matrix 𝐆^p∈ℝ^N(N-1) × (3D+8). Two-layer MLPs are used to transform 𝐆^p into 𝐆^p∈ℝ^N(N-1) × D. A relation similarity matrix 𝐒^p∈ℝ^N(N-1) × (M+1) is calculated as 𝐒^p = 𝐆^p(𝐅^p)^T. Similar to object category prediction, we can determine the relation between every object pair by finding the most similar relation category. There is no relation between two objects if the predicted relation category is `no relation'. In this way, we can generate OV relation classification results. Moreover, since relation categories are predicted from object embeddings and bounding boxes, we can obtain better object embeddings and object localization results by joint learning with relation training data. Cross-modal learning. We propose cross-modal learning to further exploit scene graph cues to classify OV objects. As OV detection models formulate object classification as a VL matching problem, the key to accurate classification is to learn consistent object and category embeddings. Therefore, we leverage relation supervision to enhance the consistency between object and category embeddings. Specifically, during training, we replace the object embeddings in the scene graph matrix 𝐆^p with ground truth object category embeddings. Let 𝐆^t∈ℝ^N(N-1) × (3D+8) represent the replaced matrix. We then use the same two-layer MLPs as in relation category prediction to transform the matrix into 𝐆^t∈ℝ^N(N-1) × D, and generate the relation similarity matrix 𝐒^t∈ℝ^N(N-1) × (M+1) as 𝐒^t = 𝐆^t(𝐅^p)^T (a compact sketch of these prediction heads is given below).
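A compact sketch of these prediction heads, assuming simple two-layer MLPs, is shown here; the exact activation placement and module names are hypothetical choices for illustration, and f_obj and f_rel stand for the object- and relation-category embeddings 𝐅^o and 𝐅^p produced by the text encoder.

```python
import torch
import torch.nn as nn

class SGPredHeads(nn.Module):
    """Sketch of SGOR box refinement and similarity-based category prediction."""
    def __init__(self, dim):
        super().__init__()
        # offset MLP (activation placement assumed): predicts a 4-d box offset
        self.offset_mlp = nn.Sequential(nn.Linear(dim + 4, dim), nn.Sigmoid(),
                                        nn.Linear(dim, 4))
        self.obj_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.rel_mlp = nn.Sequential(nn.Linear(3 * dim + 8, dim), nn.ReLU(), nn.Linear(dim, dim))

    def refine_boxes(self, obj, boxes):
        # b_{l+1,n} = b_{l,n} + MLP([o_{l,n}, b_{l,n}])
        delta = self.offset_mlp(torch.cat([obj, boxes], dim=-1))
        return boxes + delta

    def object_scores(self, obj, f_obj):
        # S^o = O' (F^o)^T : (N, C+1) object-category similarities
        return self.obj_mlp(obj) @ f_obj.t()

    def relation_scores(self, rel_graph, f_rel):
        # S^p = G'^p (F^p)^T : (N(N-1), M+1) relation similarities; applying the same
        # MLP to the ground-truth-replaced matrix G^t gives S^t for cross-modal learning
        return self.rel_mlp(rel_graph) @ f_rel.t()
```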
By learning 𝐒^t with relation supervisions, 𝐒^t and 𝐒^p will be close, and thus our object and category embeddings will be more consistent. Note that we only use cross-modal learning during the training stage, because we do not have ground truth object categories during inference. §.§ Training Our training objective contains four parts as follows: Loss = λ_1L_bb + λ_2L_ocls + λ_3L_pcls + λ_4L_cml, where L_bb, L_ocls, L_pcls and L_cml are loss functions for object bounding box prediction, object category prediction, relation category prediction as well as cross-modal learning, respectively. λ_1, λ_2, λ_3 and λ_4 are hyperparameters to weight the different losses. Our bounding box loss L_bb is the smooth L1 loss. Since SGOR predicts offsets and refines bounding boxes in every decoder block, we calculate a smooth L1 loss in each block, and the final bounding box loss L_bb is their sum. Bipartite Hungarian matching is used to align predicted boxes with ground truths. Similar to previous OV works <cit.>, our object category loss L_ocls is the sum of binary cross-entropy losses for every element s^o_n,c in our object similarity matrix 𝐒^o. In particular, for each element s^o_n,c, we calculate a binary cross-entropy loss between it and the ground truth. The ground truth is 1 when the n-th object belongs to the c-th category; otherwise, it is 0. Similarly, the relation category loss L_pcls and the cross-modal learning loss L_cml are also binary cross-entropy losses for the similarity matrices 𝐒^p and 𝐒^t, respectively. We leverage referring grounding data for training. A referring grounding sample contains an image, a language expression, and bounding box annotations for every noun. The relation between each object pair can be extracted by language parsing tools, as described in Sec. <ref>. We take them as relation classification ground truths. Fixed-set object detection data can also be used during training. We only use L_bb and L_ocls for these data, while fixing the predicate token and relation prediction parts. § EXPERIMENTS §.§ Experiment Settings Following prior works <cit.>, we evaluate the OV ability by zero-shot experiments on COCO <cit.> and LVIS <cit.>, and leverage VL data during training to recognize OV objects. VL data. Any extra VL data can be used in the OV scenario. Here, we employ referring grounding data like previous methods <cit.>. Flickr30K Entities <cit.> includes 31K images with referring expressions and annotations. Visual Genome <cit.> labels 108K images for referring grounding. We use these 140K training samples. Our goal is to verify the effectiveness of our network rather than train a large pre-training model. Thus, we do not use millions of training samples. Also, Visual Genome <cit.> provides scene graph annotations, but we do not use them. COCO. The COCO 2017 dataset <cit.> contains 120K training and 5K validation images. We use the generalized zero-shot setting <cit.>, which splits COCO into 48 base classes for training and 17 novel classes for validation. We combine the 120K COCO training images and 140K grounding samples to train our model. There are overlapping images between COCO <cit.> and Visual Genome <cit.>. We remove them, as well as training samples that contain the 17 novel classes. LVIS. There are 100K training and 20K validation images on LVIS <cit.>. Categories in LVIS are divided into 405 frequent, 461 common and 337 rare classes. We combine the 886 frequent and common classes for training, while using the 337 rare categories for validation.
140K grounding and 100K LVIS training data are mixed during training, where rare classes are removed. Since LVIS <cit.> also requires mask predictions, we use the external fully convolutional head in <cit.> to generate masks based on decoder embeddings. We also test this LVIS-trained model on COCO to verify the cross-dataset ability. Metrics. We adopt AP50 for COCO <cit.> zero-shot detection, and mAP for LVIS <cit.> as well as the cross-dataset validation. §.§ Implementation Details We choose RoBERTa <cit.> as our text encoder and add a linear layer to transform the textual feature dimension to D. D is set to 512 in our experiments. We fix RoBERTa during training and only update the parameters of the linear layer. We do not use any prompt during training, only the prompt `A photo of a [query]' is used during inference. Our image encoder is Deformable DETR <cit.> with the ResNet50 <cit.> backbone pre-trained on ImageNet. Deformable attention is also used in our decoder. The numbers N and L of object tokens and decoder blocks are set to 100 and 6, respectively. We train our model on two stages. VL data are used during the first-stage training, while fixed-set detection data is used in the second stage. λ_1, λ_2, λ_3 and λ_4 are simply set to 1.5, 1.5, 1.0 and 1.0, respectively, and fixed for all datasets. Other network and training settings are the same as Deformable DETR <cit.>. All experiments are conducted on the Pytorch platform <cit.> with 8 V100 GPUs. §.§ Main Results We report the zero-shot results of our model and other state-of-the-art methods on COCO <cit.> in Table <ref>. Our SGDN outperforms Rasheed et al. <cit.>, which achieve the best performance for novel classes in previous methods. Rasheed et al. <cit.> use the large-scale VL pretraining CLIP <cit.> and add COCO Caption data for training. They also leverage novel-class information during training, which is not practical for OV detection. In prior works without novel-class information, RegionCLIP <cit.> shows the highest accuracy, which also uses CLIP <cit.> and three million extra data. Compared with it, our SGDN yields improvements of 6.1% for novel classes. GLIP <cit.> also uses referring grounding training data, but it is trained with 27 million data as a VL pre-training. Rather than designing a pre-training, our work aims to provide an architecture for OV detection. Therefore, we reproduce GLIP <cit.> with our 260K training data for a fair comparison. Our SGDN outperforms it by 6.8% for novel classes. We also achieve the best accuracy for all classes. Table <ref> shows the zero-shot results on LVIS <cit.>. We outperform the previous state-of-the-art method VLDet <cit.> by 1.9% for rare classes. Note that, VLDet <cit.> also uses novel-class information during training. Compared with GLIP <cit.>, which uses the same VL training data, we achieve improvements of 3.9% for rare classes and 2.8% for all classes. As several works <cit.> show cross-dataset results to further verify the OV ability, we also report these results in Table <ref>. Our method outperforms all these methods except the original GLIP <cit.>, which uses 27 million training data and a large Swin-L <cit.>. Compared with GLIP <cit.> with the same backbone and training data, our SGDN obtains gains of 3.6%. Meanwhile, X-DETR <cit.> also employs grounding data for training, and we outperform it by 14%. All these superior results demonstrate the effectiveness of our scene-graph-based framework, as well as our proposed SGDecoder and SGPred modules. 
§.§ Ablation Study To further verify the effectiveness of our SGDN, we conduct ablation studies on zero-shot COCO. All models are trained with 260K VL and COCO base data, and use the ResNet-50 backbone. Scene graph for OV detection. We report the effects of our main components in Table <ref>. Since we use deformable attention <cit.>, we first build a Deformable-DETR-based OV detection model, `Model A', as our baseline. In `Model A', we add a text encoder to Deformable DETR <cit.> and replace its classification head with our object category prediction. Then, we design a scene-graph-based `Model B', where we incorporate the predicate token 𝐩 as well as the relation category prediction module to `Model A', and extract scene graphs for training. For novel classes, `Model B' outperforms `Model A' and GLIP <cit.> by 2.4% and 1.6%, respectively. These results show the effectiveness of scene graphs for OV object detection. Main component. Our SGDecoder (`Model C') outperforms `Model B' by 2.3% for novel classes, because our SGDecoder with SSGA leverages scene graphs to better embed objects. Our SGPred (`Model D') achieves gains of 3.5% and 3.2% for novel and all classes, which demonstrates the effectiveness of our SGOR and cross-modal learning. Compared with `Model B', our final SGDN yields improvements of 5.2% and 4.6% for novel and all classes, respectively. Dissecting SGDecoder. We then dissect our SGDecoder in Table <ref>. If we remove SSGA from `Model C', the model is equal to `Model B' and the performance significantly decreases. In `Model C w/o box', we use SSGA but remove bounding boxes from scene graph embeddings. Compare to `Model B', this model achieves improvements of 1.5% for novel classes. The reason is that SSGA better exploits scene graph information for OV detection. Bounding boxes generate gains of 0.8% for novel classes. In `Model C w/o sparse', we employ vanilla deformable attention instead of the sparse one. The performance only slightly increases. However, vanilla attention requires much more computational costs than sparse attention. Dissecting SGPred. In Table <ref>, we show the effects of main parts in our SGPred. Our SGOR mutually improves object localization and scene graph embedding, and thus yields gains of 1.5% and 2.3% for novel and all classes. Cross-modal learning increases the performance for novel classes by 1.1%, benefiting from the scene-graph-based cross-modal consistency enhancement. We show our OV SGDet ability in Table <ref>. We conduct this experiment on Visual Genome <cit.>, and use the dataset split provided by <cit.>. The training set includes 70% seen object and relation classes in Visual Genome grounding and scene graph data, while the 30% unseen object and relation classes are used for validation. Metrics are Recall@50 and Recall@100. It can be seen that our SGDN significantly outperforms previous OV scene graph methods on the OV SGCls task and can also predict OV SGDet results. Qualitative results. Fig. <ref> shows qualitative results on COCO. It can be observed that GLIP <cit.> misclassifies some unseen objects, such as the `couch' in the upper right image in Fig. <ref>. We exploit scene graph information to better recognize OV objects. Normally, the object `near' a `TV' and a `chair' is more like a `couch' than a `bed'. Therefore, our approach reduces this classification error. GLIP [18] also misses a number of unseen objects. For example, in the bottom left and right images in Fig. 
<ref>, the `horse', `umbrella' and `handbag' are missed by GLIP <cit.>. Our SGDN successfully discovers them by exploiting the scene graph cues `zebra near' and `person holding'. Moreover, SGDN can better localize unseen objects (e.g., the `snowboard' in the upper left image in Fig. <ref>) based on scene graphs. These results demonstrate the effectiveness of our SGDN for OV object discovery, classification and localization. OV scene graph detection. Scene graph generation contains three tasks. The simplest is predicate classification (PredCls), where object bounding boxes and classes are provided, and only object relations (predicates) need to be classified. The second is scene graph classification (SGCls). Given object bounding boxes, SGCls is expected to classify objects and relations. The hardest is scene graph detection (SGDet), which requires predicting all bounding boxes as well as object and relation classes. Previous OV scene graph generation methods cannot deal with the OV SGDet task, because they are not able to detect OV objects <cit.>. Different from them, our SGDN can simultaneously detect OV objects and relations, and thus generates OV SGDet predictions. We visualize OV scene graph detection results in Fig. <ref>. Since SVRP <cit.> does not release its source code, we only show the results from our SGDN in Fig. <ref>. We successfully localize and classify unseen objects, e.g., `man', `cat' and `window'. Meanwhile, unseen relations such as `sitting on' are also predicted by our SGDN. § CONCLUSION In this paper, we have presented SGDN, a scene-graph-based network for OV object detection. We first introduce an SGDecoder to generate object and relation embeddings, where an SSGA module is presented to leverage scene-graph cues for OV object discovery, classification and localization. Secondly, an SGPred method is designed to predict OV object detection and scene graph results, including SGOR and cross-modal learning. In SGOR, scene graphs and object localization are iteratively improved by each other. Cross-modal learning takes scene graphs as bridges to enhance the consistency between cross-modal embeddings for OV object classification. Extensive experiments on two OV detection datasets demonstrate the effectiveness of our SGDN. We also show the OV scene graph detection ability of SGDN, which cannot be achieved by previous OV scene graph generation approaches.
http://arxiv.org/abs/2307.00909v1
20230703101310
Cavity-Induced Strong Magnon-Magnon Coupling in Altermagnets
[ "Zhejunyu Jin", "Huanhuan Yang", "Zhaozhuo Zeng", "Yunshan Cao", "Peng Yan" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
[Corresponding author: ][email protected] School of Physics and State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054, China Long-distance strong coupling between short-wavelength magnons remains an outstanding challenge in quantum magnonics, an emerging interdiscipline between magnonics and quantum information science. Recently, altermagnets are identified as the third elementary class of magnets that break the time-reversal symmetry without magnetization and thus combine characteristics of conventional collinear ferromagnets and antiferromagnets. In this work, we show that cavity photons can mediate the long-distance strong coupling of exchange magnons with opposite chiralities in altermagnets, manifesting as an anticrossing of the magnon-polariton spectrum in the extremely dispersive regime. The predicted effective magnon-magnon coupling strongly depends on the magnon propagation direction, and is thus highly anisotropic. Our findings are intimately connected to the intrinsic nature of altermagnetic magnons, i.e., chirality-splitting-induced crossing of exchange magnons, which has no counterpart in conventional ferromagnets or antiferromagnets, and may open a new path way for magnon-based quantum information processing in altermagnets. Cavity-Induced Strong Magnon-Magnon Coupling in Altermagnets Peng Yan ============================================================ Introduction.—Magnons (quanta of spin waves) have been extensively explored for wave-based sensing and computing concepts, due to their long lifetime and high tunability <cit.>. The compatibility between magnons and diverse quantum platforms such as qubit <cit.>, phonon <cit.>, and photon <cit.> further amplifies the advantages of utilizing magnons as a carrier for quantum information processing, constituting quantum magnonics <cit.>. Coherent transfer of magnetic information between two magnonic systems demands a strong magnon-magnon coupling, which is usually generated by dipolar interaction <cit.>, interlayer exchange <cit.>, and in-plane anisotropy <cit.>, etc. However, limited by the effective range of these magnetic interactions, they can only mediate the short-distance coupling between magnons. It is noted that the coherent magnon-photon interaction has been reported in hybrid cavity-magnet systems <cit.>. Cavity photons can also couple to two magnons simultaneously over a long distance, inducing a nonlocal magnon-magnon interaction in both ferromagnets and antiferromagnets <cit.>. Nevertheless, past studies focused on the long-wavelength limits, i.e., magnetostatic magnons. The nonlocal strong coupling between short-wavelength magnons remains an outstanding challenge in the community. Recently, an emerging class of magnets dubbed altermagnet was identified, which maintains zero net macroscopic magnetization but exhibits a surprisingly large spin-splitting <cit.>. These peculiar features are protected by the combined spin and lattice symmetries, i.e., the spin-space inversion and crystallographic-space rotation <cit.>, which make altermagnets a promising platform for quantum magnonics. It has been shown that the chiral-splitting effect induces the crossing of infinitely-long-wavelength magnons (k=0, where k is the wavevector) with opposite chiralities <cit.>. 
But it brings about two open issues for realizing strong nonlocal coupling between exchange magnons: (i) How to push the crossing point to the short-wavelength region; and (ii) How to realize the coupling between these two orthogonal magnon modes, which was forbidden by angular momentum conservation, a consequence of rotational invariance or isotropy. In this Letter, we demonstrate the cavity-induced long-distance strong coupling of exchange magnons in altermagnets (Fig. <ref>). We show that a perpendicular magnetic field can lead to an upward (downward) shift of the magnon branch with a low (high) group-velocity, resulting in an unavoided level crossing at a finite wave number (k≠ 0). Then, we find that the magnon degeneracy can be lifted by placing the altermagnet in a photonic cavity, manifesting as an anticrossing in the magnon-polariton spectrum. Surprisingly, it is observed that the photon frequency is orders of magnitude higher than that of magnons at the anticrossing point, indicating a highly dispersive magnon-photon coupling. By utilizing a second-order perturbation theory, we derive the analytical formula of the cavity-induced effective magnon-magnon coupling. We show that the indirect coupling strongly depends on the magnon propagation direction due to the anisotropic nature of the exchange interaction in altermagnets. Our results open the door for exploring quantum magnonics based on the altermagnetism. Chiral splitting of magnons in altermagnets.—Let us consider a two sublattice altermagnet with anisotropic intralayer exchange interactions. The spin Hamiltonian reads ℋ_alter= -S^2∑_i,j[J_1 s_i,j^A· s_i+1,j^A+J_2 s_i,j^B· s_i+1,j^B +J_2 s_i,j^A· s_i,j+1^A+J_1 s_i,j^B· s_i,j+1^B+J_3 s_i,j^A· s_i,j^B +S^-1 h·( s_i,j^A+ s_i,j^B)+K( s_i,j^A· z)^2+K( s_i,j^B· z)^2], where J_1,2>0 represents the intralayer ferromagnetic exchange coupling strength, J_3<0 is the interlayer antiferromagnetic exchange coupling coefficient, h and K denote the external magnetic field and the magnetic anisotropy constant, respectively, s_i,j^A and s_i,j^B are the normalized spins on sites (i,j) of sublattices A and B, respectively, and S is the length of the spin vector. Figure <ref>(a) shows the crystal structure of a two-sublattice altermagnet. Under a combined operation of two-fold spin-space rotation 𝒞_s,2 (s^A→ s^B) and four-fold crystallographic-space rotation 𝒞_4 (i→ j and J_1→ J_2), we find that Hamiltonian (<ref>) is invariant and thus respects the symmetry of altermagnet <cit.>. We then obtain the dispersion relation of altermagnetic magnons <cit.> ω_ k,± =± c_1( k)+c_2( k), with c_1( k)=S^2{h/S+(J_1-J_2)[cos(k_ya)-cos(k_xa)]} and c_2( k)=S^2{2K-(J_2+J_1)[cos(k_xa)+cos(k_ya)-2]}^1/2{2K-2J_3-(J_1+J_2)[cos(k_xa)+cos(k_ya)-2]}^1/2. Here, a is the lattice constant and ± corresponds to right-handed (RH) and left-handed (LH) magnon modes, with respect to the z axis, respectively. In what follows, we use the following parameters to calculate the spectrum: J_1= 0.4J_2, J_3 = -1.25J_2, K = 0.05J_2, and S=1, if not stated otherwise. Different from the antiferromagnet, the degeneracy of RH and LH magnons is broken even in the absence of the external magnetic field, resulting in the unequal magnon group velocity, as shown in Fig. <ref>(b). In addition, it is noted that the spectrum of RH magnons propagating along the x axis is identical to LH magnons propagating along the y axis, and vice versa. When a perpendicular magnetic field is applied, the energy degeneracy at k=0 is removed (k=| k|). 
However, we observe that, for magnons propagating along x-direction, the branch with a lower group-velocity shifts upward, while the one with a higher group-velocity shifts downward. This results in an unavoided level crossing at a finite wave number, i.e., k≠ 0 [see red and blue curves in Fig. <ref>(c)]. Such feature does not exist for magnons propagating along the y direction, which is an indication of the crossing anisotropy and will be discussed below. A level crossing usually means the absence of coupling. Particularly for two magnons with opposite handedness, their direct coupling was thought to be forbidden by angular momentum conservation, a consequence of rotational invariance. As to the long-distance coupling between exchange magnons, it even looks like a dilemma at first sight because the exchange interaction is extremely short-ranged. Next, we tackle this problem in cavities. Cavity-induced magnon-magnon coupling.—To this end, we consider the altermagnet embedded in a photonic cavity, described by following Hamiltonian ℋ =ℋ_alter+ℋ_ph+ℋ_int, ℋ_ph =1/2∫(ϵ_0 E^2+ B^2/μ_0)d r, ℋ_int =-1/μ_0∑_i,j( s_i,j^A+ s_i,j^B)· B, where ℋ_ph is the photon Hamiltonian, ℋ_int is the magnon-photon coupling, E and B are the electric and magnetic components of the electromagnetic wave, respectively, and ϵ_0 and μ_0 are vacuum permittivity and susceptibility, respectively. By using the Holstein-Primakoff transformation <cit.>: s^A,+=√(2)a,s^B,+=√(2)b^†, s^A,-=√(2)a^†,s^B,-=√(2)b, s^A_z=1-a^† a, and s^B_z=b^† b-1, where the spin operators s^m,±=s^m_x± is^m_y with m=A,B, and a(b) and a^†(b^†) are the magnon annihilation and creation operators for sublattice A(B), respectively, we can express the magnon Hamiltonian in the momentum space as ℋ_alter= ∑_ k{ [2J_1cos(k_xa)+2J_2cos(k_ya)+J_3-h -2J_2-2J_1+2K]a_ k^† a_ k+[2J_2cos(k_xa) +2J_1cos(k_ya) +J_3+h-2J_2-2J_1 +2K]b_ k^† b_ k +J_3(a_ kb_ k+a_ k^†b_ k^†)}. Here, we have assumed circularly-polarized photons, with the resulting photon Hamiltonian and magnon-photon coupling as follows ℋ_ph =∑_ kω_c(c_ k^† c_ k+1/2), ℋ_int =∑_ kg_c(c_ ka_ k+c_ k^† a_ k^†+c_ kb_ k^†+c_ k^† b_ k), where ω_c=v_c| k| is the photon dispersion relation with v_c the speed of light and g_c=√(ω_cμ_0 N/2 V) represents the magnon-photon coupling with the number of spins N and the volume of cavity V <cit.>. By utilizing the Bogoliubov transformation a_ k =u_ kα_ k+v_ kβ_ k^†, b_ k =u_ kβ_ k+v_ kα_ k^†, where u_ k=√((Δ_ k+1)/2), v_ k=-√((Δ_ k-1)/2), and Δ_ k=√(1/1-{J_3/(J_1+J_2)[cos(k_xa)+cos(k_ya)]+J_3-2(J_1+J_2)-2K}^2), the magnon Hamiltonian (<ref>) can be diagonalized as ℋ_alter =∑_ k(ω_ k,-α_ k^†α_ k+ω_ k,+β_ k^†β_ k). The total Hamiltonian then can be recast as ℋ=ψ^†ℳψ with the vector ψ=(α_ k,β_ k^†, c_ k^†)^† and matrix ℳ=( [ ω_ k,- 0 λ_ k/2; 0 ω_ k,+ λ_ k/2; λ_ k/2 λ_ k/2 ω_c; ]), with λ_ k=2g_c(u_ k+v_ k), leading to the following secular equation 4ω^3-4(ω_ k,++ω_ k,-+ω_c)ω^2+λ_ k^2(ω_ k,+ +ω_ k,-)-4ω_ k,+ω_ k,-ω_c +2 [-λ_ k^2+2ω_ k,+ω_ k,-+2(ω_ k,++ω_ k,-)ω_c ]ω=0. By numerical solving Eq. (<ref>), one obtains the dispersion relation of the coupled cavity-altermagnet. For the case without the magnetic field, near k=0, one of the degenerated magnon bands is a dark mode that does not interact with the cavity photon, while the other one does, see Figs. <ref>(a) and (c). When the external field is present, the double degeneracy of magnons is removed, and both magnon modes couple with the cavity photon, see Figs. <ref>(b) and (d). 
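As a rough numerical illustration, the magnon branches of Eq. (<ref>) and the polariton levels obtained from the 3×3 matrix ℳ can be evaluated as sketched below, using the parameter ratios quoted above (J_1 = 0.4 J_2, J_3 = -1.25 J_2, K = 0.05 J_2, S = 1); the values of the field h, the photon velocity v_c, and the coupling prefactor g_0 are placeholders, not the values used for the figures.

```python
import numpy as np

# parameters in units of J2 (ratios quoted in the text; h, v_c, g0 are placeholders)
J2, J1, J3, K, S, h, a = 1.0, 0.4, -1.25, 0.05, 1.0, 0.2, 1.0

def magnon_branches(kx, ky):
    """Eq. (2): omega_{k,+-} = +- c1(k) + c2(k)."""
    c1 = S**2 * (h / S + (J1 - J2) * (np.cos(ky * a) - np.cos(kx * a)))
    A = 2 * K - (J1 + J2) * (np.cos(kx * a) + np.cos(ky * a) - 2)
    c2 = S**2 * np.sqrt(A * (A - 2 * J3))
    return c2 + c1, c2 - c1                       # omega_+, omega_-

def polariton_levels(kx, ky, v_c=1e3, g0=0.05):
    """Eigenvalues of the 3x3 matrix M; lambda_k ~ sqrt(omega_c) is schematic."""
    w_p, w_m = magnon_branches(kx, ky)
    w_c = v_c * np.hypot(kx, ky)
    lam = 2 * g0 * np.sqrt(w_c)
    M = np.array([[w_m, 0.0, lam / 2],
                  [0.0, w_p, lam / 2],
                  [lam / 2, lam / 2, w_c]])
    return np.sort(np.linalg.eigvalsh(M))

# crossing of the two magnon branches along x: cos(k_c a) = 1 - h/(J2 - J1)
k_c = np.arccos(1.0 - h / (J2 - J1)) / a
print("bare crossing at k_c =", k_c)
print("polariton levels at k_c:", polariton_levels(k_c, 0.0))
```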
These features are close to the case in antiferromagnets due to the similar magnon dispersion in the long-wavelength limit <cit.>. Strikingly unlike the antiferromagnet, we find an anticrossing gap emerging at k_x=a^-1arccos [1-h/(J_2-J_1) ] for magnons propagating along x direction. This gap represents the effective magnon-magnon coupling. However, at this point, the photon frequency is much higher than magnon (ω_c/ω_ k, ±>10^5) [see Fig. <ref>(d)]. The magnon-photon coupling is thus highly dispersive, where a resonant energy exchange between magnon and photon is not allowed. But magnons can exchange energy through virtual photons with the cavity, resulting in the indirect magnon-magnon coupling, as illustrated in Fig. <ref>(e). To obtain an analytical understanding, we derive the effective magnon-magnon coupling by using the following unitary transformation U=exp[ λ_ k/2Δ_1(c_ k^†β_ k - β_ k^†c_ k + c_ k^†α_ k - α_ k^†c_ k) +λ_ k/2Δ _2(c_ k^†α_ k^† - c_ kα_ k + c_ k^†β_ k^† - c_ kβ_ k)], and expanding the transformed Hamiltonian to the second order of λ_ k ℋ_eff =U^†ℋU=[(ω ' + λ_ k^2/4Δ _2 + λ_ k^2/4Δ _1)α _ k^†α _ k + (ω ' + 3λ_ k^2/4Δ _1 - λ_ k^2/4Δ _2)β _ k^†β _ k+ (ω _c - λ_ k^2/Δ _1)c_ k^†c_ k] + g_eff(α _ kβ _ k^† + β _ kα _ k^† ), where Δ_1=ω_c-ω' and Δ_2=-ω_c-ω' with the magnon frequency ω' at the crossing point. Here, g_eff=λ_ k^2/2Δ _1 describes the effective magnon-magnon coupling mediated by virtual photons. In the dispersive limit, a careful examination reveals that one of the hybridization modes passes through the degeneracy point where ω_+ = ω_-=ω', in sharp contrast to the observations in resonantly coupled systems <cit.>. This feature can also be seen from the three eigenvectors α_ k^†-β_ k, c_ k-√(2)/2(α_ k^†+β_ k), and c_ k+√(2)/2(α_ k^†+β_ k), in which the state α_ k^†-β_ k is a magnon dark mode (also known as a subradiant mode) that is decoupled from the cavity <cit.>. Figure <ref>(a) demonstrates that the effective coupling induced by virtual photon exchange is dominated by the second-order of the dispersive coupling λ_ k. Since k_x at the anticrossing point is a nonlinear function of both the magnetic field h and exchange ratio J_1/J_2, it indicates a nonlinear dependence of the effective coupling strength g_eff on h and J_1/J_2, as shown in Figs. <ref>(b) and <ref>(c), respectively. The anisotropy constant K merely modifies the magnon frequency, and hardly affects the coupling strength g_eff due to the huge frequency mismatch between magnons and cavity photons, as shown in Fig. <ref>(d). Our theory well explains the numerical calculations. In the above calculations, we focused on the case of magnons propagating along the x direction. Notably, the anisotropic exchange interaction in altermagnets is expected to generate anisotropic magnon-magnon couplings. To illustrate this point, we express the magnon wavevector as k(cosϕx̂+sinϕŷ) with ϕ being the polar angle. We then derive the relation between the wavevector k_c of the crossing point and the angle ϕ, cos(k_cacosϕ)-cos(k_casinϕ) =h/J_1-J_2 , which is a transcendental equation and it can only be solved numerically. As plotted in Fig. <ref>(a), we observe that k_c is symmetric about ϕ = 0 and π due to the mirror symmetry of the y-z plane, while it takes the minimum when ϕ equals 0 or π because the difference in the group velocities of magnons with opposite chiralities reaches the maximum in such cases. When ϕ deviates from these two values, k_c substantially increases. In addition, it is noted that Eq. 
(<ref>) has real solutions only when the magnon propagation angle ϕ falls into two windows [-ϕ_0,ϕ_0] and [π-ϕ_0,π+ϕ_0] with ϕ_0≈√(4+2h/(J_1-J_2))/π, around ϕ = 0 and π. Figure <ref>(a) also shows that the crossing point moves away from the origin as the magnetic field increases. A direct consequence of the ϕ-dependence of k_c is the anisotropy of the effective coupling g_eff, as shown in Fig. <ref>(b). Numerical results compares very well with our analytical formula. Discussion.—Using the magnetic parameters of RuO_2: J_1= 0.8 meV, J_2 = 2 meV, J_3 = -3 meV, and K = 0.1 meV <cit.>, one can estimate the effective magnon-magnon couping as g_eff=0.49 meV for the spin density N/V ∼ 10^23 cm^-3 and h=0.6 meV. The resulting cooperativity, i.e., the ratio of the indirect coupling to the magnon dissipation rate, can reach 100 for the Gilbert damping constant α=1.0 × 10^-3, indicating a strong coupling in the highly dispersive region. It is noted that the dipole-dipole interaction can break the degeneracy of magnons with opposite chiralities, while the resulting direct magnon-magnon coupling is negligibly small compared to the cavity-induced indirect coupling between exchange magnons <cit.>. Its contribution can be further dismissed when considering the long-distance coupling since the dipolar field generally decays as 1/d^3 with d the spatial distance of two magnets. To summarize, we have predicted the cavity mediated long-range interaction between short-wavelength magnons in altermagnets. We showed that a perpendicular magnetic field can shift the magnon degeneracy from the origin to the exchange region, due to the unique chiral magnon splitting in altermagnets. The strong long-distance coupling between magnons with opposite handedness in cavities manifests as an anticrossing of the magnon-polariton spectrum in the highly dispersive region. We derived the analytical formula of the effective magnon-magnon coupling mediated by virtual photons and showed that it is highly anisotropic. Our results may open a new pathway for developing quantum magnonics and enrich the emerging research landscape of altermagnetism. This work was funded by the National Key Research and Development Program under Contract No. 2022YFA1402802 and the National Natural Science Foundation of China under Grant No. 12074057. 99 Kruglyak2010 V. V. Kruglyak, S. O. Demokritov, and D. Grundler, Magnonics, https://iopscience.iop.org/article/10.1088/0022-3727/43/26/264001/metaJ. Phys. D 43, 264001 (2010). Chumak2015 A. V. Chumak, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Magnon spintronics, https://doi.org/10.1038/nphys3347Nat. Phys. 11, 453 (2015). Pirro2021 P. Pirro, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Advances in coherent magnonics, https://doi.org/10.1038/s41578-021-00332-wNat. Rev. Mater. 6, 1114 (2021). Tabuchi2015 Y. Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Coherent coupling between a ferromagnetic magnon and a superconducting qubit, https://www.science.org/doi/full/10.1126/science.aaa3693Science 349, 405 (2015). Mq2022M. Kounalakis, G. E. W. Bauer, and Y. M. Blanter, Analog Quantum Control of Magnonic Cat States on a Chip by a Superconducting Qubit, https://doi.org/10.1103/PhysRevLett.129.037205Phys. Rev. Lett. 129, 037205 (2022). Mq2023D. Xu, X. Gu, H. Li, Y. Weng, Y. Wang, J. Li, H. Wang, S. Zhu, and J. Q. You, Quantum Control of a Single Magnon in a Macroscopic Spin System, https://doi.org/10.1103/PhysRevLett.130.193603Phys. Rev. Lett. 130, 193603 (2023). 
Agrawal2013M. Agrawal, V. I. Vasyuchka, A. A. Serga, A. D. Karenowska, G. A. Melkov, and B. Hillebrands, Direct Measurement of Magnon Temperature: New Insight into Magnon-Phonon Coupling in Magnetic Insulators, https://doi.org/10.1103/PhysRevLett.111.107204Phys. Rev. Lett. 111, 107204 (2013). Streib2019 S. Streib, N. Vidal-Silva, K. Shen, and G. E. W. Bauer, Magnon-phonon interactions in magnetic insulators, https://doi.org/10.1103/PhysRevB.99.184442Phys. Rev. B 99, 184442 (2019). Bozhko2020 D. A. Bozhko, V. I. Vasyuchka, A. V. Chumak, and A. A. Serga, Magnon-phonon interactions in magnon spintronics (review article), https://doi.org/10.1063/10.0000872Low Temp. Phys. 46, 383 (2020). Bai2011L. Bai, M. Harder, Y. P. Chen, X. Fan, J. Q. Xiao, and C.-M. Hu, Spin Pumping in Electrodynamically Coupled Magnon-Photon Systems, https://doi.org/10.1103/PhysRevLett.114.227201Phys. Rev. Lett. 114, 227201 (2015). Braggio2017C. Braggio, G. Carugno, M. Guarise, A. Ortolan, and G. Ruoso, Optical Manipulation of a Magnon-Photon Hybrid System, https://doi.org/10.1103/PhysRevLett.118.107205Phys. Rev. Lett. 118, 107205 (2017). Harder2018M. Harder, Y. Yang, B. M. Yao, C. H. Yu, J. W. Rao, Y. S. Gui, R. L. Stamps, and C.-M. Hu, Level Attraction Due to Dissipative Magnon-Photon Coupling, https://doi.org/10.1103/PhysRevLett.121.137203Phys. Rev. Lett. 121, 137203 (2018). Yuan2022 H. Y. Yuan, Y. Cao, A. Kamra, R. A. Duine, and P. Yan, Quantum magnonics: When magnon spintronics meets quantum information science, https://doi.org/10.1016/j.physrep.2022.03.002 Phys. Rep. 965, 1 (2022). Shiota2020Y. Shiota, T. Taniguchi, M. Ishibashi, T. Moriyama, and T. Ono, Tunable Magnon-Magnon Coupling Mediated by Dynamic Dipolar Interaction in Synthetic Antiferromagnets, https://doi.org/10.1103/PhysRevLett.125.017203Phys. Rev. Lett. 125, 017203 (2020). Chen2018J. Chen, C. Liu, T. Liu, Y. Xiao, K. Xia, G. E. W. Bauer, M. Wu, and H. Yu, Strong Interlayer Magnon-Magnon Coupling in Magnetic Metal-Insulator Hybrid Nanostructures, https://doi.org/10.1103/PhysRevLett.120.217202Phys. Rev. Lett. 120, 217202 (2018). Ndiaye2017A. Sud, C. W. Zollitsch, A. Kamimaki, T. Dion, S. Khan, S. Iihama, S. Mizukami, and H. Kurebayashi, Tunable magnon-magnon coupling in synthetic antiferromagnets, https://doi.org/10.1103/PhysRevB.102.100403Phys. Rev. B 102, 100403(R) (2020). Sklenar2021J. Sklenar and W. Zhang, Self-Hybridization and Tunable Magnon-Magnon Coupling in van der Waals Synthetic Magnets, https://doi.org/10.1103/PhysRevApplied.15.044008Phys. Rev. Applied 15, 044008 (2021). Liensberger2019L. Liensberger, A. Kamra, H. Maier-Flaig, S. Geprägs, A. Erb, S. T. B. Goennenwein, R. Gross, W. Belzig, H. Huebl, and M. Weiler, Exchange-Enhanced Ultrastrong Magnon-Magnon Coupling in a Compensated Ferrimagnet, https://doi.org/10.1103/PhysRevLett.123.117204Phys. Rev. Lett. 123, 117204 (2019). Soykal2010Ö. O. Soykal and M. E. Flatté, Strong Field Interactions between a Nanomagnet and a Photonic Cavity, https://doi.org/10.1103/PhysRevLett.104.077202Phys. Rev. Lett. 104, 077202 (2010). Yuan2017 H. Y. Yuan and X. R. Wang, Magnon-photon coupling in antiferromagnets, https://doi.org/10.1063/1.4977083 Appl. Phys. Lett. 110, 082403 (2017). Lambert2016N. J. Lambert, J. A. Haigh, S. Langenfeld, A. C. Doherty, and A. J. Ferguson, Cavity-mediated coherent coupling of magnetic moments, https://doi.org/10.1103/PhysRevA.93.021803Phys. Rev. A 93, 021803(R) (2016). Rameshti2018B. Z. Rameshti and G. E. W. 
Bauer, Indirect coupling of magnons by cavity photons, https://doi.org/10.1103/PhysRevB.97.014419Phys. Rev. B 97, 014419 (2018). Grigoryan2019V. L. Grigoryan and K. Xia, Cavity-mediated dissipative spin-spin coupling, https://doi.org/10.1103/PhysRevB.100.014415Phys. Rev. B 100, 014415 (2019). Nair2022J. M. P. Nair, D. Mukhopadhyay, and G. S. Agarwal, Cavity-mediated level attraction and repulsion between magnons, https://doi.org/10.1103/PhysRevB.105.214418Phys. Rev. B 105, 214418 (2022). Zhang2023Q. Zhang, Y. Sun, J. Xue, and L. Bai , Distant Magnon-Magnon Coupling Mediated by Nonresonant Photon, https://doi.org/10.3390/sym15020518Symmetry 15, 518 (2023). Johansen2018O. Johansen and A. Brataas, Nonlocal Coupling between Antiferromagnets and Ferromagnets in Cavities, https://doi.org/10.1103/PhysRevLett.121.087204Phys. Rev. Lett. 121, 087204 (2018). Smejkal1L. Šmejkal, J. Sinova, and T. Jungwirth, Emerging Research Landscape of Altermagnetism, https://doi.org/10.1103/PhysRevX.12.040501Phys. Rev. X 12, 040501 (2022). Smejkal2L. Šmejkal, A. Marmodoro, K. Ahn, R. Gonzalez-Hernandez, I. Turek, S. Mankovsky, H. Ebert, S. W. D'Souza, O. Šipr, J. Sinova, and T. Jungwirth, Chiral magnons in altermagnetic RuO_2, https://doi.org/10.48550/arXiv.2211.13806arXiv:2211.13806v1 (2022). Smejkal3L. Šmejkal, J. Sinova, and T. Jungwirth, Beyond Conventional Ferromagnetism and Antiferromagnetism: A Phase with Nonrelativistic Spin and Crystal Rotation Symmetry, https://doi.org/10.1103/PhysRevX.12.031042Phys. Rev. X 12, 031042 (2022). Mazin2023I. I. Mazin, Altermagnetism in MnTe: Origin, predicted manifestations, and routes to detwinning, https://doi.org/10.1103/PhysRevB.107.L100418Phys. Rev. B 107, L100418 (2023). Turek2022I. Turek, Altermagnetism and magnetic groups with pseudoscalar electron spin, https://doi.org/10.1103/PhysRevB.106.094432Phys. Rev. B 106, 094432 (2022). Feng2022Z. Feng, X. Zhou, L. Š mejkal, L. Wu, Z. Zhu, H. Guo, R. González-Hernández, X. Wang, H. Yan, P. Qin, X. Zhang, H. Wu, H. Chen, Z. Meng, L. Liu, Z. Xia, J. Sinova, T. Jungwirth, and Z. Liu, An anomalous Hall effect in altermagnetic ruthenium dioxide, https://doi.org/10.1038/s41928-022-00866-zNat. Electron. 5, 735 (2022). Ouassou2023J. A. Ouassou, A. Brataas, and J. Linder, Josephson effect in altermagnets, https://doi.org/10.48550/arXiv.2301.03603arXiv:2301.03603v1 (2023). SZhang2023S.-B. Zhang, L.-H. Hu, and T. Neupert, Finite-momentum Copper pairing in proximitized altermagnets, https://doi.org/10.48550/arXiv.2302.13185arXiv:2302.13185v1 (2023). Zhou2023X. Zhou, W. Feng, R. Zhang, L. Šmejkal, J. Sinova, Y. Mokrousov, Y. Yao, Crystal Thermal Transport in Altermagnetic RuO_2, https://doi.org/10.48550/arXiv.2305.01410 arXiv:2305.01410v1 (2023). Hariki2023A. Hariki, T. Yamaguchi, D. Kriegner, K. W. Edmonds, P. Wadley, S. S. Dhesi, G. Springholz, L. Šmejkal, K. Výborný, T. Jungwirth, and J. Kuneš, X-ray Magnetic Circular Dichroism in Altermagnetic α-MnTe, https://doi.org/10.48550/arXiv.2305.03588arXiv:2305.03588v1 (2023). Sun2023C. Sun, A. Brataas, and J. Linder, Andreev reflection in altermagnets, https://doi.org/10.48550/arXiv.2303.14236arXiv:2303.14236v1(2023). Bai2023H. Bai, Y. C. Zhang, Y. J. Zhou, P. Chen, C. H. Wan, L. Han, W. X. Zhu, S. X. Liang, Y. C. Su, X. F. Han, F. Pan, and C. Song, Efficient Spin-to-Charge Conversion via Altermagnetic Spin Splitting Effect in Antiferromagnet RuO_2, https://doi.org/10.1103/PhysRevLett.130.216701Phys. Rev. Lett. 130, 216701 (2023). 
SM See Supplemental Material at http://link.aps.org/supplemental/ for the derivation of the dispersion of altermagnetic magnons and the details of effective Hamiltonian generated by unitary transformation, which includes Refs. <cit.>. Blais2004 A. Blais, R. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation, https://doi.org/10.1103/PhysRevA.69.062320Phys. Rev. A 69, 062320 (2004). Blais2007 A. Blais, J. Gambetta, A. Wallraff, D. I. Schuster, S. M. Girvin, M. H. Devoret, and R. J. Schoelkopf, Quantum-information processing with circuit quantum electrodynamics, https://doi.org/10.1103/PhysRevA.75.032329Phys. Rev. A 75, 032329 (2007). David2009 D. Zueco, G. M. Reuther, S. Kohler, and P. Hänggi, Qubit-oscillator dynamics in the dispersive regime: Analytical theory beyond the rotating-wave approximation, https://doi.org/10.1103/PhysRevA.80.033846Phys. Rev. A 80, 033846 (2009). HPT. Holstein and H. Primakoff, Field dependence of the intrinsic domain magnetization of a ferromagnet, https://doi.org/10.1103/PhysRev.58.1098 Phys. Rev. 58, 1098 (1940). Yuan2020 H. Y. Yuan, S. Zheng, Z. Ficek, Q. Y. He, and M. Yung, Enhancement of magnon-magnon entanglement inside a cavity, https://doi.org/10.1103/PhysRevB.101.014419Phys. Rev. B 101, 014419 (2020). Xu2019P. Xu, J. W. Rao, Y. S. Gui, X. Jin, and C.-M. Hu, Cavity-mediated dissipative coupling of distant magnetic moments: Theory and experiment, https://doi.org/10.1103/PhysRevB.100.094415Phys. Rev. B 100, 094415 (2019). Li2022Y. Li, V. G. Yefremenko, M. Lisovenko, C. Trevillian, T. Polakovic, T. W. Cecil, P. S. Barry, J. Pearson, R. Divan, V. Tyberkevych, C. L. Chang, U. Welp, W. Kwok, and V. Novosad, Coherent Coupling of Two Remote Magnonic Resonators Mediated by Superconducting Circuits, https://doi.org/10.1103/PhysRevLett.128.047701Phys. Rev. Lett. 128, 047701 (2022). Zhang2015X. Zhang, C.-L. Zou, N. Zhu, F. Marquardt, L. Jiang, and H. X. Tang, Magnon dark modes and gradient memory, https://doi.org/10.1038/ncomms9914Nat. Commun. 6, 8914 (2015). Sheng2020K. Shen, Magnon Spin Relaxation and Spin Hall Effect Due to the Dipolar Interaction in Antiferromagnetic Insulators, https://doi.org/10.1103/PhysRevLett.124.077201 Phys. Rev. Lett. 124, 077201 (2020). Tacchi2019 S. Tacchi, R. Silvani, G. Carlotti, M. Marangolo, M. Eddrief, A. Rettori, and M. G. Pini, Strongly hybridized dipole-exchange spin waves in thin Fe-N ferromagnetic films, https://doi.org/10.1103/PhysRevB.100.104406Phys. Rev. B 100, 104406 (2019).
http://arxiv.org/abs/2307.05374v1
20230704085609
Multi-Task Learning to Enhance Generazability of Neural Network Equalizers in Coherent Optical Systems
[ "Sasipim Srivallapanondh", "Pedro J. Freire", "Ashraful Alam", "Nelson Costa", "Bernhard Spinnler", "Antonio Napoli", "Egor Sedov", "Sergei K. Turitsyn", "Jaroslaw E. Prilepsky" ]
eess.SP
[ "eess.SP", "cs.LG" ]
Multi-Task Learning to Enhance Generalizability of Neural Network Equalizers in Coherent Optical Systems Sasipim Srivallapanondh(1), Pedro J. Freire(1), Ashraful Alam(1), Nelson Costa(2), Bernhard Spinnler(3), Antonio Napoli(3), Egor Sedov(1), Sergei K. Turitsyn(1), Jaroslaw E. Prilepsky(1) August 1, 2023 ======================================================================================================================================================================================================== (1) Aston University, United Kingdom [email protected] (2) Infinera, Carnaxide, Portugal (3) Infinera, Munich, Germany For the first time, multi-task learning is proposed to improve the flexibility of NN-based equalizers in coherent systems. A “single” NN-based equalizer improves the Q-factor by up to 4 dB compared to CDC, without re-training, even with variations in launch power, symbol rate, or transmission distance. § INTRODUCTION The demand for high-speed data transmission keeps increasing due to upcoming technologies (6G <cit.>, etc.). Coherent optical systems have emerged as a key solution to meet this demand. Nonetheless, the presence of linear and especially nonlinear distortions in fiber-optic systems limits the achievable information rates <cit.>. Various digital signal processing (DSP) techniques have been proposed for nonlinear effect mitigation in long-haul systems <cit.>. Neural networks (NNs) have recently emerged as an effective alternative for channel equalization: NNs have demonstrated an excellent capability to approximate the inverse of the optical channel transfer function, potentially outperforming conventional DSP approaches <cit.>. However, generalizability remains one of the main challenges of NN-based equalizers and is attracting increasing attention <cit.>. Due to different values of accumulated chromatic dispersion (CD) <cit.>, or the presence of channel distortion, the equalizers in the receiver or transmitter require reconfiguration and must be adjustable to compensate for the variation of impairments as the channel characteristics change. In this work, multi-task learning (MTL) <cit.> is proposed to calibrate the NN-based equalizer used for different transmission conditions in coherent systems. MTL leverages shared representations to enhance the adaptability of NN-based equalizers across different system configurations and optical impairments. This approach does not require re-training or additional data when the channel conditions change. Our results demonstrate the effectiveness of an MTL-based NN equalizer, which not only improves the equalization performance but also works efficiently in different transmission regimes and scenarios, leading to more generalizable and flexible solutions for NN-based nonlinear transmission effect mitigation. § MULTI-TASK LEARNING FOR NN-BASED EQUALIZERS Single Task Learning (STL) is a commonly used approach to train NNs. STL refers to training in which the NN learns the representation of the function to provide the output of a “specific” task <cit.>. One advantage of STL is that it allows the NN to focus solely on a specific task, usually leading to very good performance in that task. However, the NN may behave poorly when applied to different tasks (e.g., when the transmission scenario of interest is not included in the initial training dataset). As shown in Fig. <ref>b, if STL is used for channel equalization in different transmission scenarios, multiple NN models are usually required to provide acceptable performance.
In MTL, the NN is trained with multiple datasets from multiple related tasks. In this case, the common representations learned from different but related tasks are shared <cit.>. As depicted in Fig. <ref>c, MTL enables a single NN to equalize the signal in different ranges of launch power, symbol rate, and transmission distance by the joint training on the datasets from different transmission scenarios. MTL allows the NN to generalize better by using the domain-specific information contained in the different related tasks <cit.>. Besides the generalization feature enabled by the MTL, it reduces hardware costs. In fact, the shared weights are fixed, which results in the simplification of the multipliers <cit.>. However, MTL can also lead to some disadvantages compared to the STL. Firstly, there is a trade-off between the performance of individual tasks and the overall performance of the equalizer. Secondly, the degree of information sharing between tasks has to be carefully controlled. Too much sharing can cause a negative information transfer, resulting in performance degradation for each task <cit.>. In this work, we investigate the performance of NN-based equalizers using MTL where a single NN, without re-training, is potentially capable of recovering the transmitted symbol independently of the specific parameters of the transmission systems. The considered transmission setup is altered by changing the symbol rate (R_S) and launch power (P) of data channels and the transmission distance (number of spans, N_Span). For the MTL, the NN is trained with different datasets resulting from the combination of different transmission setups (to share the weights and biases). § NUMERICAL SETUP The dataset was obtained by numerical simulation assuming the transmission of a single 16-QAM dual-polarization channel along the standard single-mode fiber (SSMF). The signal propagation through the fiber was represented by a generalized Manakov equation using the GPU-accelerated split-step Fourier method<cit.>. The SSMF is characterized by the effective nonlinearity coefficient γ = 1.2 (W· km)^-1, chromatic dispersion coefficient D = 16.8 ps/(nm·km), and attenuation parameter α = 0.21 dB/km. At the end of each fiber span, the optical fiber losses were compensated by an erbium-doped fiber amplifier with a noise figure of 4.5 dB. Downsampling and CD compensation (CDC) were performed on the receiver end. Afterwards, the received symbols were normalized and used as inputs of the NN. § METHODOLOGY The NN architecture, depicted in Fig. <ref>a, contains a stack of four bidirectional-Long Short-Term Memory (biLSTM) layers with 100 hidden units in each layer coupled with a dense output layer of 2 neurons to deliver the real and imaginary values for the X-polarization. The biLSTM was selected because it outperformed other types of NNs when used for nonlinear compensation <cit.>. The model took four input features resulting from the in-phase and quadrature components of the complex signal (X_I, X_Q, Y_I, and Y_Q) where X_I+jX_Q and Y_I+jY_Q were the signals in the X and Y polarizations, respectively. A set of 141 input symbols was fed to the NN to recover one symbol at the output. A new set of synthetic data of size 2^18 was randomly created with different system parameters and used in each training epoch to allow the model to learn different transmission scenarios. The entire training was carried out with a mini-batch size of 2000, and a learning rate of 0.001. 
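A compact PyTorch-style sketch of this equalizer and of the multi-task training loop over randomly drawn transmission scenarios is given below; the module and function names are ours, and simulate_dataset is only a stand-in for the split-step Fourier channel simulation described above.

```python
import torch
import torch.nn as nn

class BiLSTMEqualizer(nn.Module):
    """4 x biLSTM(100 hidden units) + dense(2) output, as described in the text (sketch)."""
    def __init__(self, n_features=4, hidden=100, n_layers=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)        # real/imag of the recovered symbol

    def forward(self, x):                           # x: (batch, 141, 4) = [X_I, X_Q, Y_I, Y_Q]
        y, _ = self.lstm(x)
        return self.head(y[:, y.shape[1] // 2, :])  # central symbol of the 141-symbol window

def sample_scenario():
    """Draw one transmission scenario for multi-task (universal) training."""
    P = float(torch.randint(-1, 6, (1,)))           # launch power, dBm
    Rs = 5.0 * float(torch.randint(6, 15, (1,)))    # 30..70 GBd in 5 GBd steps
    n_span = 5 * int(torch.randint(2, 11, (1,)))    # 10..50 spans of 50 km
    return P, Rs, n_span

def simulate_dataset(P, Rs, n_span, n_symbols, window=141):
    """Placeholder returning random tensors; replace with the SSFM channel simulation."""
    return torch.randn(n_symbols, window, 4), torch.randn(n_symbols, 2)

model, loss_fn = BiLSTMEqualizer(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(1200):                           # universal MTL model
    P, Rs, n_span = sample_scenario()
    x, target = simulate_dataset(P, Rs, n_span, n_symbols=2**14)  # 2^18 in the text
    for xb, yb in zip(x.split(2000), target.split(2000)):         # mini-batches of 2000
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
```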
The mean square error (MSE) loss estimator and the classical Adam algorithm  <cit.> were applied when training the weights and biases. The transmission scenarios include R_S ranging from 30 to 70 GBd, number of spans ranging between 10 and 50 (with fixed 50 km span length), and launch power ranging between -1 and 5 dBm. The NNs were trained with MTL or STL as follows: * MTL trained for 1000 epochs with datasets including different N_Span, but fixed R_S = 40 GBd and P = 5 dBm. * MTL trained for 1000 epochs with datasets including different P, but fixed N_Span = 50 and R_S = 40 GBd[This model has one extra input feature, which is the launch power. The model learns the data during the training using a normalized launch power. Therefore, it could not learn to generalize well without knowing the actual launch power.]. * MTL trained for 1000 epochs with datasets including different R_S but fixed N_Span = 50 and P = 5 dBm. * MTL trained for 1200 epochs with datasets including different combinations of N_Span, R_S, and P. This NN is referred to as the “Universal model”[Here, the values of R_S and N_Span are randomly selected from the list of possible baud rate values with 5 GBd increment and the list of span number with the increments of 5 spans, respectively, to decrease the possible number of combinations for the NN's learning.]. * STL (without MTL) trained for 1000 epochs with fixed parameters: R_S = 40 GBd, N_Span = 50 and P = 5 dBm. § RESULTS AND DISCUSSION We considered MTL for multiple symbol rates, transmission distances, and launch powers. To evaluate equalization performance and generalizability, the MTL models were compared to CDC and the STL model trained with a fixed dataset. Variation of transmission distance: Fig. <ref>a shows the optical performance for different reaches considering a fixed launch power of 5 dBm and a signal baud rate of 40 GBd. The STL model performed the best when N_Span was 50 (because it was trained for this specific transmission scenario), significantly outperforming the remaining approaches. However, its performance was significantly impacted in the shorter reaches as it could not generalize. On the other hand, the MTL trained with different N_span showed much better performance than STL for the shorter reaches, achieving a better Q-factor (about 3 dB Q-factor improvement) than CDC only for all considered scenarios. The universal MTL model also showed better performance than the CDC alone, leading to a maximum Q-factor improvement of about 2.5 dB at 50×50 km. Variation of launch powers: Fig. <ref>b depicts the Q-factor as a function of the launch power for a fixed R_S of 40 GBd and transmission distance of 50×50 km. Again, the STL model showed the best gain for launch powers close to the one it was trained with (5 dBm), but revealed quite poor results for the remaining launch powers. In contrast, the universal MTL model enabled a Q-factor improvement exceeding 2 dB for the most relevant launch powers. The MTL, trained with various P but fixed N_SPAN and R_S, revealed the best performance, enabling a Q-factor improvement exceeding 4 dB for the most relevant launch powers. Interestingly, we can see that, at 5 dBm, the MTL outperformed STL. The reason for this may be that the STL is overfitting and cannot adapt to the unseen test data as effectively as the MTL model, which is more generalized. Ref. <cit.> supported the claim that a more generalized model can perform better. Variation of symbol rates: Fig. 
<ref>c illustrates the Q-factor as a function of the data signal baud rate for a fixed transmission distance and launch power of 50×50 km and 5 dBm, respectively. STL led to very good results for the 40 GBd transmission scenario (training scenario) but showed very poor generalization capability. The MTL, trained with multiple R_S but fixed N_Span and P, enabled a Q-factor improvement of up to 4.5 dB with respect to the CDC only, whereas the universal MTL model showed up to 2.5 dB improvement. The MTL provided a good gain in most cases. The aforementioned results show that, although STL may lead to outstanding performance in specific transmission conditions, it is not suitable for real-world system applications because it lacks adaptability to dynamic optical network parameters. MTL overcomes this limitation, allowing the equalizer to be more flexible, but at the cost of a small performance degradation compared to models trained only for a specific task. § CONCLUSIONS Multi-task learning is proposed to allow a “single” NN-based equalizer, without re-training, to recover received symbols when the transmission scenarios change. The results showed that the MTL can provide up to 4 dB improvement in Q-factor with respect to CDC alone even if the transmission distance, launch power, and symbol rate vary, thus highlighting the adaptability of the MTL NN-based equalizer to the real-world dynamic optical network. Acknowledgements: This work is supported by the EU H2020 Marie Skłodowska-Curie Action project MENTOR (No. 956713), the SMARTNET EMJMD program under Grant 586686-EPP-1-2017-1-U.K.-EPPKA1-JMDMOB, the EPSRC project TRANSNET (EP/R035342/1), and the Horizon Europe project ALLEGRO, GA n. 101092766.
http://arxiv.org/abs/2307.02118v1
20230705084542
Universal Sums of Triangular Numbers and Squares
[ "Zichen Yang" ]
math.NT
[ "math.NT", "11E25, 11F27, 11F30" ]
In this paper, we study universal sums of triangular numbers and squares. Specifically, we prove that a sum of triangular numbers and squares is universal if and only if it represents 1,2,3,4,5,6,7,8,10,13,14,15,18,19,20,23,27,28,34,41,47, and 48. § INTRODUCTION For an integer m≥3, we define the generalized m-gonal number as the function P_m→ given by P_m(x)(m-2)x^2-(m-4)x/2, with x∈. A sum of generalized polygonal numbers is a function F^r→ of the form F(x_1,…,x_r)=a_1P_m_1(x_1)+⋯+a_rP_m_r(x_r) with integers 1≤ a_1≤⋯≤ a_r and m_1,…,m_r≥3. We say that an integer n∈ is represented by a sum F of generalized polygonal numbers, or alternatively a sum F of generalized polygonal numbers represents an integer n∈, if there exist x_1,…,x_r∈ such that n=F(x_1,…,x_r). If a sum of generalized polygonal numbers represents every positive integer, we say that it is a universal sum. For any integer 𝔐≥1, we denote by 𝒞_𝔐 the class consisting of sums of generalized polygonal numbers of the form (<ref>) with parameters m_1,…,m_r≥3 such that (m_1-2,…,m_r-2)≤𝔐. In <cit.>, Kane and the author showed that for any integer 𝔐≥1, there exists a minimal constant Γ_𝔐>0 such that every sum in 𝒞_𝔐 is universal if and only if it represents every positive integer up to the constant Γ_𝔐. Moreover, for any real number ε>0, we have an asymptotic upper bound for the constant Γ_𝔐 as follows, Γ_𝔐≪_ε𝔐^43+ε. As it was pointed out in <cit.>, the implied constant in (<ref>) is ineffective, because universal ternary sums in 𝒞_𝔐 are not completely classified for any integer 𝔐≥3. On the other hand, universal ternary sums in 𝒞_1 and 𝒞_2 were classified in <cit.> using various arithmetic methods. Therefore the constants Γ_1 and Γ_2 are effectively computable. In fact, Bosma and Kane showed that Γ_1=8 in <cit.> because the class 𝒞_1 consists of sums of triangular numbers. The purpose of this paper is to show that Γ_2=48. Since the class 𝒞_2 consists of sums of triangular numbers and squares, we establish the following finiteness theorem for such sums. A sum of triangular numbers and squares is universal if and only if it represents every integer in the following set {1,2,3,4,5,6,7,8,10,13,14,15,18,19,20,23,27,28,34,41,47,48}. In particular, we have Γ_2≤48. Moreover, we can show that every integer appearing in the set (<ref>) is critical in the following sense. For any integer t in the set (<ref>), there exists a sum of triangular numbers and squares representing every positive integer except t. Therefore we have Γ_2=48. The proof of the main theorems uses a standard technique of Bhargava <cit.>, which is the so-called escalator tree. To be precise, the escalator tree T for sums of triangular numbers and squares is a rooted tree constructed as follows.
We define the root to be F=0 with depth 0, and then we construct the nodes of depth r+1 from the nodes of depth r inductively for any integer r≥0 as follows. If a node of depth r is universal, then it is a leaf of the tree. If a sum of triangular numbers and squares F is not universal, we define the truant t(F) of the sum F to be the least positive integer not represented by F; the truant of the root is defined to be 1 by convention. The children of a non-universal node F(x_1,…,x_r)=∑_j=1^r a_jP_{m_j}(x_j) of depth r are the sums of triangular numbers and squares of the form F(x_1,…,x_r)+a_{r+1}P_{m_{r+1}}(x_{r+1}) such that a_r≤ a_{r+1}≤ t(F) and 3≤ m_{r+1}≤4, with the additional restriction that m_r≤ m_{r+1} if a_r=a_{r+1} to avoid repeated nodes. The explicit construction in the rest of the paper implies that the escalator tree T has finitely many nodes. It then follows that the set (<ref>) is exactly the set consisting of the truants of the non-universal nodes in the escalator tree. So to prove Theorem <ref>, it suffices to prove the universality of the universal nodes and to calculate the truants of the non-universal nodes. We can study the nodes of depth r≤3 using elementary methods. The truant of the root node is 1 by convention. So it is clear that there are two nodes of depth r=1, P_3 and P_4, whose truants are 2. It follows that the nodes of depth r=2 are P_3+P_3, P_3+P_4, P_3+2P_3, P_3+2P_4, P_4+P_4, P_4+2P_3, and P_4+2P_4. We note that there is no need to study the node P_4+2P_3 and its descendants because of the following observation of Euler. The nodes P_3+P_3 and P_4+2P_3 represent the same set of integers. This is an observation of Euler in communication with Goldbach. See <cit.>. So we ignore the node P_4+2P_3 and its descendants in the following discussion. It is easy to verify that no node of depth r=2 is universal; the truants are given in Table <ref>. The truants of the non-universal nodes of depth r=2:
F:    P_3+P_3   P_3+P_4   P_3+2P_3   P_3+2P_4   P_4+P_4   P_4+2P_4
t(F): 5         8         4          4          3         5
Hence we turn to study the nodes of depth r=3. There are universal nodes of depth r=3, and they were classified in a series of papers <cit.>. We summarize the results in the next proposition. For nodes of depth r=3 in the escalator tree T, we have: * Every child of the node P_3+P_3 is universal except P_3+P_3+3P_3, P_3+P_3+3P_4, and P_3+P_3+5P_4. * Every child of the node P_3+P_4 is universal except P_3+P_4+5P_3, P_3+P_4+5P_4, P_3+P_4+6P_4, P_3+P_4+7P_3, and P_3+P_4+7P_4. * Every child of the node P_3+2P_3 is universal. * Every child of the node P_3+2P_4 is universal except P_3+2P_4+3P_3, P_3+2P_4+3P_4, and P_3+2P_4+4P_4. * Every child of the node P_4+P_4 is universal except P_4+P_4+P_4, P_4+P_4+2P_4, P_4+P_4+3P_3, and P_4+P_4+3P_4. * Every child of the node P_4+2P_4 is universal except P_4+2P_4+2P_4, P_4+2P_4+3P_3, P_4+2P_4+3P_4, P_4+2P_4+4P_4, P_4+2P_4+5P_3, and P_4+2P_4+5P_4. The truants of the non-universal nodes are given in Table <ref>. See the papers <cit.> for the classification of universal ternary sums; from that classification it is straightforward to calculate the truants of these non-universal nodes. The truants of the non-universal nodes of depth r=3:
Node:   P_3+P_3+3P_3   P_3+P_3+3P_4   P_3+P_3+5P_4   P_3+P_4+5P_3   P_3+P_4+5P_4   P_3+P_4+6P_4   P_3+P_4+7P_3
Truant: 8              8              19             13             13             47             20
Node:   P_3+P_4+7P_4   P_3+2P_4+3P_3  P_3+2P_4+3P_4  P_3+2P_4+4P_4  P_4+P_4+P_4    P_4+P_4+2P_4   P_4+P_4+3P_3
Truant: 20             7              7              20             7              14             6
Node:   P_4+P_4+3P_4   P_4+2P_4+2P_4  P_4+2P_4+3P_3  P_4+2P_4+3P_4  P_4+2P_4+4P_4  P_4+2P_4+5P_3  P_4+2P_4+5P_4
Truant: 6              7              23             10             14             10             10
This finishes the discussion of nodes of depth r=3, and we can move on to the study of nodes of depth r=4, which is the major task in this paper. To study nodes of depth r=4, a combination of arithmetic methods and analytic methods using the theory of modular forms is applied. The rest of the paper is organized as follows. In Section <ref>, we study the representations by the children of the underlined nodes in Table <ref>, together with another non-underlined node P_3+P_4+6P_4, using arithmetic methods. In Section <ref>, we study the other nodes of depth r=4 using an analytic method based on the theory of modular forms. In the last section, we study nodes of depth r≥5 and prove the main theorems. § NODES OF DEPTH R=4: ARITHMETIC METHODS In this section, we apply arithmetic methods to study the representations of these nodes. Actually, we can prove every result in this section with the analytic method used in the next section. However, the analytic method is usually more time-consuming than arithmetic methods. Thus, we apply arithmetic methods whenever they are applicable. We adopt the formulation in terms of quadratic forms with congruence conditions. More precisely, for a sum F of triangular numbers and squares, we construct a quadratic form Q with congruence conditions, together with two integers μ≥1 and ρ, such that for every integer n we have r_F(n)=r_Q(μ n+ρ), where we denote by r_F(n) the number of representations of n by the sum F and by r_Q(n) the number of representations of n by the quadratic form Q with congruence conditions. The construction is given by completing the squares in the sum F. If F=a_1P_3+⋯+a_{r_1}P_3+b_1P_4+⋯+b_{r_2}P_4 is a sum of triangular numbers and squares with r_1≥1, then we take the quadratic form Q with congruence conditions as Q(x_1,…,x_{r_1},y_1,…,y_{r_2}) := a_1(2x_1+1)^2+⋯+a_{r_1}(2x_{r_1}+1)^2+2b_1(2y_1)^2+⋯+2b_{r_2}(2y_{r_2})^2, with μ := 8 and ρ := a_1+⋯+a_{r_1}. If F is a sum of squares, it is a quadratic form without congruence conditions; in this case, we simply take Q=F with μ=1 and ρ=0. In both cases, it is easy to verify that (<ref>) holds for every integer n. The arithmetic method used in the next proposition is based on the fact that the quadratic form Q with congruence conditions constructed above corresponding to each underlined node in Table <ref> is of class number one, which means that the genus of Q consists of one class, that is, the class of Q. In this case, by the local-global principle <cit.>, an integer is represented by Q if and only if it is represented by Q over _p for every prime number p. Therefore we can find a large set of positive integers represented by Q by local computations. With this knowledge, we can then determine the sets of positive integers represented by the children of this non-universal node. Every child of the underlined nodes in Table <ref> is universal, except the children P_3+P_4+5P_3+10P_3, P_3+P_4+5P_3+10P_4, P_3+P_4+5P_4+5P_4, and P_4+2P_4+5P_4+5P_4. For these non-universal children, we have * P_3+P_4+5P_3+10P_3(^4)⊇{n∈ | n≠23 and n≢93,123 (mod 125)}. * P_3+P_4+5P_3+10P_4(^4)={n∈ | n≠23}. * P_3+P_4+5P_4+5P_4(^4)={n∈ | n≠18}. * P_4+2P_4+5P_4+5P_4(^4)={n∈ | n≠15}.
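As a quick numerical sanity check, and not as part of the proofs that follow, the truants collected in the tables above can be reproduced by a short brute-force search. The sketch below is illustrative only; the helper names are ours, and the search is far slower than what the later verifications would require.

# Brute-force check of truants (illustrative only).
def polygonal_values(a, m, bound):
    """All values a*P_m(x) <= bound over integer x (m=3: triangular, m=4: squares)."""
    vals, x = set(), 0
    while True:
        hit = False
        for s in (x, -x):
            p = a * ((m - 2) * s * s - (m - 4) * s) // 2
            if 0 <= p <= bound:
                vals.add(p)
                hit = True
        if not hit and x > 0:
            return sorted(vals)
        x += 1

def represented(coeffs, bound):
    """Integers <= bound represented by sum_i a_i * P_{m_i}, coeffs = [(a_i, m_i), ...]."""
    reach = {0}
    for a, m in coeffs:
        vals = polygonal_values(a, m, bound)
        reach = {r + v for r in reach for v in vals if r + v <= bound}
    return reach

def truant(coeffs, bound=1000):
    reach = represented(coeffs, bound)
    missing = [n for n in range(1, bound + 1) if n not in reach]
    return missing[0] if missing else None

print(truant([(1, 3), (1, 4), (6, 4)]))   # node P_3 + P_4 + 6P_4; the table gives 47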
We prove the universality of the children of the node P_3+P_3+3P_3 to illustrate the method. For the children of other nodes, we omit the details. From Table <ref>, we know that the truant of the node P_3+P_3+3P_3 is 8. So we have to show that every child of the form P_3+P_3+3P_3+a_4P_{m_4} with 3≤ a_4≤8 and m_4=3,4 is universal. By the triangular theorem of eight in <cit.>, every child of the form P_3+P_3+3P_3+a_4P_3 with 3≤ a_4≤8 is universal. It remains to prove the universality of every child of the form P_3+P_3+3P_3+a_4P_4 with 3≤ a_4≤ 8. Using the above construction of the quadratic form with congruence conditions, it is equivalent to showing that Q(x,y,z) := (2x+1)^2+(2y+1)^2+3(2z+1)^2 = 8n+5-2a_4(2w)^2 is solvable with integers x,y,z,w for any positive integer n. Because the quadratic form Q with the congruence conditions is of class number one, we see that it represents any positive integer of the form 8n+5 such that n≢8,17,23,26 (mod 27) by Hensel's lemma. Therefore the equation (<ref>) is solvable with w=0 for such integers. Furthermore, if n≢26 (mod 27) or a_4≠3, it is solvable with w=1 for any positive integer n≡8,17,26 (mod 27) such that n≥ a_4, and if a_4=3, it is solvable with w=2 for any positive integer n≡26 (mod 27) such that n≥4a_4. A quick calculation reveals that it is solvable for any integer n≤4a_4-1 such that n≢23 (mod 27). Hence the equation (<ref>) is solvable for any integer n≢23 (mod 27). It remains to show that it is solvable for any positive integer n≡23 (mod 27). Note that in this case the integer 8n+5 is divisible by 9. Therefore, if we write n=27m+23 for some integer m≥0, then it reduces to solving (<ref>) for n=3m+2. If 3m+2≢23 (mod 27), then we are done. Otherwise we can repeat this argument until it is solvable. Hence every child of the form P_3+P_3+3P_3+a_4P_4 with 3≤ a_4≤8 is universal. In a similar manner, we can prove the universality of the universal children and determine the sets of positive integers represented by the non-universal children of the other underlined nodes in Table <ref>. For the underlined nodes in Table <ref> which are sums of squares, one may refer to <cit.> for the characterization of the positive integers represented by these nodes. In fact, using the analytic method given in the next section, we can prove that P_3+P_4+5P_3+10P_3(^4)⊇{n∈ | 8n+16≠200·25^a with a∈}, though the weaker result in Proposition <ref> is enough for showing that every child of the node P_3+P_4+5P_3+10P_3 in the escalator tree T is universal. For proving the next lemma, we need a more advanced arithmetic tool developed in <cit.>. The node P_3+P_4+6P_4 represents every positive integer n≡0,1 (mod 5). It is equivalent to showing that Q(x,y,z) := x^2+3y^2+8z^2 = 8n+1 is solvable with integers x≡1 (mod 2) and y≡0 (mod 4) for any positive integer n≡0,1 (mod 5). The congruence conditions are superfluous because x^2+3y^2+8z^2=8n+1 implies that x≡1 (mod 2) and y≡0 (mod 4) by passing to ℤ/8ℤ. The genus of the quadratic form Q consists of two classes, one of which contains the quadratic form Q_1(x,y,z) := Q(x,y,z) and the other contains another quadratic form Q_2(x,y,z) := 3x^2+3y^2+4z^2+2xy-2xz+2yz. We claim that Q_1 represents any positive integer n≡1 (mod 5) that is not of the form 4t^2 for any integer t. It is easy to verify that any positive integer n≡1 (mod 5) is locally represented by Q_1. If Q_1 represents n, then we are done. Otherwise we see that Q_2 represents n by the local-global principle <cit.>.
Set R_1 ±(0,0,2)∪±(1,2,3)∪±(1,2,4)∪±(2,4,0)∪±(2,4,4), R_2 ±(2,0,3)∪±(2,1,1)∪±(2,2,4)∪±(2,3,2), R_3 ±(1,3,0)∪±(1,3,4)∪±(2,1,2), R_4 ±(0,2,2)∪±(2,2,1), R_5 ±(2,3,0), where (a,b,c) stands for the coset (a,b,c)+5^3 in ^3/5^3. Since n≡15, it is not hard to show that any representation (x,y,z) of n lies in the union R_1∪ R_2∪ R_3∪ R_4∪ R_5. By a computer searching, we have the following identities relating Q_1 with Q_2 as follows, I_1  Q_1(4x+8y+5z/5,-3x-y+5z/5,-2x+y/5)=Q_2(x,y,z), I_2  Q_1(4x+8y-z/5,-3x-y-3z/5,-2x-y+3z/5)=Q_2(x,y,z), I_3  Q_1(8x+4y-5z/5,-x-3y-5z/5,-x+2y/5)=Q_2(x,y,z), I_4  Q_1(8x+4y+z/5,-x-3y+3z/5,-x+2y+3z/5)=Q_2(x,y,z). For any 1≤ i≤ 4, assuming that (x,y,z)∈ R_i is a representation of n≡15 by Q_2, then we can construct a representation of n by Q_1 via the identity I_i. Therefore if Q_2 admits a representation of n≡15 in R_1∪ R_2∪ R_3∪ R_4, then we are done. So it remains to deal with representations (x,y,z)∈ R_5 by Q_2. We set T1/5[ 0 -5 0; -1 4 6; -4 -4 -1; ]. Suppose that (x,y,z)∈±(2,3,0) is a representation of n by Q_2. Then it is easy to verify that T(x,y,z) is another representation of n by Q_2 lying in R_1∪ R_2∪ R_3∪ R_4∪ R_5 by the following identity Q_2(T(x,y,z))=Q_2(-y,-x+4y+6z/5,-4x-4y-z/5)=Q_2(x,y,z). If T^k(x,y,z) is a representation in R_1∪ R_2∪ R_3∪ R_4 for any positive integer k≥1, then we are done. Otherwise by <cit.>, the representation (x,y,z) lies in a one-dimensional eigenspace of T with eigenvalue (T). Since the primitive eigenvector in this eigenspace is (1,-1,0) and Q_2(1,-1,0)=4, we see that any positive integer n≡15 not of the form 4t^2 with any integer t∈ is represented by Q_1. Arguing in a similar manner with the same collection of identities, one can show that any positive integer n≡-15 not of the form 4t^2 with any integer t∈ is represented by Q_1. Back to the proof of the lemma, it is easy to see that 8n+1≡±15 if and only if n≡0,15 and every integer 8n+1 is not of the form 4t^2 with any integer t∈. Therefore the node P_3+P_4+6P_4 represents any positive integer n≡0,15. Every child of the node P_3+P_4+6P_4 of the form P_3+P_4+6P_4+a_4P_3 such that 7≤ a_4≤ 47 and a_4≡1,45, or of the form P_3+P_4+6P_4+a_4P_4 such that 6≤ a_4≤47 and a_4≡2,35 is universal. This is straightforward following Proposition <ref>. § NODES OF DEPTH R=4: ANALYTIC METHODS For the remaining nodes of depth r=4, we apply analytic methods to study the representations. Because any remaining node F=a_1P_3+⋯+a_r_1P_3+b_1P_4+⋯+b_r_2P_4 such that r_1+r_2=r contains at least a multiple of an triangular number P_3, we can associate to it the quadratic form Q with congruence conditions as follows, Q(x_1,…,x_r_1,y_1,…,y_r_2) a_1(2x_1+1)^2+⋯+a_r_1(2x_r_1+1)^2+2b_1(2y_1)^2+⋯+2b_r_2(2y_r_2)^2, with integers μ8 and ρ a_1+⋯+a_r_1, constructed in Section <ref>. So we have the relation r_F(n)=r_Q(μ n+ρ), for any integer n∈. Let A be the diagonal matrix with entries 2a_1,…,2a_r_1,4b_1,…,4b_r_2, which is the Hessian matrix of the quadratic form Q with congruence conditions. The discriminant D of Q is defined as the determinant of the matrix A. The level N of Q is defined as the least positive integer N such that diagonal entries of the matrix NA^-1 are even and off-diagonal entries of the matrix NA^-1 are integral. Let be the complex upper half-plane. The theta series Θ_Q→ associated to the quadratic form Q(x_1,…,x_r_1,y_1,…,y_r_2) with congruence conditions is defined as Θ_Q(τ)∑_x_i,y_j∈e^2π iτ Q(x_1,…,x_r_1,y_1,…,y_r_2)=∑_n≥0r_Q(n)e^2π inτ, for any complex number τ∈. 
By <cit.>, the theta series Θ_Q is a modular form of weight 2 for the congruence subgroup Γ_0(4N) with the Nebentypus χ_D, where the character χ_D is given by the Kronecker-Jacobi symbol defined in <cit.>. By the theory of modular forms, it splits into a sum of an Eisenstein series E_Q and a cusp form G_Q. Let a_E_Q(n) and a_G_Q(n) be the n-th Fourier coefficients of E_Q and G_Q, respectively. Clearly we have r_Q(n)=a_E_Q(n)+a_G_Q(n), for any integer n∈. To prove that Q represents the integer μ n+ρ, a typical approach is to bound the Fourier coefficient a_E_Q(μ n+ρ) from below and bound the Fourier coefficient a_G_Q(n) from above for any integer n≥0. We give the bounds in Proposition <ref> and Proposition <ref>. Suppose that F=∑_i=1^4a_iP_m_i is a sum of triangular numbers and squares such that m_i=3 for at least one 1≤ i≤ 4. Let Q be the corresponding quadratic form with congruence conditions of discriminant D and level N, together with integers μ and ρ constructed in Section <ref>. If Q represents any integer μ n+ρ over _p for any prime number p, then there exists a constant C_E>0 such that a_E_Q(μ n+ρ)≥ C_E·σ_χ_D(μ n+ρ)·(μ n+ρ), for any integer n≥0 such that _p(μ n+ρ)≤1 for any odd prime number p| N, where σ_χ(n) is a twisted divisor function defined as, σ_χ(n)∑_d| nχ(d)/d, for a Dirichlet character χ modulo N. Moreover, the constant C_E has an explicit expression given as follows, C_Eπ^2/L(2,χ_D)√(D)∏_p| N(b_p/1-χ_D(p)p^-2), with a positive rational number b_p>0 depending only on Q, μ, and ρ for any prime number p| N. We use the Siegel–Minkowski formula to rewrite the Fourier coefficient a_E_Q(n) in terms of a product of local densities. We use the formulation of the Siegel–Minkowski formula given in <cit.>. Then we have a_E_Q(n)=4π^2n/√(D)∏_pβ_p(n;Q), for any integer n≥0, where if Q is a quadratic form with congruence conditions of the form Q(x_1,x_2,x_3,x_4)=∑_1≤ i,j≤4a_i,j(μ_ix_i+ρ_i)(μ_jx_j+ρ_j), with integers a_i,j,μ_i,ρ_i∈ for any 1≤ i,j≤4, then the local density β_p(n;Q) is given by β_p(n;Q)=∫__p∫_∏_i=1^4(ρ_i+μ_i_p)e_p(σ(∑_1≤ i,j≤4a_i,jx_ix_j-n))dx_1⋯dx_4dσ, for any prime number p and any p-adic integer n∈_p, where the Haar measure on the field _p is normalized so that the subset _p is of measure 1 and the function e_p_p→ is defined by e_p(α) e^-2π ia for some rational number a∈⋃_t=1^∞p^-t such that α-a∈_p. To prove the lower bound, we evaluate the local densities β_p(μ n+ρ;Q) for any prime number p. Plugging in the quadratic form Q with congruence conditions corresponding to the sum F=∑_i=1^4a_iP_m_i and changing variables, we see that β_p(μ n+ρ;Q)=p^-_p(4)I_p(2n;ϕ_Q), where ϕ_Q(x_1,…,x_4)=∑_i=1^42a_iP_m_i(x_i) and I_p(n;ϕ) is defined as I_p(n;ϕ)=∫__p∫__p^4e_p(σ(ϕ(x_1,…,x_4)-n))dx_1⋯dx_4dσ, for any prime number p, any p-adic integer n∈_p, and any polynomial ϕ∈_p[x_1,…,x_4]. Then we use the explicit formulae given in <cit.> to bound the integral I_p(2n;ϕ_Q) for any prime number p. If p is a prime number such that p∤ N, we can replace the polynomial ϕ_Q by a unimodular quaternary diagonal quadratic form with discriminant D by changing variables in the integral I_p. Therefore, applying <cit.>, we have β_p(μ n+ρ;Q)=I_p(2n;ϕ_Q)=(1-χ_D(p)/p^2)∑_k=0^tχ_D(p)^k/p^k, where we set t_p(μ n+ρ). If p is an odd prime number such that p| N, with the assumption _p(μ n+ρ)≤1, the integral I_p(2n;ϕ_Q) is determined by the congruence class of the integer n modulo p by <cit.>. 
Since Q represents any integer μ n+ρ over _p, there exists a positive rational number b_p>0 such that β_p(μ n+ρ;Q)≥ b_p, for any integer n≥0 such that _p(μ n+ρ)≤1. If p=2, we see that 𝔱_d<∞ in <cit.> because m_i=3 for at least one 1≤ i≤ 4. Therefore the integral I_2(2n;ϕ_Q) is determined by the congruence class of the integer n modulo 2^𝔱_d-1. Since Q represents any integer μ n+ρ over _2, there exists a positive rational number b_2>0 such that β_2(μ n+ρ;Q)≥ b_2, for any integer n≥0. Plugging these bounds into the Siegel–Minkowski formula (<ref>), we obtain the desired lower bound for the Eisenstein part. Suppose that F=∑_i=1^4a_iP_m_i is a sum of triangular numbers and squares. Let Q be the corresponding quadratic form with congruence conditions of level N, constructed in Section <ref>. Then there exists a constant C_G>0 such that for any integer n≥0, we have |a_G_Q(n)|≤ C_G·σ_0(n)· n^1/2, where σ_0(n) is the number of divisors of an integer n. Moreover, the constant C_G is given by C_G∑_i∈ I∑_σ K_i→|σ(γ_i)|/√(d_i), if we have G_Q(τ)=∑_i∈ I∑_σ K_i→σ(γ_i)g_i|_V(d_i)^σ(τ), where γ_i∈ is a complex number, d_i is an integer, and g_i is a newform of weight 2 and level N_i such that N_id_i|4N for each i∈ I, and each inner sum runs over the set of embeddings of the base field K_i of the newform g_i into the field of complex numbers. This follows from Deligne's optimal bound for the Fourier coefficients of newforms <cit.>. From the expressions of the lower bound and the upper bound, we have to bound the quotient of divisor functions to compare the Eisenstein part with the cuspidal part. Let χ be a Dirichlet character. For any real number ε>0, we have σ_χ(n)/σ_0(n)≥ C_εn^-ε, for some constant C_ε>0. We use the trick of Ramanujan to bound the quotient of the divisor functions. Note that n^εσ_χ(n)/σ_0(n) is multiplicative for any real number ε>0. So it suffices to bound p^ε tσ_χ(p^t)/σ_0(p^t), for any prime number p and any integer t≥1. Because the numerator in the ratio (<ref>) dominates the denominator as the integer n is sufficiently large, it suffices to bound for finitely many prime numbers p, which depends on the fixed real number ϵ>0. For each of these prime numbers p, the ratio (<ref>) is an increasing function in the variable t as t is sufficiently large. Therefore, we can bound the ratio (<ref>) from below by a constant C_ε>0. For each of the remaining nodes, it is easy to verify that the assumptions of Proposition <ref> holds and it follows that the constant C_E is always positive. Thus, combining Proposition <ref>, Proposition <ref>, and Proposition <ref>, the quadratic form Q with congruence conditions represents any sufficiently large positive integer of the form μ n+ρ such that _p(μ n+ρ)≤1 for any odd prime number p| N. Then it remains to check the representability of finitely many positive integers, which can be done by a computer. With this method, we study the remaining nodes of depth r=4 in the next proposition. Every child of the non-underlined nodes in Table <ref> is universal except the nodes P_3+P_4+7P_3+7P_3, P_3+P_4+7P_3+14P_3, P_3+P_4+7P_3+14P_4, P_3+P_4+7P_4+14P_3, P_3+P_4+7P_4+7P_4, P_3+P_4+7P_4+14P_4, P_4+2P_4+5P_3+10P_3, P_4+2P_4+5P_3+8P_4, and P_4+2P_4+5P_3+10P_4. For these non-universal children, we have * P_3+P_4+7P_3+7P_3(^4)⊇{n∈ | 8n+15≠343·49^a with a∈}. * P_3+P_4+7P_3+14P_3(^4)⊇{n∈ | 8n+22≠294·49^a,686·49^a with a∈}. * P_3+P_4+7P_3+14P_4(^4)⊇{n∈ | 8n+8≠280·49^a with a∈}. * P_3+P_4+7P_4+14P_3(^4)⊇{n∈ | 8n+15≠343·49^a with a∈}. 
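Concretely, the three bounds above combine into an explicit threshold: for integers satisfying the local and valuation hypotheses, r_Q(μ n+ρ)>0 is guaranteed once C_E·C_ε·(μ n+ρ)^{1-ε} exceeds C_G·(μ n+ρ)^{1/2}, that is, once μ n+ρ exceeds (C_G/(C_E C_ε))^{1/(1/2-ε)}. The few lines below only illustrate this arithmetic (the side conditions at the ramified primes are treated separately in the text), with the constants reported in the next section for the node P_3+P_3+5P_4+19P_3 used as example inputs.

# Illustration of the threshold implied by the Eisenstein, cusp-form, and divisor bounds.
def representation_threshold(C_E, C_G, C_eps, eps):
    # smallest N with C_E * C_eps * N**(1 - eps) > C_G * N**0.5
    return (C_G / (C_E * C_eps)) ** (1.0 / (0.5 - eps))

# Example constants quoted below for P_3 + P_3 + 5P_4 + 19P_3:
print(representation_threshold(0.236, 12.645, 0.482, 0.25))  # about 1.5e8

The value is roughly 1.5·10^8, consistent with the finite check up to 152402970 reported for that node in the next section.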
* P_3+P_4+7P_4+7P_4(^4)⊇{n∈ | 8n+1≠217·49^a,385·49^a with a∈}. * P_3+P_4+7P_4+14P_4(^4)⊇{n∈ | 8n+1≠329·49^a with a∈}. * P_4+2P_4+5P_3+10P_3(^4)⊇{n∈ | 8n+15≠175·25^a with a∈}. * P_4+2P_4+5P_3+8P_4(^4)={n∈ | n≠28}. * P_4+2P_4+5P_3+10P_4(^4)={n∈ | n≠20}. The proofs for these nodes are essentially the same. So we illustrate the method using the node P_3+P_3+5P_4+19P_3 and omit the details for other nodes. Let Q be the quadratic form with congruence conditions corresponding to the node P_3+P_3+5P_4+19P_3. We have μ=8 and ρ=21. By a computer program implemented in SageMath <cit.>, we find that the constants are given by C_E≈0.236, C_G≈12.645, C_ε≈0.482, with ε=0.25. Therefore by checking the representability up to 152402970 with a computer, we see that Q represents every integer of the form 8n+21 with n≥0 such that _5(8n+21)≤1 and _19(8n+21)≤1. It is easy to verify that Q represents the integers 5·5^2,5·19^2,13·5^2 and 13·19^2. Hence we can conclude that Q represents every integer of the form 8n+21 with n≥0, which is equivalent to the universality of the node P_3+P_3+5P_4+19P_3. In principle, the proofs for other nodes are similar. However, naive application of the analytic method to prove the universality of every child of the node P_3+P_4+6P_4 requires us to compute newforms of levels up to 9024 with highly non-trivial characters, which is extremely slow in SageMath. So we apply the following tricks to reduce the level. Note that the truant of the node P_3+P_4+6P_4 is 47. By Proposition <ref>, we see that it suffices to prove the universality of the child of the form P_3+P_4+6P_4+a_4P_m_4 for 7≤ a_4≤47 with a_4≡0,2,35 when m_4=3 and for 6≤ a_4≤47 with a_4≡0,1,45 when m_4=4. For any child of the form P_3+P_4+6P_4+a_4P_3 with a_4≡12, let Q be the quadratic form with congruence conditions corresponding to the node P_3+P_4+6P_4+a_4P_3. We have μ=8 and ρ=1+a_4 and the level of Q is 48a_4. So naively we have to compute newforms of levels up to 192a_4. Since ρ≡02, we see that the Fourier coefficient a_G_Q(n) are only supported on even integers n and every integer d_i in the splitting (<ref>) is divisible by 2. Hence, it suffices to compute newforms of level up to 96a_4. For any child of the form P_3+P_4+6P_4+a_4P_4 with a_4≡12, let Q(x,y,z,w) (2x+1)^2+2(2y)^2+12(2z)^2+2a_4(2w)^2 be the quadratic form with congruence conditions corresponding to the node P_3+P_4+6P_4+a_4P_4. We have μ=8 and ρ=1. Applying the analytic method naively to study the representations of 8n+1 by Q requires us to compute newforms of levels up to 192a_4. Note that the quadratic form Q(x,y,z,w) with congruence conditions represents an integer 8n+1 if and only if the quadratic form Q'(x,y,z,w) x^2+2y^2+12z^2+8a_4w^2 represents the integer 8n+1. Thus, we can apply the analytic method to the quadratic form Q' without congruence conditions, which only requires us to compute newforms of level up to 96a_4. Using these tricks, we can reduce the computational overhead for most of difficult cases. Overall it takes several weeks using paralleled computation in a 4-processor CPU of 4.20GHz frequency to verify that every child of the node P_3+P_4+6P_4 is universal. Till now, we have finished the discussion of the nodes of depth r=4. The truants of the non-universal nodes of depth r=4 are listed in Table <ref>. c|ccccc The truants of the non-universal nodes of depth r=4. 
F P_3+P_4+5P_3+10P_3 P_3+P_4+5P_3+10P_4 P_3+P_4+5P_4+5P_4 P_4+2P_4+5P_4+5P_4 P_3+P_4+7P_3+7P_3 t(F) 23 23 18 15 41 F P_3+P_4+7P_3+14P_3 P_3+P_4+7P_3+14P_4 P_3+P_4+7P_4+14P_3 P_3+P_4+7P_4+7P_4 P_3+P_4+7P_4+14P_4 t(F) 34 34 41 27 41 F P_4+2P_4+5P_3+10P_3 P_4+2P_4+5P_3+8P_4 P_4+2P_4+5P_3+10P_4 ⋆ ⋆ t(F) 20 28 20 ⋆ ⋆ § NODES OF DEPTH R≥5 AND PROOFS TO THE MAIN THEOREMS Finally, we study nodes of depth r≥5 and then it is straightforward to prove the main theorems. Every node of depth r≥5 is universal, except the nodes P_3+P_4+7P_4+7P_4+21P_3 and P_3+P_4+7P_4+7P_4+21P_4, both of which represent every positive integer except 48. Therefore any nodes of depth r=6 is universal. By Proposition <ref> and Proposition <ref>, it is obvious to see that any child of non-universal nodes other than the nodes P_3+P_4+7P_3+14P_3 and P_3+P_4+7P_4+7P_4 of depth r=4 is universal. Now we prove that any child of the node P_3+P_4+7P_3+14P_3 is universal. The truant of the node P_3+P_4+7P_3+14P_3 is 34. For any child of the form P_3+P_4+7P_3+14P_3+a_5P_m_5 with 14≤ a_5≤34 and 3≤ m_5≤4, it is equivalent to showing that Q(x,y,z,w) (2x+1)^2+2(2y)^2+7(2z+1)^2+14(2w+1)^2=8n+22-8a_5P_m_5(v), is solvable with x,y,z,w,v∈ for any integer n≥0. If 8n+22≠294·49^a or 686·49^a with any integer a≥0, then the equation is solvable with v=0 by Proposition <ref>. If 8n+22=294 or 686, it is easy to verify that the equation is solvable. If 8n+22=294·49^a,686·49^a with any integer a≥1, then the equation is solvable with v=1 because 8n+22-8a_5 is not divisible by 49 and it is not equal to 294 or 686 either. Therefore, any child of the node P_3+P_4+7P_3+14P_3 is universal. In a similar fashion, we can show that any child of the node P_3+P_4+7P_4+7P_4 is universal except the nodes P_3+P_4+7P_4+7P_4+21P_3 and P_3+P_4+7P_4+7P_4+21P_4, both of which represent every positive integer except 48. Since both of them fail to represent only one positive integer, any nodes of depth r=6 are universal. By the construction of the escalator tree, the theorem follows from Table <ref>, Table <ref>, Table <ref>, and Proposition <ref>. We know that any integer t in the set (<ref>) is the truant of a sum of triangular numbers and squares in the escalator tree, say F(x_1,…,x_r). Suppose that G is a universal sum, for example, P_3(y_1)+P_3(y_2)+P_3(y_3). Then, we show that the sum H(x_1,…,x_r,y_1,y_2,y_3,z) F(x_1,…,x_r)+(t+1)G(y_1,y_2,y_3)+(2t+1)P_3(z) represents every positive integer except t. Indeed, we notice that the sum F represents every positive integer up to t-1 and the sum (t+1)G represents every positive integer that is divisible by t+1. Then it follows that H represents every positive integer n≢t(t+1). Moreover, setting z=1, we see that every positive integer n≡ t(t+1) such that n≥2t+1 is represented by the sum H. Therefore the sum H represents every positive integer except t. Hence, for any integer t in the set (<ref>), there exists a sum of triangular numbers and squares representing every positive integer except t. § ACKNOWLEDGEMENT The author is indebted to his advisor Dr. Ben Kane for many helpful suggestions. plain
http://arxiv.org/abs/2307.00718v1
20230703025300
Low temperature dynamic polaron liquid in a CMR manganite
[ "Daniel Jost", "Hsiao-Yu Huang", "Matteo Rossi", "Amol Singh", "Di-Jing Huang", "Yonghun Lee", "Hong Zheng", "J. F. Mitchell", "Brian Moritz", "Zhi-Xun Shen", "Thomas P. Devereaux", "Wei-Sheng Lee" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
Stanford Institute for Materials and Energy Sciences (SIMES), 2575 Sand Hill Road, Menlo Park, CA 94025, USA National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Stanford Institute for Materials and Energy Sciences (SIMES), 2575 Sand Hill Road, Menlo Park, CA 94025, USA National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Department of Physics and Astrophysics, University of Delhi, New Delhi 110007, India National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Stanford Institute for Materials and Energy Sciences (SIMES), 2575 Sand Hill Road, Menlo Park, CA 94025, USA Department of Physics, Stanford University, Stanford, California 94305, USA Materials Science Division, Argonne National Laboratory, Lemont, Illinois 60439, USA Materials Science Division, Argonne National Laboratory, Lemont, Illinois 60439, USA Stanford Institute for Materials and Energy Sciences (SIMES), 2575 Sand Hill Road, Menlo Park, CA 94025, USA Stanford Institute for Materials and Energy Sciences (SIMES), 2575 Sand Hill Road, Menlo Park, CA 94025, USA Department of Physics, Stanford University, Stanford, California 94305, USA Department of Applied Physics, Stanford University, Stanford, California 94305, USA Geballe Laboratory for Advanced Materials, Stanford University, Stanford, California 94305, USA [email protected] Stanford Institute for Materials and Energy Sciences (SIMES), 2575 Sand Hill Road, Menlo Park, CA 94025, USA Department of Materials Science and Engineering, Stanford University, Stanford, California 94305, USA [email protected] Stanford Institute for Materials and Energy Sciences (SIMES), 2575 Sand Hill Road, Menlo Park, CA 94025, USA Polarons – fermionic charge carriers bearing a strong companion lattice deformation – exhibit a natural tendency for self-localization due to the recursive interaction between electrons and the lattice. While polarons are ubiquitous in insulators, how they evolve in transitions to metallic and superconducting states in quantum materials remains an open question. Here, we use resonant inelastic x-ray scattering (RIXS) to track the electron-lattice coupling in the colossal magneto-resistive bi-layer manganite across its metal-to-insulator transition. The response in the insulating high-temperature state features harmonic emissions of a dispersionless oxygen phonon at small energy transfer. Upon cooling into the metallic state, we observe a drastic redistribution of spectral weight from the region of these harmonic emissions to a broad high energy continuum. In concert with theoretical calculations, we show that this evolution implies a shift in electron-lattice coupling from static to dynamic lattice distortions that leads to a distinct polaronic ground state in the low temperature metallic phase – a dynamic polaron liquid. Low temperature dynamic polaron liquid in a CMR manganite W.-S. Lee July 2, 2023 ========================================================== Coupling between the electronic charge and the lattice is ubiquitous in materials and influences the physical and chemical properties of countless systems. When the coupling is particularly strong, polarons <cit.>, and their dynamic and transport properties, can play a pivotal role in charge mobility and chemical reactivity <cit.>. 
From photocatalysts and perovskite solar cells to transition metal oxides with strong electron correlations for high-power switching and data storage, understanding the role of polarons in various processes and how to control polaron mobility may point the way toward improved performance and functionality. The bi-layer manganite is a classic system for studying polaronic contributions to transport properties and the origin of non-trivial metal-to-insulator transitions (MITs). In contrast to other notorious examples of MITs, such as those in vanadium-based materials <cit.>, the MIT in is inverted, having a high temperature insulating and a low temperature metallic state, where the metallic state is intertwined with either ferromagnetic (FM-M) or antiferromagnetic (AFM-M) order <cit.> (fig1a). Moreover, exhibits colossal magneto-resistance (CMR) at the MIT in the doping regime of x = 0.3 to 0.4 <cit.>, which, despite decades of study, remains an incompletely understood phenomenon. The CMR effect is most pronounced <cit.> for x = 0.4 (with the formula abbreviated as LSMO throughout) at its ferromagnetic transition with a Curie temperature T_c=120 K. The substitution of La^3+ with Sr^2+ in introduces mixed Mn^3+ and Mn^4+ valence states at this doping concentration with four- and three electron occupation of the 3d orbitals, respectively. The distortion of the Jahn-Teller active Mn^3+ sites (fig1a) lifts the degeneracy of the e_g orbitals in the insulating state above  <cit.>. Below , the static Jahn-Teller distortion disappears <cit.> with the e_g electron on the Mn^3+ site sitting firmly in the d_x^2-y^2 orbital. Double exchange <cit.> (DE) allows for hopping between adjacent Mn^3+ and Mn^4+ sites via the ligand 2p states, which sets the stage for the metallic regime. However, DE fails to explain metallicity in its entirety; as was realized early on, the calculated resistivity based on this mechanism cannot capture the several orders of magnitude changes across the magnetic transition that have been observed experimentally <cit.>, calling for an additional ingredient to fully clarify the MIT <cit.>. In the high temperature insulting state, neutron scattering <cit.>, diffuse x-ray scattering <cit.>, and inelastic light scattering <cit.> experiments suggest the existence of static polarons. In addition, charge order (CO) has been observed with an incommensurate wave vector of ∼(0.3,0) r.l.u. (reciprocal lattice units, 2π/a), attributed to frozen quasi-static polarons <cit.>. Yet, in the metallic state, the fate of these polarons remains elusive <cit.>. Viewed from the lattice, the reduction of Huang scattering and the vanishing of charge order <cit.> seem to indicate that the polarons disappear. Conversely, anomalies in bond-stretching phonons at low temperatures hint at the persistence of strong electron-lattice coupling in the metallic state <cit.>, as does the ‘peak-dip-hump’ structure observed in angle-resolved photoemission spectroscopy (ARPES) experiments <cit.>. While these findings suggest that a strong electron-lattice coupling persists in the metallic state, how the polaronic state evolves across the MIT remains an open question. In this Letter, we present a detailed temperature-dependent study of LSMO, and its polaronic signatures through the MIT, using resonant inleastic x-ray scattering <cit.> (RIXS). 
High quality single crystals of LSMO were grown using the floating zone method <cit.>; and RIXS experiments were performed at beamline 41A of the Taiwan Photon Source, National Synchrotron Radiation Research Center (NSRRC) <cit.>, at the Mn L_3- and O K-edges with σ polarization, i.e. perpendicular to the scattering plane, and a spectrometer angle 2θ = 150^∘. See the Supplementary Material <cit.> for additional detail. At high-temperatures, LSMO is insulating and features a dispersionless oxygen phonon at small energy transfer in the response. Reducing temperature through the MIT coincides with a drastic redistribution of spectral weight, piling up into a broad high-energy continuum. The systematic evolution of the RIXS response reveals a transition from static polarons at high temperatures to a dynamic polaron liquid at low temperatures. We anchor this investigation in the insulating state by examining the temperature evolution of the static polarons reported earlier <cit.> as expressed in the CO signal. RIXS <cit.> has proven to be an exceptionally sensitive tool <cit.> for picking up CO signatures because the RIXS process couples directly to the charge. fig1b shows the Mn L_3-edge RIXS map taken in the paramagnetic insulating (PI) state as a function of in-plane momentum transfer along the (h,0) direction, where quasi-elastic scattering unambiguously shows the CO signal at = (0.32, 0) r.l.u. Upon cooling into the metallic ferromagnetic state, the signal drops and eventually disappears, as shown in fig1c. A similar trend is observed in the RIXS spectra taken at the O K-edge (fig1d) near the maximum momentum transfer at ∼-0.3 r.l.u.. The temperature dependence of the O K-edge data reflects that of the Mn L_3-edge data, as well as data acquired using both diffuse x-ray and quasi-elastic neutron scattering <cit.> (fig1e), all indicating a disappearance of the Jahn-Teller distortion and the CO in the metallic state. Taken at face value, this would imply that the electron-phonon coupling is less relevant in the metallic state which should be reflected in the behavior of the phonon modes. The displacement of the oxygen atoms, and the associated optical phonon modes, should play a dominant role in the polaronic physics <cit.>. Thus, the inelastic signal at the O K-edge can provide more direct information about the behavior of polarons in LSMO across the MIT, as the RIXS phonon cross-section directly reflects the electron-phonon coupling <cit.>. fig2a shows the momentum resolved RIXS spectra at the O K-edge taken in the insulating (126 K) state up to an energy transfer of 200 meV. There is a sharp peak at ∼60 meV, a weaker, broad peak at approximately twice the energy (∼120 meV), and a decreasing background with increasing energy. We identify the first branch with a phonon excitation, whose energy scale coincides well with the energies of optical vibrations of the oxygen atoms <cit.>, and the second is a harmonic of the first, as shown by a fitting procedure <cit.>. The dispersion of both excitations shows no detectable momentum dependence and lacks any signs of an anomaly near . Turning to the phonon response at 57 K (fig2b), well below the ferromagnetic transition, the phonon energy shifts relative to the high temperature state (see Ref. <cit.> for details). With the lack of a clear momentum dependence of these prominent spectral features, we present a detailed analysis of the momentum-integrated spectra as a function of temperature in fig2c. 
Tracking the peak energies E_1 and E_2 through the MIT, there is a clear shift from high to low temperature. We quantify this difference Δ_i = E̅_i^T > T_c - E̅_i^T < T_c,i=1,2 in panels fig2d and fig2e, taking the average of the energies E̅_i on either side of the MIT (see Ref. <cit.> for details on the fitting procedure). The exceptionally large value Δ_1 (∼6 meV) is substantiated by the simultaneous shift of the harmonic sideband Δ_2 (∼11 meV) also occurring abruptly at the MIT. The shifts of E_1 and E_2 are accompanied by an overall decrease of RIXS phonon intensity and a notable variation of the relative intensity between the E_1 and E_2 branches (fig2b). Thus, in contrast to the behavior of the quasi-elastic response, the phonon softening implies that the electron-phonon coupling has become more relevant in the metallic state, while the decrease in phonon intensity – typically proportional to the strength of the electron phonon coupling in RIXS <cit.> – seems to indicate a weaker electron-phonon coupling. The resolution of these apparent contradictions comes in the form of a crossover from a static to dynamic polaron. Crucially, on a larger energy scale, the momentum-integrated spectra, as shown in <ref>, reveal the concomitant emergence of a broad peak having a maximum at ∼500 meV from the high temperature continuum-like distribution which is likewise dispersionless (see Ref. <cit.>). The temperature dependence of the spectral weight can be divided into two regions: spectral weight depletion (<250 meV) and spectral weight gain (> 250 meV) with decreasing temperature. This redistribution (see the inset of <ref>) manifests sharply at the MIT, accompanying the shift of the RIXS phonon energies (fig2d),(e) and the disappearance of the quasi-elastic CO signature. Additional insight on the nature of the broad continuum-like response at low temperatures can be obtained by examining its incident photon energy dependence. As shown in fig4a the incident photon energy map contains strong signals at low energy from the phonon and its harmonic, resonant near the onset of the O K-edge absorption, indicating that these phonons couple to charges near the Fermi energy. In addition to the RIXS phonons, the spectral weight at higher energy, i.e. the broad continuum-like hump in the low temperature data of <ref>, also exhibits a strong resonance in intensity across the onset of the O K-edge. The position of the hump disperses with increasing incident photon energy, indicating that the hump consists of a continuum of modes with increasing resonant photon energy as the mode energy rises. We note that this excitation is not itself a fluorescence, as the emergence of the spectral weight is limited to the vicinity of the RIXS resonant energy and appears to be superimposed on the temperature independent fluorescence signal (fig4b, see Ref. <cit.> for details). It is unlikely that this excitation would be associated with magnetic excitations of a single spin flip, such as ferromagnetic magnons or Hund’s exchange splitting <cit.>, because O K-edge RIXS cannot access single spin-flip (Δ S = 1) excitations in 3d transition metal oxides <cit.>; nor is it a multi-magnon excitation, as the in-plane double exchange coupling (∼5 meV <cit.>) and bandwidth would be too small. 
The hump is unlikely due to acoustic plasmons, recently observed in the cuprates <cit.>, which should exhibit a rapid energy-momentum dispersion that does not vary with incident photon energy <cit.>; nor is it associated with orbital excitations that occur at much higher energy <cit.>. Thus, this excitation is likely associated with a harmonic sequence of lattice excitations as expected for a polaron, that make up the broad continuum. To validate this premise, we turn to multiplet exact diagonalization calculations <cit.>, which account for charge transfer, hybridization, and lattice coupling using a Mn^3+O_2 cluster with 3d^4 (t_2g^3 and e_g^1) valence electrons that couple to an oxygen phonon mode (see Methods for details). As shown in fig4c, with sufficiently large electron-phonon coupling strength (i.e. in the polaronic regime), the RIXS phonon excitations consist of a series of harmonics that persist to energies significantly higher than the phonon energy. The calculation qualitatively mimics the incident photon energy dependence of the hump, in which the ladder of phonon excitations resonates across the absorption edge having a dispersion with increasing incident photon energy, as expected from a Franck-Condon picture <cit.>. How does the high-temperature static polaron in the insulating state evolve into the low-temperature dynamic polaron in the metallic state? Past efforts have attempted to address this question by adhering to abstract concepts such as coherent condensation effects <cit.> or the formation of Zener polarons <cit.>. Our results provide a more microscopic picture that implies that the distortions and the strong electron-lattice interaction manifest in a different manner above and below . In the high temperature phase, the system is locally JT distorted. Here, the lattice energy is tied up in static distortions and the formation of charge order. In this case, the phonons can be viewed as displacements around the equilibrium lattice positions in the CO phase, strongly coupled to either deformational bond- or site-based electrons, as indicated from harmonic phonon excitations in the high temperature RIXS spectrum. At low temperatures, the charge order vanishes, and the static JT distortion is lifted <cit.>. Dynamic lattice distortions occur around the relaxed, undistorted atomic positions and coupling between electrons and these phonons must account for the energy originally stored in the static JT distortion. This produces a liquid-like, electrically conductive dynamic polaronic state that manifests in the O K-edge RIXS spectrum as harmonic phonon emission with a shifted energy and reduced intensity and, most importantly, the emergence of a broad continuum at higher energies involving a large number of phonon excitations <cit.> (cf. <ref>). Indeed, experimentally the low temperature state is a bad metal: it exhibits high metallic resistivity <cit.>, small quasi-particle spectral weight, and incoherent sidebands from photoemission <cit.>, which track the conductivity <cit.>. All of this indicates a sizeable electron-lattice interaction even in the low-temperature state, which we have observed directly using RIXS. We note that the phonon energy of approximately 50 meV agrees well with the kink energy in the electronic band dispersion observed in photoemission <cit.>, which tracks the quasi-particle weight across the MIT. 
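The qualitative effect invoked here, a ladder of phonon harmonics whose spectral weight shifts to high energy as the coupling grows, can be illustrated with the textbook displaced-oscillator (Franck-Condon) envelope, in which the n-phonon sideband carries weight e^{-S}S^n/n! for a dimensionless Huang-Rhys-type coupling S. The snippet below is a generic illustration with assumed parameter values; it is not the multiplet exact-diagonalization calculation described above.

# Generic displaced-oscillator (Franck-Condon) illustration (not the cluster model).
import math

def sideband_weights(S, n_max=20):
    """Poisson envelope of n-phonon sidebands for coupling strength S."""
    return [math.exp(-S) * S ** n / math.factorial(n) for n in range(n_max + 1)]

w0 = 0.060                                  # ~60 meV oxygen phonon, as in the text
for S in (0.5, 5.0):                        # weak vs strong coupling (assumed values)
    weights = sideband_weights(S)
    peak = max(range(len(weights)), key=weights.__getitem__)
    print(f"S = {S}: sideband maximum near {peak * w0 * 1000:.0f} meV")

For weak coupling the weight stays near the elastic line and the first harmonic, while for strong coupling the maximum moves to several multiples of the phonon energy, mimicking the broad high-energy continuum discussed above.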
Our findings unambiguously suggest that the low temperature metallic state remains deep inside the polaronic regime, one in which the ground state should be thought of as a dynamic polaron liquid <cit.> – an unorthodox metal far from a conventional metallic state <cit.>. The work at SLAC was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under contract DE-AC0276SF00515. D.J. gratefully acknowledges funding of the Alexander-von-Humboldt foundation via a Feodor-Lynen postdoctoral fellowship. The RIXS experiments were performed at beamline 41A of the Taiwan Photon Source. Sample preparation and characterization in the Materials Science Division of Argonne National Laboratory was supported by U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. § SUPPLEMENTARY MATERIAL §.§ X-ray scattering For the Mn L_3-edge measurements, the photon energy was tuned to the maximum of the absorption peak (figS1b). The total energy resolution was better than Δ E ∼ 40 meV. The O K-edge data presented were obtained during two independent experiments (Experiment 1 and Experiment 2). The momentum transfer measurements at the O K-edge were conducted 0.6 eV below the maximum of the absorption peak (see figS1c), an energy at which the phonon response was resonant. The incident photon energy dependence was taken at an incident angle of θ_in=23^∘ corresponding to q_||∼0.25 r.l.u.. We report a total energy resolution of better than Δ E ∼ 27 meV for the RIXS spectra measured in Experiment 1 and better than Δ E∼ 23 meV for Experiment 2. Momentum maps are plotted versus the in-plane momentum transfer along the (h,0) direction in reciprocal lattice units (r.l.u.)., i.e. along the Mn-O bonds, where 1 r.l.u. corresponds to 2π/a with the in-plane lattice constant a = 3.87 Å. §.§ Fitting procedure The data (<ref> and <ref>) was fitted using Lorentzian functions for the elastic and phonon contributions and an anti-symmetrized Lorentzian for the high frequency broad continuum. Additionally, a convolution with a Gaussian of fixed full-width half-maximum (FWHM) was applied. The first phonon line was fitted with the parameters left unconstrained, whereas for the second harmonic emission, the Lorentzian FWHM was fixed to that of the first phonon peak, leaving the energy position as well as the amplitude unconstrained. The residual spectral weight between the prominent phonon emission peaks and the hump signature was interpreted as higher harmonic phonon contributions and fitted using two additional Lorentzian functions, likewise with a fixed FWHM corresponding to the first phonon line with the amplitude and position being unconstrained. §.§ Simulations The multiplet exact diagonalization calculations were conducted on a single cluster of Mn^3+O_2 which included 5 Mn 3d orbitals and 6 O 2p orbitals with an Mn3d-O2p hybridization. The values for the charge transfer energy Δ_CT and the crystal field ϵ were chosen to yield a high spin ground state with crystal-field split t_2g and e_g orbitals by about 1.2 eV, adapted to match the experimental multiplet structure at the Mn L_3 edge. The phonon entered as bond distortion with a set energy of 60 meV, and the phonon Hilbert space was limited to 12 phonons. RIXS maps were obtained via Kramers-Heisenberg expressions <cit.>.
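A minimal sketch of the line-shape model described in the fitting procedure, namely Lorentzians for the elastic line and the phonon peaks (the harmonic inheriting the width of the first phonon), an anti-symmetrized Lorentzian for the broad continuum, and a convolution with a fixed-FWHM Gaussian, is given below. The parameterisation and the anti-symmetrization convention are our assumptions; the fit code actually used for the figures may differ.

# Sketch of the fit model (assumed conventions).
import numpy as np

def lorentzian(x, x0, fwhm, amp):
    g = fwhm / 2.0
    return amp * g ** 2 / ((x - x0) ** 2 + g ** 2)

def antisym_lorentzian(x, x0, fwhm, amp):
    # anti-symmetrized Lorentzian, odd in energy transfer
    return lorentzian(x, x0, fwhm, amp) - lorentzian(x, -x0, fwhm, amp)

def model(x, res_fwhm, p_elastic, p_ph1, p_ph2, p_cont):
    """Each p_* is (position, fwhm, amplitude); the second phonon uses the fwhm of the first."""
    y = (lorentzian(x, *p_elastic)
         + lorentzian(x, *p_ph1)
         + lorentzian(x, p_ph2[0], p_ph1[1], p_ph2[2])
         + antisym_lorentzian(x, *p_cont))
    dx = x[1] - x[0]                         # convolve with the resolution function
    sigma = res_fwhm / 2.3548                # FWHM to standard deviation
    kx = np.arange(-5 * sigma, 5 * sigma + dx, dx)
    kernel = np.exp(-0.5 * (kx / sigma) ** 2)
    return np.convolve(y, kernel / kernel.sum(), mode="same")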
http://arxiv.org/abs/2307.02749v2
20230706031840
The Local-Global Conjecture for Apollonian circle packings is false
[ "Summer Haag", "Clyde Kertzer", "James Rickards", "Katherine E. Stange" ]
math.NT
[ "math.NT", "52C26, 11D99, 11-04 (Primary) 20H10, 11E12, 11A15, 11B99 (Secondary)" ]
Apollonian local-global is false]The local-global conjecture for Apollonian circle packings is false University of Colorado Boulder, Boulder, Colorado, USA [email protected] https://math.colorado.edu/ suha3163/ University of Colorado Boulder, Boulder, Colorado, USA [email protected] https://clyde-kertzer.github.io/ University of Colorado Boulder, Boulder, Colorado, USA [email protected] https://math.colorado.edu/ jari2770/ University of Colorado Boulder, Boulder, Colorado, USA [email protected] https://math.katestange.net/ Rickards and Stange are currently supported by NSF-CAREER CNS-1652238 (PI Katherine E. Stange). [2020]Primary: 52C26, 11D99, 11-04; Secondary: 20H10, 11E12, 11A15, 11B99 In a primitive integral Apollonian circle packing, the curvatures that appear must fall into one of six or eight residue classes modulo 24. The local-global conjecture states that every sufficiently large integer in one of these residue classes will appear as a curvature in the packing. We prove that this conjecture is false for many packings, by proving that certain quadratic and quartic families are missed. The new obstructions are a property of the thin Apollonian group (and not its Zariski closure), and are a result of quadratic and quartic reciprocity, reminiscent of a Brauer-Manin obstruction. Based on computational evidence, we formulate a new conjecture. [ Katherine E. Stange August 1, 2023 ======================= § INTRODUCTION Methods for understanding arithmetic properties of thin groups have had a proving ground in the study of Apollonian circle packings (Figure <ref>), as well as problems such as Zaremba's conjecture for continued fractions (see <cit.>). The central conjecture is that “thin orbits” in , such as orbits of a thin group like the Apollonian group (defined below), satisfy a local-global property, namely that they are subject to certain congruence restrictions (local), but otherwise should admit all sufficiently large integers (global). In the Apollonian case, the local-global conjecture specifying the exact congruence obstructions is due to Graham-Lagarias-Mallows-Wilks-Yan <cit.> and Fuchs-Sanden <cit.>. Significant progress in finding lower bounds for the number of integers appearing as curvatures has relied a variety of techniques, including analytic methods, quadratic forms, and the spectral theory of graphs. This has culminated in the theorem of Bourgain and Kontorovich that amongst the admissible curvatures for an Apollonian circle packing (those values not obstructed by congruence conditions), a set of density one does appear <cit.>. In this paper, we demonstrate that, for infinitely many (and perhaps most, in a suitable sense) Apollonian circle packings, the local-global conjecture is nevertheless false. The new obstruction rules out certain power families such as n^2 as curvatures in certain packings. It has its source in quadratic and quartic reciprocity, making it reminiscent of a Brauer-Manin obstruction. However, it is strictly a phenomenon of the thin group; its Zariski closure has no such obstruction. §.§ Apollonian circle packings A theorem often attributed to Apollonius of Perga states that, given three mutually tangent circles in the plane, there are exactly two ways to draw a fourth circle tangent to the first three. By starting with three such circles, we can add in the two solutions of Apollonius (sometimes called Soddy circles after an ode by the famous chemist), obtaining five. 
New triples appear, and by continuing this process, one obtains a fractal called an Apollonian circle packing (Figure <ref>). Any solution to Apollonius' problem produces four mutually tangent circles. A Descartes quadruple is a quadruple of four real numbers (a, b, c, d) such that * (a+b+c+d)^2=2(a^2+b^2+c^2+d^2) (the Descartes equation); * a+b+c+d>0. Given any Descartes quadruple, there exist four mutually tangent circles in the plane with those curvatures (straight lines are included as circles of curvature zero, and a negative curvatures represents a swap of orientation, placing the point at infinity in the interior). In particular, a Descartes quadruple generates a unique Apollonian circle packing (whereas an Apollonian circle packing contains many Descartes quadruples). For this and general background on Apollonian circle packings, see <cit.>. A simple but remarkable consequence of the Descartes equation is that if a, b, c, d∈, then all curvatures in the packing are integral. Call such a configuration, and the packing it generates, integral, and if we furthermore have (a, b, c, d)=1, call both primitive. Figure <ref> depicts part of the packing obtained from the Descartes quadruple (-23, 48, 49, 52), with circles labelled by curvature (the outer circle being the unique one of negative curvature). Note that the four curvatures in the Descartes quadruple are the smallest four that appear in the packing; such a quadruple is unique, and is called the root quadruple. We frequently describe circle packings by their root quadruple. §.§ The set of curvatures of an Apollonian circle packing Given a primitive Apollonian circle packing , we study the set of curvatures which appear. This question was first addressed in the work of Graham, Lagarias, Mallows, Wilks, and Yan in <cit.> as the “Strong Density Conjecture”, appearing near the end of Section 6. This was later revised by Fuchs and Sanden in <cit.>, and has come to be known as the “local-global conjecture” or “local-global principle." Let be a primitive Apollonian circle packing containing curvatures equivalent to r24. The set of positive integers x ≡ r 24 not occurring in is finite. Call a positive curvature c missing in if curvatures equivalent to c24 appear in but c does not. By computing the frequency of curvatures that appear up to 5· 10^8 in the packings corresponding to (-1, 2, 2, 3) and (-11, 21, 24, 28), Fuchs and Sanden observe a tendency toward increasing multiplicity for larger curvatures, which is evidence towards there being few missing curvatures. The current best known result is due to Bourgain and Kontorovich. The number of missing curvatures up to N is at most O(N^1-η) for some effectively computable η > 0. In this paper, we prove that Conjecture <ref> is false for many packings. There exist infinitely many primitive Apollonian circle packings for which the number of missing curvatures up to N is Ω(√(N)). In particular, the local-global conjecture is false for these packings. This theorem is proven by showing that certain quadratic and quartic families of curvatures (of the form ux^2 and ux^4 for a fixed integer u) are missing from some packings. A precise description of the obstructions is found in Theorem <ref>. While Conjecture <ref> is false in general, there still remain some families where it could be true, as we find no new obstructions. For the other packings, we can also remove the quadratic and quartic obstructions, and ask if the remaining set of curvatures is now finite. 
Let be a primitive Apollonian circle packing, and define S_ to be the set of missing curvatures which do not lie in one of the quadratic or quartic obstruction classes as described in Theorem <ref>. Call this set the “sporadic set” for . For a positive integer N, define S_(N) to be the set of sporadic curvatures in that are at most N. By writing an efficient program using C and PARI/GP <cit.> to compute missing curvatures, we are able to find S_(N) for various packings and bounds N in the range [10^10, 10^12]. This code can be found in the GitHub repository <cit.>, and the sets S_(N) are found in the GitHub repository <cit.>. Some of this data is summarized in Tables <ref> and <ref>, which support a revised conjecture. This revised conjecture remains in concordance with the earlier analysis of Fuchs and Sanden, which indicates a sparsity of missing curvatures. Let be a primitive Apollonian circle packing. Then S_ is finite. In other words, every sufficiently large curvature will appear in the given primitive Apollonian circle packing , except for curvatures lying in one of the linear, quadratic, or quartic families described by Theorem <ref>. §.§ The method of proof To illustrate the method of proof, we will prove an example theorem in the simplest case. The remainder of the paper is devoted to the details of the general proof, which is more involved, but follows the same general plan. The Apollonian circle packing generated by the quadruple (-3,5,8,8) has no square curvatures. First, we observe that all curvatures n in this packing have n ≡ 0, 1 4. Fix a circle ∈ of curvature n. It is well-known that the curvatures of the family of circles tangent to are the values properly represented by a translated quadratic form f_(x,y) - n of discriminant -4n^2 (see Section <ref>). Modulo n, the invertible values reside in a single multiplicative coset of the squares (see Proposition <ref> and its proof), and we therefore define the symbol χ_2() to be the unique non-zero value of the Kronecker symbol cn for c a curvature of a circle tangent to . In particular, χ_2()=-1 implies that square curvatures cannot be tangent to . Now suppose that _1, _2 ∈ are tangent, having non-zero coprime curvatures a and b respectively. Using quadratic reciprocity of the Kronecker symbol (recall that a,b ≡ 0,1 4 with at least one being odd), χ_2(_1)χ_2(_2)=abba=1. In particular, χ_2(_1) = χ_2(_2). It can be shown that any two circles in are connected by a path of pairwise coprime curvatures (Corollary <ref>). Therefore, χ_2() is independent of the choice of circle ∈, and we have defined χ_2(). Since 5 and 8 are coprime curvatures in the generating quadruple, we can compute χ_2() as χ_2()=85 = -1. We conclude that if there exists a square curvature in , it would be tangent to a circle with χ_2() = -1, a contradiction. Therefore there do not exist square curvatures in . For other quadratic obstructions, there are two main difficulties in extending this proof to all cases. When investigating the families ux^2 for u=2, 3, 6, we additionally need to focus on circles of curvature coprime to 6. Furthermore, when the packing contains curvatures that are 2, 34, the Kronecker symbol used in the above proof may not translate across the packing, or even be well-defined! This requires modifying the definition of χ_2() in cases according to the curvature of modulo 4. The quartic obstructions are not dissimilar in spirit. 
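Before turning to the quartic case, we note that the example above is easy to check by machine; we use PARI/GP for such experiments (the larger computations of Section <ref> also use C). A minimal check, with conventions of our own, verifies the Descartes equation for the root quadruple and reads off χ_2 from the tangent coprime pair (5, 8); here kronecker is PARI's built-in Kronecker symbol:

q = [-3, 5, 8, 8];
\\ Descartes equation, positivity of the sum, and primitivity of the quadruple:
print(vecsum(q)^2 == 2*(q[1]^2 + q[2]^2 + q[3]^2 + q[4]^2), "  ", vecsum(q) > 0, "  ", content(q) == 1);
\\ chi_2 of this packing, read off the tangent coprime pair (5, 8):
print(kronecker(8, 5));   \\ -1, so no square curvature can occur

The expected output is 1 1 1 followed by -1, in agreement with the argument above.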
Instead of studying the quadratic form of tangent curvatures, whose values are integers, we study the associated fractional ideal of [i]. The study of these ideals depend on some results describing Apollonian circle packings as orbits of a subgroup of the Bianchi group (2,[i]) <cit.>. Thus, quartic obstructions are a result of the Apollonian group's close connection to the Gaussian integers. §.§ Reciprocity obstructions in thin groups Call the quadratic and quartic obstructions found in this paper reciprocity obstructions. These obstruct the orbit of the Apollonian group (i.e. all Descartes quadruples in a fixed packing), and not the orbit of the larger Zariski closure, which is an orthogonal group of transformations preserving the Descartes equation (the Zariski closure was found in <cit.>). Indeed, the super-Apollonian group lies between both groups, and an orbit contains all primitive integral Apollonian circle packings (see <cit.>). As every integer occurs in at least one packing, every single integer is represented, so there are no missing curvatures in the orbit of the Super-Apollonian group. Sarnak also remarked on the lack of a spin group obstruction for the Descartes form in <cit.>. Thus, the obstructions discussed here are truly a phenomenon of the thin group. As they arise from reciprocity, they should be considered a new class of thin group obstruction analogous to the Brauer-Manin obstruction. We expect that variants of these reciprocity obstructions exist in other thin group settings. In particular, the known obstructions to a number appearing in the orbit of a thin group now fall into one of the following categories: * Analytic: the count of numbers that appear up to N (including multiplicity) is O(N^δ) where δ < 1. Computation of δ is typically a Hausdorff dimension question, and it is clear that this is a restriction. * Algebraic: * Congruence obstructions, where certain residue classes do not appear; * Reciprocity obstructions, where certain power families do not appear. Of course, some orbits may not have reciprocity obstructions, or even congruence obstructions! For example, in <cit.>, it is shown that there are no congruence obstructions for Zaremba's conjecture. Furthermore, in <cit.>, it is shown that the local-global conjecture does hold for Soddy sphere packings. Despite this, one expects other instances of thin groups to produce reciprocity obstructions (this will be the topic of a follow-up paper). §.§ Acknowledgements The authors are grateful to the Department of Mathematics at the University of Colorado Boulder for sponsoring the Research Experience for Undergraduates and Graduates in Summer 2023, which led to this project (these obstructions were first observed in the context of another problem; see Corollary <ref>). We are also grateful to Alex Kontorovich, Peter Sarnak and Richard Evan Schwartz for feedback on an earlier version of this paper. § PRECISE STATEMENT OF THE RESULTS Curvatures modulo 24 in a primitive Apollonian circle packing fall into a set of six or eight possible residues, called the admissible residues for the packing. There are six possible sets of these residue classes, detailed in the following proposition. Let be a primitive Apollonian circle packing. Let R() be the set of residues modulo 24 of the curvatures in . 
Then R() is one of six possible sets, labelled by a type as follows: Type R() (6, 1) 0, 1, 4, 9, 12, 16 (6, 5) 0, 5, 8, 12, 20, 21 (6, 13) 0, 4, 12, 13, 16, 21 (6, 17) 0, 8, 9, 12, 17, 20 (8, 7) 3, 6, 7, 10, 15, 18, 19, 22 (8, 11) 2, 3, 6, 11, 14, 15, 18, 23 The set R() is called the admissible set of the packing. Each admissible set is labelled by a type (x,k) where x is its cardinality and k is the smallest residue in R that is coprime to 24. Let be a primitive Apollonian circle packing, let u and d be positive integers, and let S_d,u:={un^d:n∈}. In anticipation of the source of the new obstructions we discover, we say that the set S_d,u forms a reciprocity obstruction to if * Infinitely many elements of S_d,u are admissible in modulo 24; * No element of S_d,u appears as a curvature in . If d=2 we call it a quadratic obstruction, and if d=4 it is a quartic obstruction. Note that some reciprocity obstructions are stronger than others, as (for example) we have S_2, 3⊇ S_2, 18 and S_2, 9⊇ S_4, 9. We will tend to only associate “maximal” reciprocity obstruction classes to packings, i.e. where no stronger reciprocity obstructions exist. It is clear that if there exists a reciprocity obstruction for , then the local-global conjecture <ref> cannot hold for , and more specifically, for any of the admissible residue classes intersecting S_d, u. For each of these classes, adding a single condition to n will give us the intersection of S_d,u with that class. In order to describe the quadratic and quartic obstructions, it is necessary to subdivide the types further. There exists a function χ_2:{circles in a primitive Apollonian circle packing}→{± 1} which relates to the possible curvatures of circles tangent to the input circle , and is constant across the packing containing . In particular, this gives a well defined value for χ_2() for primitive Apollonian circle packings . Furthermore, there exists a function χ_4:{circles in a primitive Apollonian circle packing of type (6, 1) or (6, 17)}→{1, i, -1, -i} that satisfies χ_4()^2=χ_2(), and is also constant across a packing (necessarily of type (6, 1) or (6, 17); it is left undefined for other types). This again gives a well defined value for χ_4(). The value of χ_2 determines which quadratic obstructions occur, and the value of χ_4 determines which quartic obstructions occur. We compute χ_2 in terms of the quadratic residuosity of the curvatures tangent to a circle, and χ_4 in terms of a quartic residue symbol. Full definitions come in Sections <ref> and <ref>, but we can give a quick way to compute χ_2() here (which follows directly from the definition). Let be a primitive Apollonian circle packing, and let (a, b) be a pair of curvatures of circles tangent to each other in that also satisfies: * a is coprime to 6b; * if is of type (8, k), then a≡ 78. Then χ_2()=ba. The definition of χ_4 relies on a finer invariant using the quartic residue symbol for Gaussian integers. The essential fact that the symbol is constant across a packing is a direct consequence of quadratic and quartic reciprocity. The (extended) type of a primitive Apollonian circle packing is either the triple (x, k, χ_2) or (x, k, χ_2, χ_4), where has type (x, k) and corresponding values of χ_2 (and χ_4, if relevant). We will refer to the type as any of the three possible variants, as they are uniquely distinguished by the number of entries. 
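In practice the proposition above makes χ_2 immediate to evaluate. For instance, in PARI/GP (kronecker is the built-in Kronecker symbol; the wrapper name, and the particular tangent pairs, which are read off root quadruples except in the last example, are ours):

\\ chi_2 from a tangent pair (a, b): a coprime to 6b, and additionally
\\ a = 7 (mod 8) when the packing has type (8, k).
chi2_from_pair(a, b) = kronecker(b, a);

print(chi2_from_pair(5, 8));    \\ -1: (-3, 5, 8, 8)     has type (6, 5, -1)
print(chi2_from_pair(49, 48));  \\ +1: (-23, 48, 49, 52) has type (6, 1, 1, *)
print(chi2_from_pair(13, 4));   \\ +1: (-3, 4, 12, 13)   has type (6, 13, 1)
\\ (47, 14) is a tangent pair in the packing (-1, 2, 2, 3): it lies in the
\\ quadruple (47, 14, 6, 3), three swaps from the root, and 47 = 7 (mod 8),
\\ so the pair is valid for a packing of type (8, 11):
print(chi2_from_pair(47, 14));  \\ +1: the bug-eye packing has type (8, 11, 1)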
The type of a primitive Apollonian circle packing implies the existence of certain quadratic and quartic obstructions, as described by the following table (which also includes the list of residues modulo 24 where Conjecture <ref> is false, and those where it is still open): Type Quadratic Obstructions Quartic Obstructions <ref> false <ref> open (6, 1, 1, 1) 0, 1, 4, 9, 12, 16 (6, 1, 1, -1) n^4, 4n^4, 9n^4, 36n^4 0, 1, 4, 9, 12, 16 (6, 1, -1) n^2, 2n^2, 3n^2, 6n^2 0, 1, 4, 9, 12, 16 (6, 5, 1) 2n^2, 3n^2 0, 8, 12 5, 20, 21 (6, 5, -1) n^2, 6n^2 0, 12 5, 8, 20, 21 (6, 13, 1) 2n^2, 6n^2 0 4, 12, 13, 16, 21 (6, 13, -1) n^2, 3n^2 0, 4, 12, 16 13, 21 (6, 17, 1, 1) 3n^2, 6n^2 9n^4, 36n^4 0, 9, 12 8, 17, 20 (6, 17, 1, -1) 3n^2, 6n^2 n^4, 4n^4 0, 9, 12 8, 17, 20 (6, 17, -1) n^2, 2n^2 0, 8, 9, 12 17, 20 (8, 7, 1) 3n^2, 6n^2 3, 6 7, 10, 15, 18, 19, 22 (8, 7, -1) 2n^2 18 3, 6, 7, 10, 15, 19, 22 (8, 11, 1) 2, 3, 6, 11, 14, 15, 17, 23 (8, 11, -1) 2n^2, 3n^2, 6n^2 2, 3, 6, 18 11, 14, 15, 23 The intersection of quadratic and quartic obstructions with a residue class can be described by adding a condition on n. For example, the obstruction 2n^2 in type (6, 17, -1) intersects the class 824 as 2(6n± 2)^2, and the class 024 as 2(6n)^2. We could consider the χ_4 value for packings of types (6, 1, -1) and (6, 17, -1), which would be either i or -i. Both of these χ_4 values imply that the families n^4, 4n^4, 9n^4, 36n^4 are quartic obstructions. However, we already have n^2 as a quadratic obstruction, which is strictly stronger. This is why they are not listed, as the quartic obstruction does not give anything the quadratic did not (see the discussion below Definition <ref>). This is why we did not attempt to extend the definition of χ_4 to other packing types: we did not find any further quartic obstructions in our computations (that were not already ruled out by quadratic obstructions). A few direct corollaries from Theorem <ref> are noted next. The local-global conjecture <ref> is false for at least one residue class in all primitive Apollonian circle packings that are not of type (6, 1, 1, 1) or (8, 11, 1). The exceptions where the local-global conjecture may yet hold include the strip packing (generated from the root quadruple (0, 0, 1, 1)), and the bug-eye packing (generated from (-1, 2, 2, 3)). The following corollary is the phenomenon which led to the discovery of quadratic obstructions. Curvatures 24m^2 (necessarily 024) and 8n^2 with 3∤ n (necessarily 824) cannot appear in the same primitive Apollonian circle packing, despite 024 and 824 being simultaneously admissible in packings of type (6, 5) or (6, 17). Apollonian circle packings have been generalized in a variety of ways. In <cit.>, K-Apollonian packings were defined for each imaginary quadratic field K, where the (i)-Apollonian case is the subject of this paper. It is quite possible that quadratic obstructions occur in these packings, as they share many features with Apollonian packings, including the fact that quadratic forms control the family of tangent curvatures. The existence of quartic obstructions is less likely, since these depend on quartic reciprocity, which is defined only for K=(i). The family of packings studied in <cit.> are also governed by quadratic forms (this was the essential feature needed for the positive density results of that paper), and include the K-Apollonian packings; these are likely subject to quadratic obstructions as well. 
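Returning to the table in Theorem <ref>: since it is what the computations of Section <ref> are tested against, it is convenient to have its quadratic part in machine-readable form. The following PARI/GP lookup simply transcribes the column of quadratic obstructions, encoding a family un^2 by the integer u (the function name and this encoding are ours):

quad_obstructions(x, k, chi2) =
{
  if(x == 6 && k == 1,  return(if(chi2 == 1, [], [1, 2, 3, 6])));
  if(x == 6 && k == 5,  return(if(chi2 == 1, [2, 3], [1, 6])));
  if(x == 6 && k == 13, return(if(chi2 == 1, [2, 6], [1, 3])));
  if(x == 6 && k == 17, return(if(chi2 == 1, [3, 6], [1, 2])));
  if(x == 8 && k == 7,  return(if(chi2 == 1, [3, 6], [2])));
  if(x == 8 && k == 11, return(if(chi2 == 1, [], [2, 3, 6])));
  error("not an admissible type");
}

For example, quad_obstructions(6, 5, -1) returns [1, 6], i.e. the families n^2 and 6n^2, matching the packing (-3, 5, 8, 8) of the introduction.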
It would also be interesting to ask the same question about an even wider class of packings studied by Kapovich and Kontorovich <cit.>. In Section <ref>, we cover the background material required, including the connection between circle packings and values of binary quadratic forms. The proof of the quadratic obstructions comes in Section <ref>, and the quartic obstructions are found in Section <ref>. Computational evidence to support Conjecture <ref> is found in Section <ref>. § RESIDUE CLASSES AND QUADRATIC FORMS In this section we prove Proposition <ref>, and give the background material on the connection between quadratic forms and circle packings. To begin, consider a Descartes quadruple (a, b, c, d) contained in an Apollonian circle packing. The act of swapping the ith circle to obtain the other solution described by Apollonius is called a move, and is denoted S_i. In terms of the quadruples, S_1 corresponds to S_1:(a, b, c, d)→ (2b+2c+2d-a, b, c, d). Analogous equations for S_2 to S_4 hold. It is possible to move between every pair of Descartes quadruples in a fixed circle packing via a finite sequence of these moves (up to a permutation of the entries of the quadruples). The classical Apollonian group is generated by these four moves as transformations in the orthogonal group preserving the Descartes form (which has signature 3,1). §.§ Residue classes In <cit.>, Graham-Lagarias-Mallows-Wilks-Yan proved that the residues modulo 12 of a primitive Apollonian circle packing fall into one of the following four sets: {0, 1, 4, 9}, {2, 3, 6, 11}, {3, 6, 7, 10}, {0, 5, 8, 9}. They also proved that if m is coprime to 30, then every residue class modulo m occurs in . In her PhD thesis <cit.>, Fuchs proved that there are in fact restrictions modulo 24, and that this is the “best possible modulus.” However, to the best of our knowledge, the exact list of the possible admissible sets modulo 24 found in Proposition <ref> has not appeared in the literature until now, and requires justification. We give a self-contained proof of Proposition <ref>. The key observation is the following result. Let tangent circles in a primitive Apollonian circle packing have curvatures a, b. Then a+b≢3, 6, 78. Assume otherwise. Then the odd part of a+b is equivalent to 34, so there exists an odd prime p with p≡ 34 and v_p(a+b) odd. Let (a, b, c, d) be a Descartes quadruple corresponding to our starting circles, and rearranging the Descartes equation gives (a-b)^2+(c-d)^2=2(a+b)(c+d). Since the left hand side is a sum of two squares that is a multiple of p≡ 34, it follows that p| a-b, c-d, and v_p(LHS) is even. Therefore v_p(c+d) is odd, hence p| c+d as well. Thus p| a, b, c, d, so the quadruple is not primitive, contradiction. Rather than work modulo 24, we consider modulo 3 and 8 separately. Let be a primitive Apollonian circle packing. Then the set of Descartes quadruples in taken modulo 3 is one of the following sets: * {(0, 0, 1, 1), (0, 1, 1, 1)} and all permutations; * {(0, 0, 2, 2), (0, 2, 2, 2)} and all permutations. By dividing into cases based on the number of curvatures that are multiples of 3, a straightforward computation shows the claimed sets are the only solutions to Descartes's equation modulo 3. By considering the moves S_1 to S_4, we see that they fall into the two classes. Let be a primitive Apollonian circle packing. 
Then the set of Descartes quadruples in taken modulo 8 is one of the following sets: * {(0, 0, 1, 1), (0, 4, 1, 1), (4, 4, 1, 1)} and all permutations; * {(0, 0, 5, 5), (0, 4, 5, 5), (4, 4, 5, 5)} and all permutations; * {(2, 2, 3, 7), (2, 6, 3, 7), (6, 6, 3, 7)} and all permutations. Let (a, b, c, d) be a primitive Descartes quadruple in . Considering the Descartes equation modulo 2, there is an even number of odd numbers amongst a, b, c, d. This cannot be zero (as the quadruple would not be primitive), and it cannot be 4, as otherwise (a+b+c+d)^2≡ 816. Therefore there are always two odd and two even, so without loss of generality assume that c, d are odd and a, b are even. Assume c≡ 14. By Proposition <ref>, a≡ b≡ 04 (else a+c≡ 34). Thus we also have d≡ 14, and in fact d≡ c8, as otherwise c+d≡ 68. This gives the quadruples listed in (a) and (b), and by applying S_1 through S_4, we see them fall into two classes. Finally, assume c≡ 34. In this case we must have a≡ b≡ 24, which implies that d≡ 34 as well. Consequently we have c≢d8, else c+d≡ 68. This gives the quadruples in (c), and again, the moves S_1 to S_4 show that they form one class. A consequence of Lemmas <ref> and <ref> is that * R()3= {0, 1} or {0, 2}; * R()8= {0, 1, 4} or {0, 4, 5} or {2, 3, 6, 7}. The Chinese remainder theorem gives six ways to combine these two congruences into a congruence set modulo 24, resulting in exactly the six sets described in Proposition <ref>. For each of the six sets, there do exist primitive packings with those admissible sets. The only remaining issue is justification that we do actually get curvatures corresponding to every pair of residues modulo 3 and 8, i.e. that they do not conspire against each other in the packing (which would lead to a subset of the claimed admissible set). A more general version of the following lemma exists in <cit.>. Let be a primitive Apollonian circle packing containing a curvature equivalent to r_13 and another curvature equivalent to r_28. Then there exists a curvature in that is simultaneously equivalent to r_13 and r_28. Let (a, b, c, d) be a Descartes quadruple in , and assume that a≡ r_28. If (a, b, c, d)3 has two zeroes, then swapping either of them produces the non-zero residue. If it has a single zero, then swapping a non-zero residue gives a zero. In particular, we can always apply a sequence of moves to get (a', b', c', d') where a'≡ r_13. Furthermore, we can assume that if r_1≡ 0 then b≡ c≡ d≢03, and if r_1≢0, then b≡ r_13 and c≡ d≡ 03. Note that the moves do not change the curvatures modulo 4, hence a'≡ r_2, r_2+48. If it is r_28 we are done. Otherwise it is r_2+48, and then the move S_1 will take it to r_28 and will not change it modulo 3, completing the proof. In particular, Proposition <ref> follows. Proposition <ref> is not just useful for eliminating possible admissible sets; it can also rule out behaviours in certain packings. A direct corollary that will be useful later is as follows. Let be a primitive Apollonian circle packing of type (8, k), and assume two circles with odd curvatures a, b are tangent to each other. Then one of a, b is 38, and the other is 78. §.§ Quadratic forms Generating integral Apollonian circle packings seems tricky at first glance, as it requires integral solutions to the Descartes equation. This step is solved by a direct connection to quadratic forms, as first described in <cit.>. 
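Before turning to forms, we note that Proposition <ref> is easy to explore experimentally: the moves S_1, ..., S_4 are integral linear maps, so they commute with reduction modulo 24, and the orbit of a quadruple modulo 24 is finite. A PARI/GP sketch (helper names ours) enumerates this orbit and collects the residues that occur:

\\ The swap moves S_1, ..., S_4 on a Descartes quadruple.
move(q, i) =
{
  my(r = q);
  r[i] = 2*(vecsum(q) - q[i]) - q[i];
  return(r);
}

\\ The admissible set R(A), computed by enumerating the (finite) orbit of
\\ the root quadruple modulo 24.
admissible_set(q0) =
{
  my(start = apply(t -> t % 24, q0));
  my(seen = Set([start]), todo = List([start]), res = Set());
  while(#todo,
    my(q = todo[#todo]); listpop(todo);
    res = setunion(res, Set(q));
    for(i = 1, 4,
      my(r = apply(t -> t % 24, move(q, i)));
      if(!setsearch(seen, r),
        seen = setunion(seen, Set([r]));
        listput(todo, r))));
  return(res);
}

print(admissible_set([-3, 5, 8, 8]));      \\ [0, 5, 8, 12, 20, 21]: type (6, 5)
print(admissible_set([-23, 48, 49, 52]));  \\ [0, 1, 4, 9, 12, 16]: type (6, 1)
print(admissible_set([-1, 2, 2, 3]));      \\ [2, 3, 6, 11, 14, 15, 18, 23]: type (8, 11)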
A primitive integral positive definite binary quadratic form is a function of the form f(x, y)=Ax^2+Bxy+Cy^2, where A, B, C∈, (A, B, C)=1, A>0, and D:=B^2-4AC<0. Call D the discriminant of f. The group (2, ) acts on the set of forms as follows: abcd∘ f:=f(ax+by, cx+d). This action preserves the discriminant, and divides the set of forms of a fixed discriminant into a finite number of equivalence classes. See the books by Buell <cit.> or Cohen <cit.> for a longer exposition on quadratic forms. Note that while most references study the (2, ) action, we will need the extended (2, ) action, which no longer gives rise to a group. Let (a, b, c, d) be a primitive Descartes quadruple. Then (a+b)x^2+(a+b+c-d)xy+(a+c)y^2 is a primitive integral positive definite binary quadratic form of discriminant -4a^2. Similarly, let Ax^2+Bxy+Cy^2 be a primitive integral positive definite binary quadratic form of discriminant -4a^2. Then (a, A-a, C-a, A+C-B-a) is a primitive Descartes quadruple. These two notions are inverse to each other, and induce a bijection between quadratic forms of discriminant -4a^2 and Descartes quadruples containing a as the first curvature. Given a circle of curvature a in a primitive Apollonian circle packing, we can associate a Descartes quadruple (a, b, c, d) to , and therefore a quadratic form. The ambiguity in choosing the Descartes quadruple exactly corresponds to taking a (2, )-equivalence class of quadratic forms (hence the need to go beyond the (2, ) action). This is shown in <cit.>; see also Proposition 3.1.3 of <cit.>. Let be a circle of curvature n≠ 0 in a primitive Apollonian circle packing, and define f_ to be a primitive integral positive definite binary quadratic form of discriminant -4n^2 that corresponds to via Proposition <ref>. For the rest of the paper we will assume that n≠ 0 for convenience. The results should still hold for n=0, but as this only corresponds to the strip packing (0, 0, 1, 1), it will be of no use here. While f_ is only defined up to a (2, )-equivalence class, this will not matter, and it is more convenient to always refer to it as a bona fide quadratic form, and not just a representative of its equivalence class. Furthermore, if the circle packing containing has symmetries, multiple distinct circles may correspond to the same Descartes quadruple, and hence the same quadratic form. This fact is of no importance in this paper. Given a quadratic form f, the values properly represented by f are the numbers of the form f(x, y) where x, y are coprime integers. This set is constant across a (2, )-equivalence class of forms. In <cit.>, Sarnak made a crucial observation connecting curvatures of circles tangent to and properly represented values of f_. The set of curvatures of circles tangent to in is in bijection with the set of f_(x, y)-n, where n is the curvature of and (x, y) is a pair of coprime integers. This observation has been a key ingredient in nearly every result on Apollonian circle packings since, and this paper is no exception. § QUADRATIC OBSTRUCTIONS Let be a primitive Apollonian circle packing, and fix a set S_2,u={uw^2:w∈} for some fixed positive integer u which has no prime divisors larger than 3. 
The strategy to prove that no element of S_2,u appears as a curvature in is: * For each circle ∈, define χ_2() ∈{± 1 } which relates to the possible curvatures of circles tangent to , and prove that it is an invariant of : * Prove that the value of χ_2 is equal for tangent circles with coprime curvatures; * Prove that given any two circles in , there exists a path from one to the other via tangencies, where each consecutive pair of curvatures is coprime; Thus, we define a value χ_2() ∈{± 1 } for each packing . * Prove that for a fixed u, packings with a certain χ_2() value and type cannot accommodate curvatures in S_2,u in circles tangent to a “large” subset of . * Prove that every circle in is tangent to an element of this large subset. For circle packings with the certain χ_2() value and type, these steps imply that curvatures in S_2,u cannot appear anywhere in the packing. For each admissible residue class contained in S_2,u, we get a quadratic obstruction. §.§ Definition of χ_2 Let f(x, y)=Ax^2+Bxy+Cy^2 be a primitive integral positive definite binary quadratic form of discriminant -4n^2, for an integer n≠ 0. The set of properly represented invertible values of f(x,y) modulo n taken modulo squares, i.e., { f(x,y) n : (x,y) = (f(x,y),n) = 1 }⊆ (/n)^×/((/n)^×)^2, is a singleton set. Denote any lift of this value to the positive integers by ρ(f). If A is coprime to n, observe that f(x, y)=A(x+B2Ay)^2+n^2Ay^2≡ A(x+B2Ay)^2n, hence this lies in the coset containing A. If A is not coprime to n, then we can replace f by a (2, ) translation of f for which the first coefficient is coprime to n. Note that the class of ρ(f) is well-defined across a (2, )-equivalence class of forms, as these forms properly represent the same integers. By using the Kronecker symbol, this allows us to determine a condition for numbers that are not represented by f. Let n'=n/2 if n≡ 24 and n'=n otherwise. Then the Kronecker symbol ρ(f)n' is independent of the choice of ρ(f) and takes values in {± 1}. Furthermore, let uw^2 be coprime to n, where u and w are positive integers. If ρ(f)n'≠un', then f(x, y) does not properly represent n+uw^2. Let ρ_1 and ρ_2 be two choices for ρ(f). Then there exists integers s, t such that (s, n)=1 and ρ_1=s^2ρ_2+tn. Since n' is either odd or a multiple of 4, the Kronecker symbol is well defined modulo n'. Therefore ρ_1n'=s^2ρ_2+tnn'=s^2ρ_2n'=ρ_2n', so the symbol ρ(f)n' is well-defined as a function of f (the assumption that ρ_1, ρ_2> 0 is implicitly used if n'<0). Since ρ(f) is coprime to n, the symbol takes values in {± 1}. Finally, assume that f(x, y) properly represents n+uw^2. Since (uw^2, n)=1, we can take ρ(f)=u, and the result follows from above. Taking n' instead of n in Proposition <ref> was crucial, as if n≡ 24, the Kronecker symbol is not well-defined modulo n. By using the correspondence between circles and quadratic forms, we can now assign a sign ± 1 to each circle in an Apollonian circle packing, which will dictate what quadratic obstructions can occur adjacent to it. By Proposition <ref>, the following is well-defined. Let be a circle in a primitive Apollonian circle packing of curvature n≠ 0, and let ρ =ρ(f_). Define χ_2():=ρn if n≡ 0, 14; -ρn/2 if n≡ 24; 2ρn if n≡ 34. Assume that is a circle having curvature n coprime to u and equivalent to either 14 or 78. Then χ_2()=ρn, so χ_2()≠un would rule out the existence of a circle tangent to of curvature uw^2 with w coprime to n. 
However, to rule out uw^2 in the entire packing, we need to show that all such circles in a packing have the same value of χ_2. This requires walking through a tangency chain of intermediate circles not necessarily coprime to u. The various cases in the definition are designed for this purpose. §.§ Propagation of χ_2 For a circle packing of type (6, x), all curvatures are 0, 14, which makes the propagation fairly easy. For the (8, x) packings, all curvatures are 2, 34, and Corollary <ref> will come into play. Let _1, _2 be tangent circles with non-zero coprime curvatures in a primitive Apollonian circle packing. Then χ_2(_1)=χ_2(_2). Let the curvatures of _1,_2 be a,b respectively. Since (a+b, a)=(a+b, b)=1, by Proposition <ref>, we can take ρ(f__1)=a+b=ρ(f__2) (noting that a+b>0). We separate into cases based on the type of packing and a,b4. First, assume the packing has type (6, k). Then a,b≡ 0, 14, with at least one being odd. Therefore χ_2(_1)χ_2(_2)=a+baa+bb=abba=1, by quadratic reciprocity. This implies that χ_2(_1)=χ_2(_2), as claimed. Otherwise, the packing has type (8, k), and we make two cases. If both a and b are odd, by Corollary <ref>, we can assume a≡ 38 and b≡ 78. Thus χ_2(_1)χ_2(_2)=2a+2ba2a+2bb=2abbaab=(-1)(-1)=1, where we used the fact that ab≡ 58 and quadratic reciprocity. Finally, assume that a is odd and b is even, necessarily 24. Write b=2b', and we compute χ_2(_1)χ_2(_2)=2a+2ba-a-bb'=4b'a-ab'=-1b'b'aab'=(-1)^(b'-1)/2(-1)^(b'-1)/2=1, completing the proof. §.§ Coprime curvatures Since Proposition <ref> requires coprime curvatures, to extend χ_2 to the entire packing, we need some results on coprimality of tangent curvatures in the packing. If p divides the curvatures of two tangent circles _1 and _2, then it is an immediate consequence of primitivity that there is a circle tangent to both _1 and _2 that is coprime to p. However, we need something slightly stronger: that for any _1 and _2, we can find a circle tangent and coprime to both. For this, we characterise the possible curvatures tangent to both by a polynomial, and apply the Chinese remainder theorem. Let (a, b, c, d) be a Descartes quadruple, where curvatures a, b correspond to circles _1,_2 respectively. The curvatures of the family of circles tangent to both _1 and _2 are parameterized by f(x) = (a + b)x^2 - (a + b + c - d)x + c, x ∈. The relevant family of curvatures is given by applying powers of S_4S_3 to the initial quadruple. We have that 2(a+b+f(x+1))-f(x)=f(x+2) and 2(a+b+f(x-1))-f(x)=f(x-2). We show by two-tailed induction that (S_4S_3)^k(a,b,f(0),f(1))=(a,b,f(2k),f(2k+1)), k ∈. The forward inductive step is (S_4S_3)^k+1(a,b,f(0),f(1)) = (S_4S_3)(a,b,f(2k),f(2k+1)) = S_4(a,b,f(2k+2),f(2k+1)) = (a,b,f(2(k+1)),f(2(k+1)+1)), and the other direction is similar. Let _1, _2∈ be tangent circles in a primitive Apollonian circle packing. Then there exists a circle ' that is tangent to both _1 and _2, and whose curvature is coprime to both _1 and _2. It suffices to show that the function f(x) of Lemma <ref> takes at least one value simultaneously coprime to a and b. By the Chinese remainder theorem, it suffices, for each prime p dividing a or b, to show that f(x) does not vanish identically over 𝔽_p. Suppose p=2, so at least one of a or b is even. Then, since any Descartes quadruple has two even curvatures and two odd curvatures, f(x) is not always even. For p > 2, a quadratic polynomial can only vanish on all of 𝔽_p if its coefficients vanish, in which case p | a+b, a+b+c-d, c. 
But then p | a,b,c,d, a contradiction in a primitive packing. Let , '∈ be two circles in a primitive Apollonian circle packing. Then there exists a path of circles _1, _2, …, _k such that * _1= and _k='; * _i is tangent to _i+1 for all 1≤ i≤ k-1; * The curvatures of _i and _i+1 are non-zero and coprime for all 1≤ i≤ k-1. There exists a path of circles _1, _2, …, _k that satisfy the first two requirements. For any consecutive pair whose curvatures are not coprime, by Lemma <ref> we can insert another circle between them which is tangent with coprime curvature to each, which yields a valid path. The following corollary now follows directly from Proposition <ref> and Corollary <ref>. The value of χ_2 is constant across all circles in a fixed primitive Apollonian circle packing . Denote this value by χ_2(). Before proving the sets of quadratic obstructions, we have one final coprimality lemma. Let be a circle of curvature n in a primitive Apollonian circle packing. Then there exists a circle tangent to with curvature coprime to 6n. If there is a circle tangent to of curvature divisible by d = 6/(6,n), then we can use Lemma <ref> to finish. Using Lemma <ref>, let a represent the unique non-zero residue modulo 3 attained by this packing. By the same lemma, quadruples modulo 3 are of the form (0,a,a,a) or (0,0,a,a) up to permutation, and furthermore, S_2 swaps between these two. Modulo 2, they are always of the form (0,0,1,1) up to permutation. Therefore, if d=2 or 3, taking any quadruple containing will suffice to find a curvature which is divisible by d. If d=6 (in which case n is coprime to 6), take a quadruple containing in the first position. If it contains a curvature divisible by 6, we are done. Otherwise, without loss of generality, it is simultaneously of the form (a,0,a,a) 3 and (1,1,0,0) 2; by applying the swap S_3 or S_4 we can obtain a curvature divisible by 6. §.§ Quadratic obstructions For each type of packing, we can assemble the above results to determine which values of u and χ_2 produce quadratic obstructions. Let have type (6, k). Then the following quadratic obstructions occur, as a function of type and χ_2(): Type χ_2() Quadratic obstructions (6, 1) 1 -1 n^2, 2n^2, 3n^2, 6n^2 (6, 5) 1 2n^2, 3n^2 -1 n^2, 6n^2 (6, 13) 1 2n^2, 6n^2 -1 n^2, 3n^2 (6, 17) 1 3n^2, 6n^2 -1 n^2, 2n^2 We work with the contrapositive: for each type, we assume a quadratic family appears, and compute χ_2(). Assume that a circle of curvature uw^2 appears in a packing of type (6,k). By Lemma <ref>, it is tangent to a circle with curvature n coprime to 24uw^2, hence we can take n≡ k24. By Proposition <ref>, the existence of the curvature uw^2 tangent to means un=ρ(f_)n=χ_2(), using Definition <ref> and Corollary <ref>. By quadratic reciprocity, we have k24 2n 3n 1 1 1 5 -1 -1 13 -1 1 17 1 -1 These values give the claimed table. Note that not listing a value of u as a quadratic obstruction in the table does not imply that it cannot be an obstruction, only that this proof method does not work for it. The completeness of these lists is discussed in Section <ref>. In particular, the (6, k) entries in the table of Theorem <ref> are filled in by intersecting the quadratic obstructions with the possible residue classes. Next, we complete the computation for (8, k), which is similar but a little trickier. The issue stems from there being two residue classes that are coprime to 24. Let have type (8, k). 
Then the following quadratic obstructions occur, as a function of type and χ_2(): Type χ_2() Quadratic obstructions (8, 7) 1 3n^2, 6n^2 -1 2n^2 (8, 11) 1 -1 2n^2, 3n^2, 6n^2 Repeat the proof of Proposition <ref>: assume that a circle of curvature uw^2 appears in . Thus there exists a circle of curvature n coprime to 24uw^2 that is tangent to our starting circle. Thus un=ρ(f_)n=2nχ_2(), i.e. χ_2()= 2un. By quadratic reciprocity, we have k24 2n 3n 7 1 -1 11 -1 1 19 -1 -1 23 1 1 First, assume u=2. Then χ_2()=1, giving the conditions. Next, take u=3. The only admissible residue class with elements of the form 3w^2 is 324, so we can assume that w is odd. If the packing has type (8, 7), we divide in two cases. If n≡ 724, then χ_2() = 2un=-1. The other possibility is n≡ 1924. However, in this case the two circles under consideration have curvatures which are 3 8, in contradiction to Corollary <ref>. For packing type (8, 11), a similar argument works. The case n≡ 1124 is ruled out by Corollary <ref>, and n≡ 2324 implies χ_2()=2un=1. The final case is u=6. Then χ_2() = 3n, and the conclusion follows from the table of residues. It is reasonable to ask if the results in this section can be extended to other values of u, as we focused on u| 6. The proof will work for larger values of u that have no prime divisors other than 2 or 3, but these obstructions are already contained in those with u| 6. If u has a prime divisor p≥ 5, then the Kronecker symbol pn cannot be determined from the residue class n24. It will rule out uw^2 from appearing tangent to a subset of the circles in , but this is not enough to cover the entire packing. Interestingly, this suggests that there may be “partial” obstructions: quadratic families whose members appear less frequently than other curvatures of the same general size. § QUARTIC OBSTRUCTIONS The proof strategy in this section is similar to the quadratic case, where we define an invariant on the circles in the packing, and show that this forbids certain curvatures. The main difference is the quartic restrictions will only apply to two types of packings, and the propagation of the invariant comes down to quartic reciprocity for [i]. §.§ Quartic reciprocity We recall the main definitions and results of quartic reciprocity; see Chapter 6 of <cit.> for a longer exposition. The Gaussian integers [i] form a unique factorization domain, with units being {1, i, -1, -i}. For α=a+bi∈[i], denote the norm of α by N(α):=a^2+b^2. We call α odd if N(α) is odd, and even otherwise. An even α is necessarily divisible by 1+i. If α=a+bi∈[i] is odd, call it primary if α≡ 12+2i. This is equivalent to (a, b)≡ (1, 0), (3, 2)4. The associates of α are α, iα, -α, -iα. If α is odd, then exactly one associate of α is primary. The quartic residue symbol αβ takes in two coprime elements α,β∈[i] with β odd, and outputs a power of i. Let π be an odd prime of [i]. If α is coprime to π, define απ to be the unique power of i that satisfies απ≡α^N(π)-1/4π. Extend the quartic residue symbol multiplicatively in the denominator: α u π_1 π_2 ⋯π_n = αuαπ_1απ_2⋯απ_n, where αu=1 for any unit u∈[i]. The basic properties of this symbol, and the statement of quartic reciprocity are summarized next, following <cit.>. <cit.> The quartic residue symbol satisfies the following properties: * If α_1, α_2 ∈[i] with α_1α_2 coprime to an odd β∈[i], then α_1α_2β=α_1βα_2β. * If α_1, α_2, β∈[i] satisfy α_1≡α_2β, α_1 and β are coprime, and β is odd, then α_1β=α_2β. * If a, b∈ are coprime integers with b odd, then ab=1. 
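Since everything below is normalized to primary associates, we note that this normalization is entirely mechanical; a short PARI/GP helper (names ours), using the congruence description of primary elements given above:

\\ An odd alpha = a+bi is primary iff (a, b) = (1, 0) or (3, 2) modulo 4.
is_primary(al) = (real(al) % 4 == 1 && imag(al) % 4 == 0) || (real(al) % 4 == 3 && imag(al) % 4 == 2);

\\ The unique primary associate of an odd Gaussian integer.
primary_associate(al) =
{
  for(k = 0, 3, if(is_primary(I^k * al), return(I^k * al)));
  error("alpha is even");
}

print(primary_associate(2 + 3*I));   \\ 3 - 2*I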
<cit.> Let α=a+bi be primary. Then iα=i^1-a/2, -1α=i^b, 1+iα=i^a-b-b^2-1/4, 2α=i^-b/2. If β∈[i] is relatively prime to α and primary, then αβ=(-1)^N(α)-1/4N(β)-1/4βα. In particular, if either α or β is an odd primary integer, then αβ=βα. §.§ Definition of χ_4 Let be a circle of curvature n≠ 0 in a primitive Apollonian circle packing. It is associated to [f_], a (2, )-equivalence class of primitive integral positive definite binary quadratic forms of discriminant -4n^2. In order to define χ_2(), we observed that the values of the quadratic form f_ were consistent in their residuosity modulo n: they were all either quadratic residues, or all non-residues (at least once we restricted to those coprime to n). By using the bijection between quadratic forms and fractional ideals, we obtain a unique homothety class of lattices [Λ], for some Λ⊆[i]. Recall that the values of f_ are exactly the norms of the elements of Λ (up to a global scaling factor). By considering the elements of this lattice, instead of their norms, we can recover finer information than just quadratic residuosity: we can access quartic residuosity. This is the key insight in defining χ_4. Up to multiplication by a unit, there is a unique representative of [Λ] that lies inside [i] with covolume n <cit.> (the cited proposition is more general, but it is only in the case of class number one that this is true of every homothety class). In general, any rank-2 lattice Λ⊆[i] has an order, ord(Λ):= {λ∈[i] : λΛ⊆Λ}, which is an order of (i), not necessarily maximal. By the conductor of Λ, we will mean the conductor of this order, which is a positive integer. The following can be proven by observing that Λ is locally principal, or by choosing a Hermite basis for Λ, which will have one element in ord(Λ). <cit.> Let n be a positive integer. Let Λ⊆[i] be a lattice of covolume n and conductor n, and define S_Λ:={β∈Λ:β is coprime to n}. Then the image of S_Λ in ([i]/n[i])^×/(/n)^× consists of a single element. The uniqueness property allows us to define the invariant χ_4. Let be a primitive Apollonian circle packing of type (6, 1) or (6, 17), and let be a circle of non-zero curvature n in , necessarily satisfying n≡ 0, 1, 48. Suppose corresponds to a lattice Λ_⊆[i] of covolume n. Let β=a+bi∈ S_Λ_∪ S_iΛ_, where β is chosen to be primary if n is even. Write n=2^en' where n' is odd. Define χ_4() as χ_4():= (-1)^be/4βn' if n≡ 08; βn if n≡ 18; -1n'βn' if n≡ 48. There exists a choice of β satisfying all requirements, and the definition of χ_4 is well-defined, independent of this choice, and lies in {1,i,-1,-i}. First, the set S_Λ_∪ S_iΛ_ is uniquely determined by . If n is odd and β, β' are two choices, then by Proposition <ref> we have β'=i^k(uβ+δ) for k=0, 1, an integer u coprime to n, and δ∈ n[i]. Using Propositions <ref> and <ref> (and recalling that n≡ 18), we compute β'n=in^kuβ+δn=uβn=unβn=βn, so the symbol is well-defined. Next, assume n is even, hence a multiple of 4. Pick an arbitrary β∈ S_Λ_, which is necessarily odd, and by replacing it with an associate, we can assume it is primary, which proves that a choice of β is possible. As before, assume that two valid choices are β, β', so that β'=i^k(uβ+δ) for some integer k=0, 1, integer u coprime to n, and δ∈ n[i]. Also write β=a+bi and β'=a'+b'i. If k=1, then b and b' have opposite parity, a contradiction to them being primary. Therefore k=0, so β'=uβ+δ. In particular, as n' is an odd integer, the analogous computation to Equation <ref> still holds, so β'n'=βn'. 
It remains to check that the extra factors in the definition of χ_4 in the even case are also independent. If n≡ 48, the extra factor is -1n', which does not depend on β. If n≡ 08, the extra factor is (-1)^be/4, which depends on b. We must verify that b≡ 04, so that the exponent of be/4 is integral. Assuming this is true, and using that 8|δ and u is odd, we have b'≡ ub≡ b8, which completes the proof. To prove that b ≡ 0 4, note that a^2+b^2=N(β) is a value properly represented by f_(x, y). If b≢04, then a^2+b^2≡ 58, hence f_(x, y) - n properly represents a number that is 5-0≡ 58, i.e. contains a curvature of this form. However, we are in a packing of type either (6, 1) or (6, 17), where all odd curvatures are 18, contradiction. §.§ Propagation of χ_4 Assume is a primitive Apollonian circle packing of type (6, 1) or (6, 17). In order to relate the χ_4 value of tangent circles, we need a value of β that works for both. Let _1 and _2 be tangent circles of non-zero coprime curvatures n_1, n_2 in . Then there exists a β∈[i] such that * N(β)=n_1+n_2; * β is a valid choice in Definition <ref> for both _1 and _2. The β is described by <cit.>, and is defined up to unit multiple, which allows for choosing β to be primary if necessary. That β is in Λ__1 and Λ__2 is a consequence of <cit.>. That the norm is correct is a consequence of <cit.>. Because this proof depends so heavily on the results from <cit.>, we will provide an overview of the relevant ideas there. The orbit of the extended real line under the Möbius transformations (2, [i]) includes all primitive integral Apollonian circle packings (scaled by a factor of 1/2); we call this the Schmidt arrangement. Thus, given the packing , we can place it within the extended complex plane ≅ℙ^1() in a canonical way, so that the tangency points of any circle are given by the projectivization of the -span in ^2 of two vectors [α, β], and [γ, δ], corresponding to tangency points α/β and γ/δ. Therefore the denominators of the tangency points in the packing form a lattice Λ = β + δ. In <cit.>, it is shown that these are the Λ described in the previous section. In particular, the lattices of tangent circles share a primitive vector corresponding to the tangency point. Let _1 and _2 be tangent circles of non-zero coprime curvature in . Then χ_4(_1)=χ_4(_2). Let n_1 and n_2 be the curvatures of _1 and _2 respectively. Since we are in type (6,1) or (6,17), n_1,n_2 ≡ 0,1, or 4 8. Take a β as promised by Proposition <ref>, and assume that n_1 is odd. If n_2 is also odd, then N(β)=n_1+n_2≡ 28, hence β=(1+i)β', with β' odd. By replacing β with an associate, we can assume that β'=a+bi is primary. We compute χ_4(_1)=βn_1=1+in_1β'n_1=i^n_1-1/4n_1β'. As n_1+n_2=N(β)=ββ, we have n_1≡ -n_2β'. Thus χ_4(_1)=i^n_1-1/4-n_2β'=i^n_1-1/4-1β'n_2β'=i^n_1-1/4i^bβ'n_2=i^n_1-1/4i^bi^-n_2-1/4χ_4(_2). In order to conclude that χ_4(_1)=χ_4(_2), we must have n_1-1/4+b-n_2-1/4≡ 04, i.e. n_1-n_2+4b≡ 016. If n_1≡ n_216, then 2(a^2+b^2)=n_1+n_2≡ 216, hence a^2+b^2≡ 18. In particular, 4| b (recall that β' is primary), and Equation <ref> follows. Otherwise, n_1≡ n_2 + 816, so 2(a^2+b^2)≡ 1016, and a^2+b^2≡ 58. This implies that b≡ 24, and again Equation <ref> is true. If n_2 is even, then β=a+bi is primary by assumption. We compute χ_4(_1)=βn_1=n_1β=-n_2β=n_2/4β, where the last equality follows from -4β=1+iβ^4=1. We now separate into the cases n_2≡ 0 or 4 8. 
If n_2≡ 08, write n_2=2^en_2' with n_2' odd, and χ_4(_2)=(-1)^be/4βn_2'=(-1)^be/4± n_2'β=(-1)^be/4∓ 1β-n_2'β=(-1)^be/4∓ 1β2^eβ^-1χ_4(_1), where the sign of the ± depends on n_2' modulo 4. As in the proof of Proposition <ref>, we have 4| b, hence -1 β=i^b=1, so the ± sign does not matter. We also compute 2^eβ^-1=(i^-b/2)^-e=(-1)^be/4=(-1)^-be/4, hence the terms cancel and χ_4(_2)=χ_4(_1). Finally, assume n_2≡ 48, so n_2'=n_2/4. If n_2'≡ 14, then χ_4(_2)=-1n_2'βn_2'=n_2'β=χ_4(_1), as desired. Otherwise, n_2'≡ 34 and χ_4(_2)=-1n_2'βn_2'=--n_2'β=--1βχ_4(_1)=-i^-bχ_4(_1). Since a^2+b^2=n_1+n_2≡ 58, we have b≡ 24, so -i^b=1, completing the proof. Corollary <ref> and Proposition <ref> combine to give the following corollary. The value of χ_4 is constant across all circles in a fixed Apollonian circle packing of type (6, 1) or (6, 17). Denote this value by χ_4(). §.§ Quartic obstructions Assume that is a primitive Apollonian circle packing of type (6,1) or (6,17). As in the quadratic section, χ_4 determines the quartic obstructions. Then the following quartic obstructions occur, as a function of type and χ_4(): Type χ_4() Quartic obstructions (6, 1) 1 -1, i, -i n^4, 4n^4, 9n^4, 36n^4 (6, 17) 1 9n^4, 36n^4 -1 n^4, 4n^4 i, -i n^4, 4n^4, 9n^4, 36n^4 Let u∈{1, 4, 9, 36}, and assume that a circle of curvature n=uw^4 appears in for some positive integer w. The proof works in a “contrapositive” manner: we divide into cases based on n8, and, for each u, restrict the values of χ_4(𝒜) that may permit the curvature n to appear. Let n_2 be the curvature of a circle ' tangent to that is coprime to n. Let β be chosen as in Proposition <ref> for the circles and '. If n≡ 08, then 2| w, hence n=2^en' with n' odd and 2| e. Thus χ_4()=(-1)^be/4βn'=β1 if u=1, 4; β3^2 if u=9, 36. Clearly β1=1, which gives the result for u=1,4 (specifically, if any curvature of the form n^4 or 4n^4 appears, then χ_4(𝒜)=1). For u=9, 36, we have n≡ 024. Let β=a+bi, and as 3 is prime in [i], β3^2≡β^4≡ (a+bi)^4≡ a^4+b^4+(a^3b-ab^3)i3. Then a^2+b^2= n+n_2≡ n_23. If the type of is (6, 1), then a^2+b^2≡ 13, so exactly one of (a, b) is 03. In either case, a^4+b^4+(a^3b-ab^3)i≡ 13, so χ_4()=1. If the type is (6, 17), then a^2+b^2≡ 23, so a^2≡ b^2≡ 13. Thus a^4+b^4+(a^3b-ab^3)i≡(a^2)^2+(b^2)^2+ab(a^2-b^2)i≡ 2≡ -13, so χ_4()=-1, again agreeing with the table. Next, assume n≡ 18. If u=1, we immediately have χ_4()=βw^4=1. Otherwise we have u=9, and in fact n≡ 924. Then χ_4()=β3^2 w^4=β3^2, from whence the analysis proceeds exactly as in the case of n≡ 0 8. Finally, take n≡ 48, which allows u=4, 36. If u=4, then n' is an odd fourth power, so χ_4()=-1n'βn'=1. If u=36, then n'=9t^4, so χ_4()=β3^2, which is again identical to the case of n≡ 0 8. There is a nice relationship between χ_4() and χ_2(). χ_4()^2=χ_2(). Let be a circle of odd curvature n in . Choose a circle ' tangent to of coprime curvature n_2, and choose β as in Proposition <ref> for circles and '. Then χ_4()^2=βn^2. Let ·· also denote the quadratic residue symbol for [i], and it follows that χ_4()^2=βn. By <cit.>, we have βn=N(β)n, with the second Kronecker symbol taken over . Since N(β)=n+n_2, we can take ρ(f_)=n+n_2, proving that χ_4()^2=χ_2(). As χ_2 and χ_4 are constant across , the result follows. From this proposition, if χ_4()=± i, then χ_2()=-1, implying that there are no squares in the packing. Since all quartic obstructions are squares, these cases were already removed, and we get nothing new. This is why we only list quartic obstructions for types (6, 1, 1) and (6, 17, 1). 
§ COMPUTATIONS In order to support Conjecture <ref>, code to compute the missing curvatures and remove the quadratic and quartic families was written with a combination of C and PARI/GP <cit.>. This code (alongside other methods to compute with Apollonian circle packings) can be found in the GitHub repository <cit.>. Files containing the sporadic sets for many small root quadruples can be found in the GitHub repository <cit.>. We summarize the results for the smallest three quadruples (ordered by the sum of the root quadruple curvatures) of each type in Tables <ref> and <ref>. In every case, the sporadic set appears to thin out in a typical way as the curvature bound increases. This is illustrated in Figure <ref>, showing the decreasing proportion of sporadic curvatures, as curvature size increases, for some large sporadic sets. The last column of Tables <ref> and <ref> shows the ratio of the largest curvature computed to the last known sporadic curvature. As this ratio increases, we can be increasingly confident that we have found the full sporadic set. In Figure <ref>, we compare our predicted values of #S_ to the “size” of the packing, and observe an upward trend. There are only 5 pairs of residue class and packing for which it appears that every single positive integer in that class occurs as a curvature. They are: * (-3, 5, 8, 8) and 5 (mod 24); * (-3, 4, 12, 13) and 13 (mod 24); * (-1, 2, 2, 3) and 11, 14, 23 (mod 24). Near misses are * (0, 0, 1, 1) and 1 (mod 24), which only misses the curvature 241 up to 10^10; * (-1, 2, 2, 3) and 2 (mod 24), which only misses the curvature 13154 up to 10^10. An intrepid observer of the raw sporadic sets may remark that, toward the tail end, the sporadic curvatures are disproportionately multiples of 5. In fact, they generally prefer prime divisors which are 1 (mod 4). We speculate that this is another local phenomenon: a result of certain symmetries of the distribution of curvatures in the orbit of quadruples modulo p ≡ 1 (mod 4) (similar to <cit.>). One particularly visually appealing way to observe the reciprocity obstructions is to plot the successive differences of the exceptional set. Since quadratic and quartic sequences have patternful successive differences, even a union of quadratic and quartic sequences reveals a prominent visual pattern once the sporadic set begins to thin out. See Figure <ref>.
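For readers who wish to experiment without the C code of <cit.>, the whole computation can be prototyped in a few lines of PARI/GP. The following naive search is ours and is only practical for small bounds (it stores complete quadruples and does no sieving), but it finds every curvature up to N in the packing of a given root quadruple, and hence the missing curvatures in a fixed residue class:

\\ All curvatures <= N in the packing with root quadruple q0 (naive).
curvatures_up_to(q0, N) =
{
  my(seen = Set([vecsort(q0)]), todo = List([q0]), curv = Set());
  while(#todo,
    my(q = todo[#todo]); listpop(todo);
    curv = setunion(curv, Set(select(c -> c > 0 && c <= N, q)));
    for(i = 1, 4,
      my(r = q, key);
      r[i] = 2*(vecsum(q) - q[i]) - q[i];
      key = vecsort(r);
      if(r[i] <= N && !setsearch(seen, key),
        seen = setunion(seen, Set([key]));
        listput(todo, r))));
  return(curv);
}

\\ Missing curvatures <= N in the class r mod 24 (take r in R(A)).
missing_in_class(q0, N, r) =
{
  my(class = select(m -> m > 0 && m <= N, vector(N\24 + 1, i, r + 24*(i - 1))));
  return(setminus(Set(class), curvatures_up_to(q0, N)));
}

For instance, missing_in_class([0, 0, 1, 1], 1000, 1) should return [241], reproducing the near miss recorded above; the sporadic sets of <cit.> were, of course, produced with much larger bounds and a far more efficient implementation.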
http://arxiv.org/abs/2307.01334v1
20230703201306
Finitely generated subgroups of algebraic elements of plane Cremona groups are bounded
[ "Anne Lonjou", "Piotr Przytycki", "Christian Urech" ]
math.GR
[ "math.GR", "math.AG", "math.DS", "14E07, 20F65, 37F10" ]
We prove that any finitely generated subgroup of the plane Cremona group consisting only of algebraic elements is of bounded degree. This follows from a more general result on `decent' actions on infinite direct sums. We apply our results to describe the degree growth of finitely generated subgroups of the plane Cremona group. § INTRODUCTION The Cremona group _2(k) over a field k is the group of birational transformations of the projective plane ^2 over k. Cremona groups have been the delight of algebraic geometers and group theorists in both classical and modern times. As of today, many aspects of _2(k) are well understood and there are many tools at hand to study those groups. For instance, _2(k) acts by isometries on an infinite dimensional hyperbolic space ℍ^∞ <cit.> and on various CAT(0) cube complexes <cit.>. Nevertheless, some questions have remained open. The goal of this article is to positively answer a question asked more than a decade ago by Favre in <cit.>. Let us fix projective coordinates [x:y:z] of ^2. An element f∈_2(k) is given by [x:y:z] ↦ [f_0:f_1:f_2], where the f_i∈ k[x,y,z] are homogeneous of the same degree and without non-constant common factor. The degree (f) of f is defined as the degree of the polynomials f_i. An element f∈_2(k) is algebraic if (f^n) is uniformly bounded for all n∈ℕ. A subgroup G<_2(k) is bounded if the degree of all elements in G is uniformly bounded. Clearly, a bounded subgroup consists of algebraic elements. However, the converse is not true. For instance, consider the subgroup defined in the affine coordinates (x,y) of ^2 by G={(x,y) ↦ (x+R(y), y) | R∈ k(y)}. Every element in G is algebraic, but G is not bounded. In this paper, we show the following theorem, which solves Favre's question: Let G<_2(k) be a finitely generated subgroup such that every element of G is algebraic. Then G is bounded. Note that the properties of being algebraic and being bounded are invariant under field extensions, so it is enough to show Theorem <ref> for algebraically closed fields. The first step towards the proof of Theorem <ref> is due to Cantat, who showed in <cit.> that a finitely generated subgroup G<_2(k) consisting of algebraic elements is either bounded or preserves a rational fibration (see <cit.> for a proof of this result for fields of arbitrary characteristic). It is therefore enough to show Theorem <ref> for finitely generated groups that preserve a rational fibration. To this end, we introduce the Jonquières complex — a CAT(0) cube complex with an isometric action of the group of birational transformations preserving a rational fibration, whose vertex stabilizers are bounded subgroups. Using a suitable description of this complex and the dynamics of _2(k) on ^1, we show that the action of our group on this complex is decent, as defined below. Let X_0 be a set. We say that a group G_0 acts on X_0 purely elliptically if each element of G_0 fixes a point of X_0. We say that a group G_0 acts on X_0 decently if * each subgroup of G_0 with a finite orbit fixes a point of X_0, and * each finitely generated subgroup of G_0 acting purely elliptically fixes a point of X_0. It is an easy exercise that if G_0 is the isometry group of a simplicial tree X_0, then G_0 acts on X_0 decently.
More generally, if G_0 is the isometry group of a CAT(0) cube complex X_0 with no infinite cubes, then G_0 acts on X_0 decently <cit.>. Similarly, if G_0 is the isometry group of a CAT(0) 2-complex X_0 with rational angles, then G_0 acts on X_0 decently <cit.>. For further examples, see <cit.>. §.§ Applications. The Cremona group _2(k) can be equipped with the Zariski topology (see for instance <cit.>). An algebraic subgroup of _2(k) is a Zariski closed bounded subgroup. This explains the terminology: an element in _2(k) is algebraic if and only if it is contained in an algebraic subgroup. An algebraic subgroup G is always projectively regularizable, i.e., there exists a birational map φ^2 S such that φ Gφ^-1<(S) for some regular projective surface S. This follows from the theorems of Weil and Sumihiro or from the fact that the number of base-points of elements in an algebraic subgroup is uniformly bounded (we refer to <cit.> for references and a proof of this fact, or to <cit.>). Theorem <ref> has therefore the following direct consequence: Let G<_2(k) be a finitely generated subgroup such that every element of G is algebraic. Then G is projectively regularizable. From another point of view, algebraic elements correspond exactly to elements in _2(k) inducing an elliptic isometry on the infinite dimensional hyperbolic space ℍ^∞, on which _2(k) acts <cit.>. Theorem <ref> therefore states that the action of _2(k) on ℍ^∞ is decent. While all algebraic elements are projectively regularizable, there exist also non-algebraic elements that are projectively regularizable. It is still unknown whether a finitely generated subgroup of the Cremona group containing only projectively regularizable elements is projectively regularizable or not. This question is equivalent to the question whether _2(k) acts decently on the blow-up complex — a CAT(0) cube complex constructed in <cit.>. In <cit.>, the first and third authors together with Genevois positively answer this question when the base-field is finite. Since elements in _2(k) preserving a rational fibration are projectively regularizable if and only if they are algebraic, Corollary <ref> directly implies the following. Let G<_2(k) be a finitely generated subgroup preserving a rational fibration such that every element in G is projectively regularizable. Then G is projectively regularizable. The notion of algebraic elements generalizes to birational transformations of arbitrary regular projective surfaces and Theorem <ref> generalizes to this setting. However, the most interesting and difficult case is the one of rational surfaces. In order to keep the notation more accessible, we discuss and prove the general case in the separate Section <ref>. In Section <ref> we apply Theorem <ref> to give a first description of the asymptotic degree growth of finitely generated subgroups of _2(k). This opens up new interesting questions about the dynamical behaviour of finitely generated subgroups of _2(k). Let G<_2(k) be a finitely generated subgroup with a finite generating set T. Denote by B_T(n) the set of all elements in G of word length l_T at most n with respect to the generating set T, and define D_T(n):=max_f∈ B_T(n){(f)}. It has been shown in <cit.> that there are only countably many integer sequences that can appear in this way. However, still very little is known about them. In the case where T consists of a single element, the growth of the function D_T has been studied a lot (see Theorem <ref>). 
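To fix ideas, we recall three standard one-generator examples, written in the affine coordinates (x,y) used above. A linear automorphism of the plane has degree 1 together with all of its iterates, so the corresponding function D_T is bounded. The map f(x,y)=(x,xy) satisfies f^n(x,y)=(x,x^ny), and the degree of f^n as a birational transformation of the plane equals n+1; the degrees grow linearly and f preserves the rational fibration (x,y)↦ x. Finally, the Hénon map h(x,y)=(y,y^2+x) satisfies the classical relation that the degree of h^n equals 2^n, so the degrees grow exponentially and λ(h)=2.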
Theorem <ref> is the main ingredient for the following result, which we prove in Section <ref>. For two functions, f and g on , we write f≍ g if f(x)≤ ag(bx) and g(x)≤ cf(dx) for some a,b,c,d>0. Let be an algebraically closed field and let G<_2(k) be a finitely generated subgroup with generating set T. Then one of the following is true: * All elements in G are algebraic, G fixes a point in , and D_T(n) is bounded. (2a) The group G contains an element that induces a parabolic isometry of , D_T(n)≍ n, and G preserves a rational fibration. (2b) The group G contains an element that induces a parabolic isometry of , D_T(n)≍ n^2, and G preserves a rational fibration. (3) The group G contains an element that induces a loxodromic isometry of and D_T(n)≍λ^n for λ>0. It would be interesting to study the dynamical behaviour of the degrees of finitely generated subgroups in more detail. In Corollary <ref>, the asymptotic behaviour does not depend on the choice of the finite generating set T. For instance, one could ask, after fixing T, what is the precise asymptotic growth of D_T(n), if the group G contains a loxodromic element: Let G<_2(k) be a finitely generated subgroup containing an element that induces a loxodromic isometry of , with finite generating set T. What is the asymptotic growth of D_T(n)? Do we always have D_T(n)∼ cλ^n for some constants c,λ>0? Here, we write f∼ g if f and g are functions such that lim_n→∞f(n)/g(n)=1. In another direction, it has been shown that the dynamical degree of f∈_2(k), i.e., the number λ(f):=lim_n→∞(f)^1/n is always an algebraic integer (more precisely, it is always a Pisot or Salem number, see <cit.>). This leads to the following natural question: Let G<_2(k) be a finitely generated subgroup with finite generating set T. Which real numbers can be realized as lim_n→∞D_T(G)^1/n? For instance, it could be interesting to construct examples of such subgroups and generating sets such that lim_n→∞D_T(G)^1/n is irrational. §.§ Decent actions. In Section <ref> we will show that Theorem <ref> is a special case of a following more general result on groups acting decently on an infinite direct sum. A pointed set (X_0,x_0) is a set X_0 and a point x_0∈ X_0. The direct sum ⊕_p∈ P (X_p,x_p) of a family {(X_p,x_p)}_p∈ P of pointed sets is the set of sections {y_p}_p∈ P with y_p∈ X_p such that all but finitely many y_p are equal to x_p. Note that for finite P we have ⊕_p∈ P (X_p,x_p)=Π_p∈ P X_p. For infinite P, we have that ⊕_p∈ P (X_p,x_p) is a proper subset of Π_p∈ P X_p. If each X_p is a simplicial tree and each x_p is a vertex (which will be the case in our application towards Theorem <ref>), then ⊕_p∈ P (X_p,x_p) has a structure of a cube complex. This cube complex is (0) though this will not be exploited explicitly in the current article. Let G_0 be a group acting on X_0 and let H be a group acting on P. The (unrestricted) wreath product G_0≀_P H of G_0 and H over P is the semidirect product of Π_p∈ P G_p, where G_p=G_0, with the group H acting on Π_p∈ P G_p by h·{g_p}_p∈ P={g_h^-1(p)}_p∈ P. Let x_0∈ X_0. We will be considering the subgroup G^⊕ of G_0≀_P H preserving ⊕_p∈ P (X_p,x_p), where X_p=X_0 and x_p=x_0, and where the action is defined as follows. For g={g_p}_p∈ P∈Π_p∈ PG_p, we have g ·{y_p}_p∈ P={g_p(y_p)}_p∈ P. For h∈ H, we have h·{y_p}_p∈ P={y_h^-1(p)}_p∈ P. It is not hard to verify that if X_0 is a simplicial tree, and f is a combinatorial isometry of the cube complex ⊕_p∈ P (X_p,x_p), then f induces a bijection of P. 
Furthermore, if G_0 is the group of all simplicial isometries of X_0, then we have f∈ G^⊕ if and only if H contains that bijection. For example, if P={p,q} with X_0 the real line tiled by unit intervals, then ⊕_p∈ P (X_p,x_p) is the square tiling of the Euclidean plane, and a 90^∘ rotation of the plane induces the bijection interchanging p and q. Let G_0 be a group acting decently on X_0. Let P be the projective line ^1 over an algebraically closed field k, and let H=Aut(^1). Then G^⊕ acts decently on ⊕_p∈ P (X_p,x_p). We will prove Theorem <ref> in Section <ref>. Note that in Theorem <ref>, we have to make some assumptions on P and H. For example, assume that the set X_0 is finite and G_0 is the symmetric group of X_0. Suppose that H is a finitely generated infinite torsion group acting by left multiplication on P=H. Then the entire G^⊕ is finitely generated and acts purely elliptically on ⊕_p∈ P (X_p,x_p), but does not have a fixed-point. § PRELIMINARIES In this section, we briefly recall some well-known facts about blow-ups and conic bundles. We assume our base-field k to be algebraically closed. Unless mentioned otherwise, surfaces are assumed to be projective and smooth. §.§ Subgroups of algebraic elements of automorphism groups Let us observe that Theorem <ref> holds if we work with finitely generated groups of automorphism groups: Let S be a surface and let G<(S) be a finitely generated subgroup such that every element in G is algebraic. Then G is bounded. The linear action of (S) on the Néron–Severi lattice N^1(S) of S yields a homomorphism (S)→(N^1(S))≃_n(), whose kernel is an algebraic group and therefore bounded. The image of an element g∈ G in (N^1(S)) is of finite order, since g is algebraic (see for example <cit.>). Using that G is finitely generated, we obtain that the image of G in (N^1(S)) is finite. Therefore, G is a finite extension of a bounded group and therefore bounded itself. §.§ Bubble space, strong factorization, and base-points Let S be a surface and let s∈ S. Then there exists a surface S̃ and a morphism πS̃→ S such that the fibre E over s is a Cartier divisor and π induces an isomorphism between S̃∖ E and S∖{s}. The morphism πS̃→ S is called the blow-up of S in s, and it is unique up to isomorphism. The bubble space of S is the set (S) of triples (t,T,π), where π T→ S is a birational morphism from a surface T and t is a point on T. Two triples (t,T,π) and (t', T',π') are identified if π^-1π' is a local isomorphism around t' mapping t' to t. The bubble space can be thought of as the set of all points on S and on surfaces obtained from S by successively blowing up points. The points in (S) contained in S are called proper points. Zariski's strong factorization theorem states that every birational transformation f S T between surfaces can be factored into blow-ups of points. More precisely, there exists a surface Z and a factorization Z [ld, "π"'] [rd, "ρ"] S [rr, "f", dashed] T where π and ρ are compositions of blow-ups of points. Note that the points blown up by π and ρ can be seen as elements of the bubble space ( S) and ( T) respectively. Moreover, Z can be chosen in such a way, and will be denoted by Z_f, that for any other such factorization Z' [ld, "π'"'] [rd, "ρ'"] S [rr, "f", dashed] T there exists a surjective morphism η Z'→ Z_f such that π'=πη and ρ'=ρη. If we require Z_f to be minimal in this sense, this factorization is unique up to isomorphism and up to possibly changing the order of blowing up the points. 
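A classical example to keep in mind is the standard quadratic involution σ given in the coordinates [x:y:z] by [x:y:z]↦[yz:xz:xy]. Its minimal resolution Z_σ is obtained by blowing up the three coordinate points [1:0:0], [0:1:0] and [0:0:1]; the morphism ρ then contracts the strict transforms of the three coordinate lines. In particular, σ has exactly three base-points, namely the coordinate points, and all of them are proper points of the plane.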
The points (in (S)) blown up by π are called the base-points of f. A base-point s of f is called persistent if there exists l≥ 1 such that, for all n≥ l, the point s is a base-point of f^n but s is not a base-point of f^-n. §.§ Conic bundles A rational fibration on a surface S is a morphism π S→ C, where C is a curve, such that all the fibres are rational curves. Let π S→ C and π' S'→ C be rational fibrations. We say that a birational transformation f S S' preserves the fibrations if there exists an automorphism ħ(f) C→ C such that the following diagram commutes S [d, "π"] [rr, "f", dashed] S' [d, "π'"] C [rr, "ħ(f)"] C. An elementary transformation is blowing up a point s∈ S followed by blowing down the strict transform of the fibre containing s. The following fact is well-known (see for instance <cit.>): Let π S→ C and π' S'→ C be rational fibrations. Every birational transformation f S S' preserving the fibrations can be factored into a sequence of elementary transformations. A conic bundle is a composition with a rational fibration π S→ C of a sequence of blow-ups S̃→ S, where we blow up at most one point s∈ S in each fibre. Such a fibre that consists of two rational curves of self-intersection -1, called also -1-curves (the preimage of s under the blow-up and the strict transform of the fibre of s). § THE JONQUIÈRES COMPLEX In this section we reduce Theorem <ref> to Theorem <ref>. We always assume our base-field k to be algebraically closed and surfaces to be projective and smooth. §.§ The blow-up complex Let T be a surface. In <cit.>, the authors constructed the blow-up complex (T) — a (0) cube complex with an isometric action of (T). Let us briefly recall the construction of (T). The vertices of (T) are equivalence classes of marked surfaces, i.e., pairs (S,φ), where S is a surface and φ S T a birational transformation. Marked surfaces (S,φ) and (S',φ') are equivalent if φ^-1φ' S'→ S is an isomorphism. Vertices (represented by) (S,φ) and (S̃,φ̃), are connected by an edge if φ^-1φ̃S̃→S is the blow-up of a point (or its inverse). More generally, 2^n different vertices form an n-cube if there is a marked surface (S,φ) and distinct points s_1,…, s_n∈ S such that these vertices are obtained by blowing up subsets of {s_i}_{1≤ i ≤ n}. §.§ The Jonquières complex Let _0=^1×^1 with the rational fibration π_0_0→^1 onto the first factor. We define the Jonquières complex 𝒥 as the subcomplex of the blow-up complex 𝒞(_0) induced on the set of vertices represented by marked surfaces (S,φ) such that, for π=π_0φ, the rational map π S^1 is a conic bundle. Consider 2^n vertices of 𝒥 spanning a cube. This means that there exists a surface (S,φ) and distinct points s_1,…, s_n∈ S such that these vertices are obtained by blowing up subsets of {s_i}_{1≤ i ≤ n}. Since these vertices belong to the Jonquières complex, we have that the points s_i belong to distinct fibres of π. In other words, we have π(s_i)≠π(s_j) for all i≠ j. Note that the Jonquières complex is not a convex subcomplex of the blow-up complex 𝒞(_0). Indeed, consider the birational transformation φ:(x,y)↦ (x,x^2+y). It has three base-points, but only one of them is proper. Let ρ S→_0 be the minimal resolution of φ. Then the vertex represented by (S,ρ) is not a vertex of the Jonquières complex because more than one point has been blown-up in the same fibre. However, the vertex represented by (S, ρ) lies on a geodesic edge-path between the vertices (_0,𝕀) and (_0, φ^-1). 
Nevertheless, by Proposition <ref>, the 1-skeleton of 𝒥 it is isometrically embedded in the 1-skeleton of 𝒞(_0). The Jonquières group is the subgroup of (_0) consisting of the Jonquières transformations f, which preserve the rational fibration π_0_0→^1. The Jonquières group acts on the vertex set of by f·(S, φ)=(S, fφ), and this action extends to an action by isometries on the entire . A subgroup G of the Jonquières group is a subgroup of (S) for some conic bundle π S→^1 if and only if G fixes a point in . This is because if G fixes an interior point of a cube in 𝒥 described via a surface S in Remark <ref>, then G fixes the vertex corresponding to S, and so G< (S). Thus, by Lemma <ref>, to prove Theorem <ref> for a subgroup G of the Jonquières group, we need to find a fixed-point for G in . For any p∈^1, consider the subcomplex X_p of 𝒥 induced on the vertices represented by the marked surfaces (S,φ), where φ S_0 induces an isomorphism between S∖π^-1(p) and _0∖π_0^-1(p). For any p∈^1, the subcomplex X_p is a tree. First note that X_p is connected as a consequence of Proposition <ref> and the fact that two elementary transformations performed in distinct fibres commute. Second, consider the vertex v_0=(_0,𝕀) in X_p, and let ^1_p⊂_0 be the fibre over p. Let v_0,v_1,v_2,v_3,… be the consecutive vertices on an edge-path without backtracks in X_p. The surface v_1 is obtained from v_0 by blowing up a point in ^1_p to a -1-curve. The surface v_2=(S_2,φ_2) is obtained from v_1 by blowing down the other -1-curve in the fibre over p to a point s_2. In particular, the birational transformation φ^-1_2 sends the entire ^1_p to s_2. Continuing, for i≥ 2 the surface v_2i-1 is obtained from v_2i-2 by blowing up a point in the fibre over p distinct from s_2i-2 to a -1-curve, and the surface v_2i is obtained from v_2i-1 by blowing down the other -1-curve in the fibre over p to a point s_2i. Thus the birational transformation φ^-1_2i sends the entire ^1_p to s_2i. In particular, φ_2i≠𝕀, and so v_2i≠ v_0. This proves that there is no cycle in X_p, and so X_p is a tree. We choose a preferred vertex x_p=(_0,𝕀) in each X_p, and we consider the family of pointed sets {(X_p,x_p)}_p∈^1. The cube complex 𝒥 is isomorphic to ⊕_p∈^1 (X_p,x_p). As a consequence, the complex ⊕_p∈^1 (X_p,x_p) inherits the action of the Jonquières group from 𝒥. All the elements of the Jonquières group induce elements of Aut(^1) on ^1. Thus by Lemma <ref> and the preceding discussion, Theorem <ref> is a special case of Theorem <ref>. Let (S,φ) be a vertex of 𝒥. The birational transformation φ S_0 is decomposed as φ=σ_n⋯σ_1, where each σ_i is the blow-up of a point or the blow-down of a -1-curve in the fibre over a point p_i∈^1. For p_i≠ p_j, we have that σ_i and σ_j commute. Thus there is a finite subset Q⊂^1 such that φ=Π_p∈ Qφ_p, where each (of the commuting) φ_p is a composition of σ_i with p_i=p. Consider the marked surfaces φ_p S_p _0 for p∈ Q, and φ_p=𝕀, S_p=_0 for p∈^1∖ Q. They represent vertices of X_p. Then {(S_p,φ_p)}_p∈^1 defines a vertex of ⊕_p∈^1 (X_p,x_p). This is a bijective correspondence between the vertices of 𝒥 and ⊕_p∈^1 (X_p,x_p), and that it extends to an isomorphism on the entire complexes. The coordinate y_p of a point y∈⊕_p∈^1 (X_p,x_p)=𝒥 should be thought of as the “marked fibre” in the surface corresponding to y over the point p. 
Thus modifying the coordinate y_p of y corresponds to performing an alternating sequence of blow-ups of points and blow-downs of -1 curves in the fibre over p of the surface corresponding to y. § PROOF OF THE MAIN THEOREM In this section we prove Theorem <ref>. We keep the notation for the general ⊕_p∈^1 (X_p,x_p) (and not just for the Jonquières complex). We denote by ħ the quotient map G_0≀_P H→ H. Abusing the notation, for f∈ G_0≀_P H and p∈ P, we denote by f(p)∈ P the point ħ(f)(p). Note that, in Theorem <ref>, the first item in Definition <ref> for G^⊕ follows from the first item for G_0. Indeed, let Y⊂ be a finite orbit of G<G^⊕. For each orbit O⊂ P of ħ(G)<H, choose o∈ O. Note that the finite set Y_o={y_o y∈ Y} is a finite orbit in X_o of the projection to G_o of the stabiliser of o in G. Thus this projection has a fixed-point z_o∈ X_o. We set z_p=f(z_o) for any f∈ G with f(o)=p, which only depends on p and not on f since z_o was fixed by the projection to G_o of the stabiliser of o in G. Since Y⊂, we have z={z_p}_p∈. Then z is a fixed-point of G. §.§ Biregularity Note that for f=hg∈ G^⊕, with h∈ H and g∈Π_p∈ P G_p, and for y∈, the value f(y)_p equals g_h^-1(p)(y_h^-1(p)) and so it depends only on h,g_h^-1(p) and y_h^-1(p). Henceforth, slightly abusing the notation, we will refer to f(y)_p as f(y_f^-1(p)). This conveys the fact that for a Jonquières transformation f, the “marked fibre” in f(S) over p depends only on f and the “marked fibre” in S over f^-1(p). The following encapsulates the idea of a Jonquières transformation f having a persistent base-point in the fibre over a point p∈ P. Let z={z_r}_r∈ be a distinguished vertex. Let p∈ P and f∈ G^⊕. We say that f is biregular over p (with respect to z) if f(z)_f(p)=z_f(p) (or, in our notation from Convention <ref>, f(z_p)=z_f(p)). Equivalently, for f=hg with h∈ H, g∈Π_pG_0, we have g_p(z_p)=z_f(p) (which equals z_h(p)). Otherwise, we say that f is singular (w.r.t. z) over p. An element f∈ G^⊕ has persistent fibre over p∈ P if there exists l≥ 1 such that, for all n≥ l, we have that f^n is singular over p and f^-n is biregular over p. Note that f is biregular over p if and only if f^-1 is biregular over f(p). Furthermore, if f is biregular over p and f' is biregular over f(p), then f'f is biregular over p. The point z is a fixed-point for f if and only if f is biregular over all p∈ P. If f has persistent fibre over p, then it does not have a fixed-point y∈. Indeed, the orbit of p under ⟨ f⟩ is infinite, since f^m(p)=p with m>0 would imply, by Remark <ref>, that f^lm is simultaneously biregular and singular over p. Furthermore, for all n≥ l, we have z_f^n(p)≠ f^n(z_p)=f^n(f^n(z_f^-n(p))). For large n, we have y_f^± n(p)=z_f^± n(p), contradicting y_f^n(p)=f^2n(y_f^-n(p)). §.§ Abelian case The following proves Theorem <ref> in the case where ħ(G) is abelian. Note that here, as well as in Lemma <ref>, we do not need to assume that P is the projective line ^1. Let G_0 be a group acting decently on X_0. Let H be a group acting on P, and let G<G^⊕ be such that either ħ(G)<H is trivial or it contains an element that has only finitely many finite orbits on P. If G is finitely generated, acts purely elliptically on , and ħ(G) is abelian, then G fixes a point of . If ħ(G) is trivial, then G<Π_p∈ P G_p. Let Q⊂ P be a finite set such that each generator {g_p}_p∈ P of G satisfies g_p(x_p)=x_p for p∉ Q. For each q∈ Q, the projection of G to G_q is a purely elliptic subgroup, so it fixes a point y_q∈ X_q. 
Setting y_p=x_p for p∉ Q, we obtain a fixed-point {y_p}_p for G in . Thus from now on we can assume that there is t∈ G with ħ(t) having only finitely many finite orbits on P. Let z∈ be a fixed-point of t. Then t is biregular w.r.t. z over each p∈ P. Let Q⊂ P be the union of the finite orbits of ⟨ t⟩. Note that Q is preserved by ħ(G), since ħ(G) is abelian. Let p∈ P∖ Q. We claim that any f∈ G is biregular w.r.t. z over p. To justify the claim, let B be the finite set of b∈ P∖ Q over which f or f^-1 is singular. We will show that for some n>0, setting f_n=t^nf, we have f_n^i(p)∉ B, for all i ∈∖{0}. To find such n, suppose first that f^l(p)=t^k(p) for some k,l∈ with l> 0. Then choose m_0>0 such that, for all m≥ m_0, and all 0≤ j<l, we have t^± mf^j(p)∉ B. It suffices then to take n=m_0+|k|, since for any i≠ 0 we have f_n^i(p)=t^nif^i(p)=t^± mf^j(p) for some 0≤ j<l and m≥ m_0. Otherwise, if there is no l≠ 0 with f^l(p)∈⟨ t ⟩(p), then there are finitely many k∈ with t^k(p)∈⟨ f⟩ B, since B is finite. It suffices then to take n larger than the maximum of their |k|. Thus we have f_n^i(p)∉ B, for all i ∈∖{0}. If f was singular w.r.t. z over p, then by Remark <ref> f_n∈ G or its inverse would have persistent fibre over p (with l=1), contradicting Remark <ref> and justifying the claim. For each orbit O of ħ(G) in Q, choose o∈ O and y_o∈ X_o that is fixed by the projection to G_o of the stabiliser of o in G, which is of finite index in G, hence finitely generated. For each f∈ G, choose y_f(o)=f(y_o) (notation from Convention <ref>), which only depends on f(o) and not on f, since y_o was fixed by the projection to G_o of the stabiliser of o in G. Setting y_p=z_p for p∉ Q gives us a fixed-point y for G. §.§ Semisimple case In order to treat the case of non-abelian ħ(G), we use the dynamics of elements in (^1)=_2(k). An element a∈_2(k) is semisimple, if it is conjugate to a diagonal element. With respect to suitable local coordinates, the automorphism of ^1 induced by a is given by z↦λ z for some λ∈ k^*. Let us observe that a is of infinite order if and only if λ is not a root of unity. In this case, a fixes exactly two points of ^1 and it does not have any other finite orbit. If k admits a norm |·| with |λ|≠ 1, then a has north-south dynamics, defined below, in the topology of ^1 induced by |·|. Let a be a homeomorphism of a Hausdorff topological space P, fixing p,q∈ P, and having the following property. For any disjoint open sets U∋ p,V∋ q, there is n such that a^n(P∖ V)⊂ U and a^-n(P∖ U)⊂ V. We then say that a has north-south dynamics. Let G_0 be a group acting decently on X_0. Let H be a group acting on P. Let a∈ H have north-south dynamics for some Hausdorff topology on P in which H acts by homeomorphisms. Suppose also that Stab(p)∩Stab(q)<H is abelian. If G<G^⊕ is finitely generated, acts purely elliptically on , and ħ(G) contains a, then G has a fixed-point in . Choose t∈ G with ħ(t)=a. Let z∈ be a fixed-point of t. We claim that for any f∈ G and any r∈ P∖{p,q} with f(r)≠ p,q, we have that f is biregular w.r.t. z over r. To justify the claim, first consider the case where ħ(f) interchanges p and q. Then the group ⟨ a, ħ(f)⟩ is virtually abelian. By Lemma <ref>, the group ⟨ t,f⟩ has a finite orbit, hence a fixed-point y in by Remark <ref>. But all but finitely many coordinates of y have to be that of z, so t(y)=y implies y_r=z_r for all r in all infinite orbits of ⟨ t⟩, so for all r≠ p,q. Since f fixes y, we have that f is biregular w.r.t. z over r, as desired. 
Second, consider the case where ħ(f) does not interchange p and q. Then after possibly replacing t with t^-1 and interchanging p with q, we can assume f(p)≠ q. Let B⊂ P be the finite set of points over which f or f^-1 is singular w.r.t. z. Let U∋ p (respectively, V∋ q) be an open neighbourhood intersecting B∪{f^-1(p),f(q)} only possibly at p (respectively, q), and such that f(U) is disjoint from V, which is possible since f(p)≠ q. Let n>0 be as in Definition <ref>. If f is singular over r, then we have r,f(r)∈ P∖ V. Thus t^nf(r)⊂ U, but t^nf(r)≠ p since f(r)≠ p. Since f(U) is disjoint from V, and U does not contain f^-1(p), unless f(p)=p, we obtain inductively, for all m>0, that (t^nf)^m(r)∈ U∖ p. We also have t^-n(r)∈ V∖ q and analogously we obtain t^-n(t^nf)^m(r)∈ V∖ q for all m<0. Consequently, t^nf has persistent fibre over r (with l=1), which contradicts Remark <ref> and finishes the proof of the claim. Note that if the entire ħ(G) stabilises {p,q}, then we are done by Lemma <ref>. Suppose also for the moment that ħ(G) also does not stabilise p or q. Then there is f∈ G and r≠ p,q with f(r)=p. Consider another f'∈ G with r'≠ p,q and f'(r')=p. Then f^-1f'(r')=r. Applying the claim above to f^-1f', we have f^-1f'(z)_r'= z_r. Consequently, f'(z)_p= f(z)_p. We now replace the coordinate z_p of z by f(z)_p, which, as we have seen, does not depend on f. Note that the new z is still fixed by t, which can be verified by substituting above f'=tf. Furthermore, now f is biregular over r and f^-1 is biregular over p. We analogously change the z_q coordinate of z. We will verify that the new z is a fixed-point for G. It remains to verify the biregularity of f ∈ G over u∈{p,q} in the case where f(u)∈{p,q}. Choose any r≠ p,q and f'∈ G with f'(r)=u. Introduce f”=ff', which satisfies f”(r)=f(u). From the previous paragraph it follows that both f',f” are biregular over r. This implies that f is biregular over u, as desired. In the case where ħ(G) stabilises, say, p, we redefine z_p to be the fixed-point of the projection of G to G_p. §.§ Conclusion Recall that an element h∈_2(k) is unipotent, if it is conjugate to an element of the form ([ 1 c; 0 1 ]) for some c∈ k. By considering the Jordan decomposition, we observe that, for k algebraically closed, every element of _2(k) is either unipotent or semisimple. Suppose first that ħ(G) contains a semisimple element a of infinite order with eigenvalues λ, λ^-1 and fixed-points p,q∈^1. Since a is of infinite order, λ is not a root of unity. Let z∈. Let k̃⊂ k be the smallest field such that the points p and q, the scalar λ, the elements of ħ(G), and all the points over which the elements of G are singular w.r.t. z, are defined over k̃. Since G is finitely generated, the field k̃ is a finitely generated field extension over the prime field of k. Let _k̃=⊕_p∈^1(k̃) (X_p,x_p), where ^1(k̃) is the set of the k̃-rational points of ^1. Note that the action of G on projects to an action on _k̃. By <cit.>, we can embed k̃ as a subfield into some local field K with norm |·| such that |λ|≠ 1. Then a has north-south dynamics on ^1(k̃) with respect to the Hausdorff topology on ^1(k̃) induced by the topology of ^1(K). By Lemma <ref> applied with P=^1(k̃), we have that G fixes a point {y_p}_p∈^1(k̃)∈_k̃. Since all the elements of G are biregular w.r.t. z over all the points outside ^1(k̃), we obtain that G fixes the point {y_p}_p∈^1∈, where y_p=z_p for p∉^1(k̃). 
Otherwise, if ħ(G) does not contain a semisimple element of infinite order, then all the elements of ħ(G) are unipotent or of finite order. By <cit.>, there is a finite index subgroup G'<G with ħ(G') conjugate into the abelian subgroup of the elements of the form ([ 1 c; 0 1 ]). Lemma <ref> implies that also in this case G' (and hence G) fixes a point of . § FURTHER REMARKS AND APPLICATIONS §.§ Non-rational surfaces In order to keep the notation simple and accessible to readers not coming from algebraic geometry, we focused in this paper on rational surfaces, which is the richest and most interesting case. However, our main result also holds for arbitrary surfaces. Let S be a smooth projective surface over an algebraically closed field k. An ample divisor H on S defines a degree function _H on (S) by _H(f)=f^*H· H. An element g∈(S) is then algebraic if the degree sequence {_H(f^n)}_n is bounded. A subgroup G<(S) is bounded if {_H(f)| f∈ G} is bounded. The properties of being algebraic and being bounded do not depend on the choice of an ample divisor (see for example <cit.>). We have now the following result. Let S be a smooth projective surface over an algebraically closed field and let G <(S) be a finitely generated subgroup such that every element of G is algebraic. Then G is contained in an algebraic subgroup. If S is rational, then the result follows from Theorem <ref>. If the Kodaira dimension of S is non-negative, then there exists a smooth projective surface T birationally equivalent to S such that (T)=(T) and the result follows from Lemma <ref>. Finally, if the Kodaira dimension of S is -∞, but S is not rational, then S is birationally equivalent to ^1× C for some non-rational smooth curve C. In this case, all the elements of (S) preserve the rational fibration given by the projection to C. If the genus of C is >1, then (C)=(C) is finite. If the genus of C is 1, then C is an elliptic curve and (C)=(C) is virtually abelian. We can thus apply Lemma <ref>. §.§ Degree growth of finitely generated groups There is a well-known and important correspondence between the dynamical behaviour of birational transformations in _2(k) and the type of isometry they induce on the infinite dimensional hyperbolic space . The following theorem gives in particular a precise description of the degree growth. It is due to several people. We refer to <cit.> for details and references. Let be an algebraically closed field and f∈_2(k). Then one of the following is true: * The transformation f is algebraic, the isometry of induced by f is elliptic, and the degree sequence {(f^n)} is bounded. (2a) The isometry of induced by f is parabolic, (f^n)∼ cn for some c>0, and f preserves a rational fibration. (2b) The isometry of induced by f is parabolic, (f^n)∼ cn^2 for some n, and f preserves a fibration of genus 1 curves. (3) The isometry of induced by f is loxodromic, (f^n)=cλ^n +𝒪(1) for some c>0 and λ>1. We prove now Corollary <ref>: If all elements in G induce elliptic isometries on , i.e., they are algebraic, then G and hence D_T(n) are bounded by Theorem <ref>. This implies that the orbit of G on is bounded and hence that G fixes a point in . Next, consider the case, where no element in G induces a loxodromic isometry of , but there is an element f∈ G inducing a parabolic isometry. First, assume that f preserves a rational fibration. 
Then all elements in G preserve this same rational fibration (see for instance <cit.>) and after conjugation we may assume that G is a subgroup of the Jonquières group (note that the asymptotic growth of D_T(n) is invariant under conjugation). For an element g in the Jonquières group we have (g)=#(g)+1/2 (see for instance <cit.>), where #(g) denotes the number of base-points of g. Since #(gh)≤#(g)+#(h) we obtain that that D_T(n)≤ Kn, where K=max_g∈ T{#(g)}. At the same time, since by assumption G contains an element whose degree growth is linear, we have kn≤ D_T(n), for some k>0. Hence, D_T(n)≍ n. Now, assume that f preserves a fibration of curves of genus 1. Again, this implies that all of G preserves the same fibration of curves of genus 1 and after conjugation we may assume that G is a subgroup of (S), where S is a Halphen surface <cit.>. In this case, the statement follows from Lemma <ref> below. Finally, we consider the case, where G contains an element f inducing a loxodromic isometry on . By Theorem <ref>, there exist c and λ such that (f^n)=cλ^n+𝒪(1). On the other hand, for λ_2=max_g∈ T{(g)} we have D_T(n)≤λ_2^n. This shows that f≍λ^n. A Halphen surface is a rational smooth projective surface S such that |−mK_S| is a pencil of genus 1 curves with empty base locus for some m>0. The ideas used in the following lemma have been described in <cit.>. We follow the account described in <cit.>. Let S be a Halphen surface and let G<(S) be a finitely generated subgroup containing a non-algebraic element f. Then D_T(n)≍ cn^2 for some c>0. Let us first recall that after possibly passing to a finite index subgroup (which does not change the asymptotic growth of D_T(n)), we may assume that G is abelian. Moreover, all algebraic alements in (S) are of finite order and (S) does not contain any element inducing a loxodromic isometry on (see for instance <cit.> for these facts). Hence, again up to passing to a finite index subgroup, we may assume that all elements in G induce a parabolic isometry on . Let N_ℝ(S) be the Néron–Severi space of S. Recall that N_ℝ(S) comes with an intersection form of signature (1,(N_ℝ(S))-1), which is preserved by the action of (S) by push-forwards. There exists a nef divisor class D_0∈ N_ℝ(S) such that D_0· D_0=0 and such that g_*D_0=D_0 for all g∈ G (in fact, we can take D_0=mK_S). The assumption that all elements in G induce parabolic isometries on implies that for all f∈ G, the only eigenvectors of f are multiples of D_0. For all g∈ G, the restriction of g_* to D_0^⊥/D_0 has finite order, since g_* preserves an integral lattice and the induced intersection form on D_0^⊥/D_0 is negative definite. Hence, up to passing to a finite index subgroup of G, we may assume that the restriction of G to D_0^⊥/D_0 is the identity. Let f_1,…, f_k be generators of G. Let A∈ N_ℝ(S) be an ample divisor. Note that we have f_*A≠ A for all f∈ G and A∉ D_0^⊥. Write (f_i)_*A=A+R_i, where R_i∈ D_0^⊥. Since the restriction of (f_i)_* to D_0^⊥/D_0 is the identity, we can write (f_i)_*R_j=R_j+t_ijD_0 for some t_ij∈. Let us observe that (f_i)_*(f_j)_*A=(f_j)_*(f_i)_*A for all i and j implies that t_ij=t_ji for all i and j. By induction, we obtain (f_i)^n_*A=A+nR_i+n(n-1)t_ii/2D_0, and, as a consequence, (f_1)^n_1_*⋯ (f_k)^n_k_*A=A+∑_in_iR_i+(∑_in_i(n_i-1)/2t_ii+∑_i<jn_in_jt_ij)D_0. We obtain that _A(f_1^n_1⋯ f_k^n_k)=((f_1)^n_1_*⋯ (f_k)^n_k_*A)· A. Since A is ample, we have that A· D_0>0. Hence, D_T(n) has quadratic growth. 
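For completeness, here is the short induction behind the displayed formula for (f_i)^n_*A used in the proof above. Since (f_i)_*A=A+R_i, (f_i)_*R_i=R_i+t_iiD_0 and (f_i)_*D_0=D_0, we obtain (f_i)^n+1_*A=(f_i)_*(A+nR_i+n(n-1)t_ii/2D_0)=A+R_i+n(R_i+t_iiD_0)+n(n-1)t_ii/2D_0=A+(n+1)R_i+(n+1)nt_ii/2D_0, which is the claimed formula with n replaced by n+1.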
§ ACKNOWLEDGMENTS The first author would like to thank the CRM (Centre de Recherche Mathématiques de Montreal), the Simons foundation and the organisers of the thematic semester “Théorie géométrique des groupes” for her stay at the CRM where a part of this project was realized.
http://arxiv.org/abs/2307.01598v1
20230704093849
Fully general relativistic simulations of rapidly rotating quark stars: Oscillation modes and universal relations
[ "Kenneth Chen", "Lap-Ming Lin" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "physics.comp-ph" ]
[][email protected] [][email protected] Department of Physics, The Chinese University of Hong Kong, Hong Kong, China Numerical simulation of strange quark stars (QSs) is challenging due to the strong density discontinuity at the stellar surface. In this paper, we report successful simulations of rapidly rotating QSs and study their oscillation modes in full general relativity. Building on top of the numerical relativity code , we implement a positivity-preserving Riemann solver and a dust-like atmosphere to handle the density discontinuity at the surface. The robustness of our numerical method is demonstrated by performing stable evolutions of rotating QSs close to the Keplerian limit and extracting their oscillation modes. We focus on the quadrupolar l=|m|=2 f-mode and study whether they can still satisfy the universal relations recently proposed for rotating neutron stars (NSs). We find that two of the three proposed relations can still be satisfied by rotating QSs. For the remaining broken relation, we propose a new relation to unify the NS and QS data by invoking the dimensionless spin parameter j. The onsets of secular instabilities for rotating QSs are also studied by analyzing the f-mode frequencies. Same as the result found previously for NSs, we find that QSs become unstable to the Chandrasekhar-Friedman-Schutz instability when the angular velocity of the star Ω≈ 3.4 σ_0 for sequences of constant central energy density, where σ_0 is the mode frequency of the corresponding nonrotating configurations. For the viscosity-driven instability, we find that QSs become unstable when j≈ 0.881 for both sequences of constant central energy density and constant baryon mass. Such a high value of j cannot be achieved by realistic rotating NSs before reaching the Keplerian limit. The critical value for the ratio between the rotational kinetic energy and gravitational potential energy of rotating QSs for the onset of the instability, when considering sequences of constant baryon mass, is found to agree with an approximate value obtained for homogeneous incompressible bodies in general relativity to within 4%. Fully general relativistic simulations of rapidly rotating quark stars: Oscillation modes and universal relations Lap-Ming Lin August 1, 2023 ==================================================================================================================== § INTRODUCTION §.§ Quark stars Do strange quark stars (QSs) exist in nature? The question remains unanswered since the hypothesis that strange quark matter composed of u-, d- and s-quarks may be the ground state of baryonic matter was proposed as early as fifty years ago <cit.>. If strange quark matter is only metastable, hybrid stars consisting of quark matter cores surrounded by nuclear matter in the envelope may also exist (e.g., <cit.>). More recently, the possibility that quark matter containing only u- and d-quarks is the true ground state of baryonic matter for baryon number larger than 300 has also been considered <cit.>. While there is still no evidence for their existence, QSs have been proposed to explain some compact-object observations in the past <cit.>. More recently, the low-mass (< 1 M_⊙) central object of the supernova remnant HESS J1731-347 is suggested to be a QS <cit.>, as it is not possible to form such a low mass neutron star (NS) by conventional core-collapse supernova <cit.>. In the era of gravitational wave (GW) astronomy, the event GW190814 <cit.> has also been suggested to be a black hole-QS system <cit.>. 
Constraints on the equation of state (EOS) of quark matter have also been considered by assuming that the events GW170817 <cit.> and GW190425 <cit.> were due to merging QSs instead of NSs <cit.>. While the observed kilonova signal associated with GW170817 suggests that the event was due to a binary NS merger, this event by itself does not rule out the existence of QSs, since NSs and QSs could coexist according to the two-families scenario <cit.>. Better constraints on the properties of quark matter or even direct evidence for QSs might be possible as more GW events are expected to be observed in the coming decade. Numerical relativity simulations are indispensable tools for studying the GWs emitted from strongly dynamical spacetimes, such as the mergers of binary compact objects. While hydrodynamic simulations of NSs in full general relativity are performed routinely nowadays by different research groups (see <cit.> for recent reviews), only a few relativistic simulations have been obtained for QSs. The first binary QS simulation was done in 2009 <cit.> using the smooth particle hydrodynamics method and the conformally-flat approximation in general relativity. Fully general relativistic simulations of single and binary QSs <cit.> become available only in the past two years. In this paper, we add a contribution to this line of research by demonstrating our ability to evolve rapidly rotating QSs and study their oscillation modes. Our simulations were performed using the publicly available code <cit.>, with our own implementation of a positivity-preserving Riemann solver and a dust-like EOS for the atmosphere. Apart from the fact that the study of oscillation modes of compact stars is important in its own right (see below), the demonstration of stable evolutions of rapidly rotating QSs would be an important milestone for us to achieve before attempting generic nonlinear dynamical situations such as the gravitational collapse of a rapidly rotating unstable QS. The challenge to evolve a bare QS (without a thin nuclear matter crust), as described by the standard MIT bag-model EOS in a hydrodynamic simulation is due to its sharp high-density surface where the pressure vanishes. The high-density surface is directly in contact with the numerical “atmosphere", which is introduced to fill up the vacuum space with the purpose of stabilizing traditional grid-based hydrodynamic simulations. The low-density atmosphere is considered to have a negligible impact on the dynamics of compact stars when the evolution time is relatively short and comparable to the dynamical timescale such as in the case of binary spiral and merger; however, its small effects, if not properly handled, would accumulate and eventually kill a long-time simulation of a single stable star. The large contact discontinuity at the QS surface due to the density can be regarded as a special case of shock waves. In the context of shock-capturing hydrodynamic schemes, it is well known that low-order Godunov-type schemes <cit.> that are strongly dissipative will smear the shock, while high-order schemes will usually introduce spurious oscillations that result in the erroneous reconstruction of density near the surface. The error introduced in NS modeling is typically small, but serious for QSs and can cause significant violations of mass conservation. It is essential to preserve the positivity of density (and pressure) for a QS near its surface. 
A fine balance could be achieved by combining the high-resolution shock-capturing (HRSC) methods together with the so-called positivity-preserving (PP) Riemann solver, which was first introduced into the numerical relativity community by Radice, Rezzolla and Galeazzi in <cit.>. The main idea is that one could always build a finite-volume PP scheme by integrating a high-order solver with a first-order one under a more restrictive Courant-Friedichs-Lewy (CFL) condition as shown by Shu et al. <cit.>, since first-order Godunov-type schemes are known to have the PP property <cit.>. The PP scheme was originally designed to treat the low-density atmosphere of NSs. Better mass conservation and sharper surface density profiles were obtained. In this study, we applied the idea to QSs and achieved similar improvements, with a dust-like EOS designed for the atmosphere. As required by the continuity conditions, the atmospheric density may no longer be small but can be in the same order as the surface density. When we model the atmosphere by nearly pressureless dust particles, large truncation errors on the densities will not cause noticeable disturbance on pressure profiles. Our strategy to handle the sharp QS surface is different from those employed in recent simulations of QSs. In <cit.>, Zhou et al. modified the primitive-variable recovery procedure together with an addition of a thermal component to the cold MIT bag model EOS. On the other hand, Zhu and Rezzolla <cit.> introduced a thin crust described by a polytropic EOS at the QS surface. §.§ Oscillations of compact stars Pulsations of compact stars are potential sources of GWs and their detection can provide important information for the uncertain properties of supra-nuclear EOS inside a traditional NS. The detected signals may even provide evidence for the existence of deconfined quark matter, which could exist in the core of a hybrid star model or in the form of a pure strange QS <cit.>. The successful detection of GWs from merger events by the LIGO-Virgo Scientific Collaboration <cit.> has opened a new era of observational astronomy. Advanced LIGO, Virgo, KAGRA, and the next-generation detectors such as the Einstein Telescope <cit.> would have sufficient sensitivities in the high-frequency band (∼ kHz) to probe the GWs emitted from pulsating compact stars. The most important oscillation modes that would have interests in GW astronomy are the quadrupolar (l=2) fundamental f-mode, the first few overtones of the pressure p-modes, rotational r-mode and maybe the first spacetime w-modes  <cit.>. The f-mode is particularly relevant to the GW signals emitted from isolated and binary NS systems. On the one hand, the f-mode is expected to contribute strongly to the GWs emitted from a proto NS <cit.>. For a binary NS system, the dynamical tidal effects due to the coupling between the excited f-mode and tidal fields during the late inspiral phase are important to the dynamics and emitted GW from the system <cit.>. While the oscillation modes of nonrotating compact stars can be formulated and computed as eigenvalue problems (see <cit.> for a review), the situation for rapidly rotating stars is more complicated as the effects of rotation cannot be treated perturbatively. A standard approach to studying the oscillation modes of a rapidly rotating NS in general relativity is to suitably perturb the star and follow its subsequent evolution using a hydrodynamic code, and the mode frequencies are then identified from the Fourier spectra of the fluid variables. 
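As an illustration of this last step, the following minimal sketch shows how a mode frequency can be read off from a sampled time series (for instance the central rest-mass density); the variable names, the synthetic data and the sampling interval are illustrative only and are not taken from any specific production code.

import numpy as np

def peak_frequency(signal, dt):
    # Remove the equilibrium value and apply a window to reduce spectral leakage.
    x = (signal - np.mean(signal)) * np.hanning(len(signal))
    power = np.abs(np.fft.rfft(x))**2          # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs[np.argmax(power[1:]) + 1]     # skip the zero-frequency bin

# Synthetic example: a 2 kHz oscillation sampled every 0.01 ms for 20 ms.
dt = 1.0e-5
t = np.arange(0.0, 0.02, dt)
rho_c = 1.0 + 1.0e-3 * np.sin(2.0 * np.pi * 2000.0 * t)
print(peak_frequency(rho_c, dt))               # prints approximately 2000.0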
Due to the complexity of general relativity, such simulations are usually performed under the Cowling approximations <cit.> and the conformally flat assumptions <cit.>. An exception is the work by Zink et al. <cit.> in 2010 which investigated the f-modes of uniformly rotating polytropic stars using nonlinear hydrodynamics code in full general relativity. More recently, Krüger and Kokkotas (hereafter K&K) in <cit.> studied the f-modes of rapidly rotating NSs with realistic EOSs taking into account the spacetime dynamics in a linearized theory. In this work, we shall study the oscillation modes of rapidly rotating QSs in full general relativity for the first time. Focusing on the quadrupolar (l=2) f-mode, the three m=0,±2 modes are degenerate for a nonrotating spherical star, where m is the azimuthal quantum number. They will split when the star rotates, similar to the Zeeman splitting in quantum mechanics, though the splitting does not increase linearly with the rotation rate due to the high non-linearity of the system. The two non-axisymmetric (m ≠ 0) modes are usually called bar modes, and they are subject to various instabilities <cit.>. In this work, we shall determine the onsets of secular instabilities of rapidly rotating QSs driven by GW and viscosity dissipations. For the GW-driven Chandrasekhar-Friedman-Schutz (CFS) instability <cit.>, the onset occurs at a neutral point, where the counterrotating m=2 mode frequency σ_i observed in the inertial frame passes through zero (i.e., σ_i=0). In <cit.>, K&K found that the onset of CFS instability for a rotating NS occurs when the angular velocity Ω of the star Ω≈ 3.4 σ_0 for sequences of constant central energy density, where σ_0 is the f-mode frequency of the corresponding nonrotating model. The conclusion is approximately insensitive to the chosen EOS models in their study. We shall see in our work that rapidly rotating QSs also satisfy this result as well. The bar modes are also subject to another type of instability which is driven by viscosity. The instability sets in when the m=-2 corotating mode frequency σ_c in the rotating frame passes through zero (i.e., σ_c=0). The Newtonian analysis of the onset of this instability was studied by Chandrasekhar <cit.> for a sequence of uniformly rotating uniform-density Maclaurin spheroids. It was found that a new sequence of triaxial Jacobi ellipsoids branches off the Maclaurin sequence when the Newtonian ratio between the rotational kinetic energy T and gravitational potential energy |W| reaches the critical value (T/|W|)_ crit,Newt = 0.1375. Above this critical value, the Maclaurin spheroids are subjected to the viscosity-driven instability and migrate towards the Jacobi sequence by dissipating energy while conserving angular momentum. A Jacobi ellipsoid is particularly relevant to GW astrophysics, as its time-varying mass quadrupole moment will continuously emit GW radiation. In <cit.>, it is found that general relativity weakens the Jacobi-like bar mode instability. Furthermore, a stiff EOS with an adiabatic index as large as 2.5 is required for a 1.4 M_⊙ polytropic star to become unstable before the Keplerian limit <cit.>. The onset of the instability is thus expected to be difficult to achieve (if not impossible) by realistic rotating NSs. On the other hand, rotating QSs would be the most promising candidates to achieve the instability as they can generally support higher rotation rates <cit.> and are stiff enough to be approximated well by incompressible models <cit.>. 
The viscosity-driven instability of rotating QSs was already studied more than twenty years ago <cit.> and the instability onset was found to occur generally before the Keplerian limit. However, these studies were not based on the analysis of the oscillation modes, but by perturbing the stellar configuration during the iteration steps of the calculation of an axisymmetric equilibrium rotating star. If the perturbation grows during the iteration, then the star is declared to be unstable. In this work, we study for the first time the onset of the viscosity-driven instability by observing how the corotating mode frequency σ_c in the rotating frame passes through zero as the rotation rates of sequences of QSs approach the Keplerian limit. It should be noted, however, that there is no physical viscosity in our simulations as all stars are modeled by perfect fluids. While the oscillation modes were identified in our 3D simulations, the spontaneous breaking of axisymmetry due to the instability was not observed in the dynamical timescale. §.§ Universal relations In the last decade, the discoveries of various approximate EOS-insensitive universal relations of compact stars (see <cit.> for reviews) are not only of theoretical interest, but also of importance in astrophysical applications such as measuring masses and radii with X-ray pulse profile modeling <cit.>, analyzing GW signals to constrain the maximum mass of NSs <cit.>, and reducing the number of parameters in theoretical gravitational waveform models for binary NS inspirals <cit.>. In contrast to the mass-radius relations of compact stars, universal relations connecting different physical quantities are generally insensitive to EOS models to about 1% level. Many of the investigations done on universal relations only focus on traditional NSs, though it is known that bare QSs also satisfy some of the relations established by NSs <cit.>. Besides searching for new universal relations which may provide astrophysical applications, it is also interesting to test existing universal relations against different physics inputs such as thermal effects relevant to hot newborn NSs <cit.> or superfluid dynamics for cold NSs <cit.>. Attempts to find universal relations for the oscillation modes of compact stars dated back to the seminal work of Andersson and Kokkotas <cit.> more than twenty years ago, which was then followed by Benhar et al. <cit.> and Tsui and Leung <cit.>. While these earlier universal relations depend weakly on the EOS models to a certain accuracy, they are not as robust as those discovered later. The f-mode is now known to connect to the moment of inertia <cit.> and tidal deformability <cit.> by robust universal relations which are insensitive to the EOS models to within about 1% level, when the relevant physical quantities are suitably scaled (see, e.g., <cit.> for recent work). However, these studies were based on nonrotating NSs and QSs only. Recently, K&K <cit.> found three universal relations for the bar modes of rapidly rotating NSs using their newly developed code that takes into account spacetime dynamics in a linearized theory <cit.>. In this paper, we shall study whether their universal relations can also be applied to rapidly rotating QSs. We find that two of their relations can still be satisfied by bare QSs very well, but one of them is broken quite significantly already at moderate rotation rates. In addition to the f-mode, we also study the first p-mode of rotating QSs. 
For the class of QS models studied in this paper, we report fitting relations for the p-mode frequencies of both nonrotating and rotating stars. The plan of the paper is as follows. In Sec. <ref>, we discuss the formulation and numerical methods employed in this work. Sec. <ref> presents the numerical results, including tests that were performed to validate our simulations. Finally, we conclude the paper in Sec. <ref>. Unless otherwise noted, we adopt the unit convention c=G=M_⊙=1, where c is the speed of light, G is the gravitational constant and M_⊙ is the solar mass. § FORMULATION AND NUMERICAL METHODS Our simulations were performed using the publicly available code which is built on top of the computational infrastructure <cit.>. The spacetime is evolved using the standard CCZ4 formulation of the Einstein equations <cit.> implemented in the thorn code  <cit.> of . We choose the parameters of the CCZ4 formulation to be κ_2 = 0 and κ_3 = 0.5 in our simulations. As for κ_1, we typically choose it to be 0.05. Although its optimal value can vary for different models, physical results are insensitive to these choices as long as the constraint violation does not grow and invalidate the simulations. The general relativistic hydrodynamics equations are solved using the thorn code  <cit.>. The mesh-refinement driver  <cit.> is employed to provide an adaptive mesh refinement approach to increase resolution. The standard gauge conditions “1+log" slicing <cit.> and Gamma-driver shift condition <cit.> are adopted, where the damping coefficient which is introduced to avoid strong oscillations in the shift is chosen to be 1/M (with M being the gravitational mass). Furthermore, a numerical dissipation of the Kreiss-Oliger type <cit.> is introduced for spacetime variables and gauge quantities following the suggestion of <cit.>. The formulation and numerical setup used in our simulations are quite standard choices for general relativistic hydrodynamic modelings, such as in the cases of binary neutron star mergers. In order to simulate rapidly rotating QSs for sufficient duration in our study, we implemented a positivity-preserving Riemann solver to the thorn which will be discussed below. §.§ Positivity Preserving Riemann Solver The fluid-vacuum interface at the surface of a star in hydrodynamics modeling is subject to perturbations mainly due to truncation errors. These perturbations could be significant when the surface has a nonzero finite density, as in the situation for a bare QS. This is particularly the case for simulations using Cartesian coordinates where the grid points do not match well with the smooth stellar surface. When a free boundary condition is used, physical laws would tend to bring balance to the fluid-vacuum interface if the star itself is stable. The freely evolved vacuum would quickly encounter numerical problems as it may lead to non-physical negative densities. The problem is tackled in general by introducing an artificial atmosphere with a floor density ρ_f in hydrodynamic simulations. It is therefore necessary and desirable to preserve the positivity of certain hydrodynamical variables, essentially the density and the pressure, in a free evolution scheme. For the classical Euler equation, it has been shown that both the density and the pressure are guaranteed to be positive by a well-designed limiter when source terms are present <cit.>. However, in relativistic hydrodynamics, a rigorous strategy is still lacking for a generic EOS. 
Here we discuss the positivity preserving (PP) Riemann solver introduced in <cit.> which we implemented and proved to provide stable evolutions of rapidly rotating QSs in the study. Let us first give an outline of the conservative form of the relativistic hydrodynamics equations to define the variables for further discussion. The standard 3+1 Arnowitt-Deser-Misner (ADM) form <cit.> of spacetime metric is given by ds^2 = g_μνdx^μ dx^ν = (-α^2+β_iβ^i) dt^2 + 2β_i dt dx^i + γ_ij dx^i dx^j, where g_μν, α, β^i and γ_ij are the spacetime 4-metric, lapse function, shift vector, and spatial 3-metric respectively. The energy-momentum tensor of the matter inside the star is assumed to take the perfect fluid form <cit.> T_μν = ρ h u_μ u_ν + P g_μν, where ρ is the rest-mass density, u^μ is the fluid four-velocity, P is the pressure, h=1+ϵ+P/ρ is the specific enthalpy and ϵ is the specific internal energy. The equations of motion for the fluid are the conservation laws of baryon number and energy-momentum, ∇_μ(ρ u^μ) = 0 , ∇_μ T^μν = 0 , which are solved by the Valencia formulation <cit.> in the conservative form, ∂𝐔/∂ t + ∂𝐅^i/∂ x^i = 𝐒, with the conserved variables 𝐔 = [ D,S_j,τ] = √(γ)[ ρ W,ρ h W^2 v_j, ρ h W^2- P -ρ W ] , where γ is the determinant of γ_ij and the three-velocity is v^i = (u^i/u^t + β^i)/α; W=(1-v^iv_i)^-1/2 is the Lorentz factor. For three-vectors like v^i and β^i, their indices are raised and lowered by the 3-metric, e.g., v_i = γ_ijv^j. The fluxes are 𝐅^i = α [Dṽ^i,S_jṽ^i + √(γ)Pδ^i_j,τṽ^i+√(γ)P v^i], and the source functions are 𝐒 = α√(γ) [0,T^μν(∂_μ g_ν j-Γ^λ_μν g_λ j),α(T^μ 0∂_μlnα -T^μνΓ^0_μν)] , where ṽ^i=v^i - β^i/α and Γ^λ_μν are the 4-Christoffel symbols. To illustrate the idea of the PP scheme, we first consider a source-free scalar conservation law in one dimension <cit.> ∂ u/∂ t + ∂ f(u)/∂ x = 0. A theory <cit.> states that a high-order temporal integration scheme such as the Runge-Kutta (RK) scheme which is a convex combination of forward Euler steps will maintain the total variation diminishing (TVD) property and the positivity of u, provided this is true for the first-order forward Euler method. Methods of this kind are known as strong stability-preserving (SSP) or TVD methods. Considering a discretization scheme using the forward Euler method u_i^n+1-u_i^n/Δ t = f_i-1/2-f_i+1/2/Δ x, we can arrange it in the form u_i^n+1 = 1/2(u_i^+ + u_i^-) , where u_i^+ = u_i^n+2Δ t/Δ xf_i-1/2, u_i^- = u_i^n-2Δ t/Δ xf_i+1/2. Sufficiently, when both u_i^+ and u_i^- are positive, so will be u_i^n+1. It is proven that positivity is guaranteed for the first-order Lax-Friedrichs (LF) flux <cit.> with a more restrictive CFL-like condition Δ t/Δ x≤ 1/2c, where c is the largest speed of sound <cit.>. However, this low-order scheme is too dissipative to capture features of shocks. The principle of PP solver is to combine it with a high-order (HO) scheme for optimization f_i+1/2^PP = α f_i+1/2^HO + (1-α) f_i+1/2^LF, where f_i+1/2^HO is the HO flux, f_i+1/2^LF is the LF flux, and α∈[0,1] is an undetermined coefficient. In our simulations, we selected the Marquina solver <cit.> as our HO flux. When α=1, the PP scheme fully restores to the powerful HO flux, which is applied for the bulk of a star. On the other hand, around the fluid-atmosphere interface at the stellar surface, the PP scheme then searches for the optimal value of α, compromising accuracy for positivity. The first component of Eq. (<ref>), i.e., the continuity equation, is source-free (see Eq. 
(<ref>)) and the PP scheme is applicable. Its conserved variable is D=√(γ)W ρ, where γ and W are both strictly positive. Ensuring the positivity of D therefore ensures the positivity of ρ. The pressure-related term, the conserved energy density τ, however, has a complex source term in Eq. (<ref>). While the authors of <cit.> suggested enforcing a floor value on τ, empirically we found it adequate to apply the PP limiter also on τ. In our three-dimensional Cartesian-grid case, the CFL-like condition becomes Δ t/Δ x ≤ 1/(6c) and the PP flux f_i+1/2^PP is calculated component-by-component. Strictly speaking, this condition — which demands that each interface value separately be non-negative — is more restrictive than necessary, since only the positivity of their sum is actually required; the extra margin can also absorb a small negative contribution (if any) from the source term. Let the conserved variable u represent either D or τ. The value of α is determined as follows. If u_i+1^+(f_i+1/2^HO) is positive, α(u_i+1^+)=1, meaning that the original HO flux is used. Otherwise, α(u_i+1^+) = u_i+1^+(f_i+1/2^LF) / [u_i+1^+(f_i+1/2^LF) - u_i+1^+(f_i+1/2^HO)] , and similarly for α(u_i^-). The PP property of the LF scheme ensures that a solution α ≥ 0 always exists. We then take α(u) = min(α(u_i+1^+), α(u_i^-)), and the final value is the smaller one between α(D) and α(τ). This determines the PP flux f_i+1/2^PP of one component. In practice, a smaller Courant factor or a more conservative choice of α can always be adopted if an intolerably large violation of mass conservation indicates poor preservation of positivity. The implementation of the PP solver allows us to set the floor density of the atmosphere to ρ_f = 10^-18, which is about 10^-15 of the typical central density. In principle, the PP scheme allows the atmosphere to evolve freely down to densities as small as the round-off precision. In our typical simulations, the violation of total mass conservation near t = 2000 ≈ 9.85 ms for a Courant factor of 0.16 is of 𝒪(0.1%). More numerical tests are presented in Sec. <ref>. §.§ Equation of state §.§.§ For QSs and NSs As we are interested in extracting the oscillation modes of QSs excited by small perturbations, thermal effects such as shock heating play a negligible role in the simulations. We thus assume the stars are described by zero-temperature EOS models. In order to model bare QSs, we parameterize the linear approximation of the MIT-bag-model EOS <cit.> by the square of the speed of sound c_ss and the bag constant B as P = c_ss e - (1+c_ss) B , where P is the pressure for a given energy density e. It is convenient to further parameterize it by the ratio κ ≡ ρ/ρ_S between the rest-mass density ρ and the surface density at zero pressure, ρ_S = (1+c_ss)B/c_ss. The full EOS is then given by ρ = ρ_S κ, P = B(κ^{1+c_ss} - 1), e = B(κ^{1+c_ss}/c_ss + 1), and κ is related to the specific enthalpy h through h = (e+P)/ρ = κ^{c_ss}. In addition to the conventional choice c_ss=1/3 and B = B_60 ≡ 60 MeV/fm^3, hereafter denoted by "MIT1", we also include the following models: "MIT2" with c_ss=1 and B/B_60=3, "MIT3" with c_ss=2/3 and B/B_60=3/2, and "MIT4" with c_ss=1/2 and B/B_60=3/2, to cover a range of QS parameter space, as we shall also explore the robustness of some EOS-insensitive universal relations. We also include a nuclear-matter EOS model for NSs as a benchmark for comparison.
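For illustration, the parameterized bag-model relations above can be transcribed into a short routine. This is only a minimal sketch in code units (with B supplied in whatever units are being used), not the EOS module of our evolution code.

```python
import numpy as np

def bag_model_eos(kappa, c_ss, B):
    """Parameterized linear bag-model EOS, with kappa = rho / rho_S.

    rho = rho_S * kappa,  P = B (kappa**(1+c_ss) - 1),
    e   = B (kappa**(1+c_ss)/c_ss + 1),  h = kappa**c_ss,
    where rho_S = (1 + c_ss) B / c_ss is the surface rest-mass density.
    """
    rho_S = (1.0 + c_ss) * B / c_ss
    rho = rho_S * kappa
    P = B * (kappa**(1.0 + c_ss) - 1.0)
    e = B * (kappa**(1.0 + c_ss) / c_ss + 1.0)
    h = kappa**c_ss
    return rho, P, e, h

# Internal consistency check: h = (e + P)/rho must hold for any kappa >= 1,
# here for the "MIT1"-like choice c_ss = 1/3 (B in arbitrary units).
kappa = np.linspace(1.0, 10.0, 5)
rho, P, e, h = bag_model_eos(kappa, c_ss=1.0 / 3.0, B=1.0)
assert np.allclose(h, (e + P) / rho)
```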
Instead of using the original tabular EOS data, we use a piecewise polytropic model <cit.> to represent analytically the SFHo EOS <cit.>, which was not included in the study of the universal relations of rapidly rotating NSs by K&K <cit.>. In particular, we construct a five-piece model so that the pressure P and specific internal energy ϵ are everywhere continuous and satisfy P(ρ) = K_iρ^Γ_i, ϵ(ρ) = a_i+K_i/Γ_i-1ρ^Γ_i-1, inside the range of rest-mass density ρ_i-1≤ρ < ρ_i (i=1,2,3,4). The parameters of our EOS models are summarized in Tables <ref> and <ref> and the corresponding mass-radius relations for nonrotating QSs and NSs are shown in Fig. <ref>. The mass-radius relations of MIT bag models differ qualitatively from that of the SFHo EOS due to the well-known fact that QSs are self-bound objects. §.§.§ For the atmosphere of QSs The surface of a bare QS modeled by the MIT bag model is identified by the vanishing pressure just like an ordinary NS, but it has a finite density ρ_S which is of the same order as the central density and it requires novel treatments in dynamical modelings. In traditional hydrodynamic simulations for NSs, a low-density “atmosphere" is introduced to fill up all vacuum space outside the stars, such that a fluid element is reset to become part of the atmosphere when its density evolves to become smaller than a prescribed value of the atmospheric density or even negative. This approach works well in highly dynamical situations such as inspiraling binary stars when the stars move across the computational grid in a short time scale and the effects of the low-density atmosphere become relatively unimportant. However, in studying the oscillations of stable stars whose vacuum-fluid interface may move slowly, excessive oscillations may cause the star to extract (lose) mass from (to) the atmosphere and violate the conservation of mass and momentum. As the effects accumulate and amplify, they ultimately destabilize the evolution <cit.>. This situation only gets worse for QSs when the vacuum-fluid density discontinuities are many (ten) orders of magnitude larger than those in traditional NS cases. Immediately after starting the simulation, a large violation of mass conservation would be observed, and it soon rises up to the order of total mass, completely destroying the simulation. Furthermore, during a numerical simulation it can happen that fluid elements on the surface of a QS may evolve to a density smaller than the surface density ρ_S, which is defined by the vanishing pressure of the MIT bag model. As their densities (∼ρ_S) are typically many orders of magnitude larger than the density of the atmosphere, they cannot simply be treated as part of the atmosphere. This then poses a question of how to evolve such fluid elements and with what type of EOS so that the dynamics near the surface of a QS can be modeled correctly. Instead of arbitrarily modifying some atmospheric elements during evolution, we want to maintain the balance between inertial and gravitational forces on all fluid elements and enforce the conservation law around the vacuum-fluid interface. In other words, a scheme allowing for a free evolution of the atmosphere is required. To achieve this purpose, we introduce here a dust-like EOS to model fluid elements near the surface of a QS, whose rest-mass density is not necessarily small but whose pressure is always close to zero. 
Even though the truncation errors from finite differencing may cause a large density dislocation, its disturbance to the pressure profile would be minimal. The gravitational pull of the star tends to bring a dislocated fluid element back to its equilibrium position. In practice, after importing the initial data of a bare QS into the computational domain, an atmosphere with a floor rest-mass density ρ_f is set outside the star. In our simulations, the value of ρ_f is chosen to be 10^-18, about 10^-15 times the central rest-mass density of the star. During the evolution, we set a pressure cutoff or equivalently a density cutoff slightly larger than the surface density following our parametrization of the MIT bag model EOS: ρ_cutoff = ρ_S(1+ξ), P_ cutoff ≈ B(1+c_ss)ξ≈ c_ssρ_ cutoffξ, where ξ is small. The specific enthalpy and energy are given by h=(1+ξ)^c_ss≈ 1+ c_ssξ and ϵ≈ c_ssξ^2/2, respectively. It should be noted that the effective adiabatic index diverges near the stellar surface Γ=dln P /dlnρ = (1+c_ss)κ^1+c_ss/κ^1+c_ss-1≈1/ξ, meaning the surface is theoretically infinitely stiff. If the rest-mass density is below the cutoff density ρ_ cutoff, we will use the following EOS for the fluid element P(ρ) = c_ssξρ , e(ρ) = (1+c_ss/2ξ^2)ρ . Both the specific energy and enthalpy are enforced to be continuous across the surface discontinuity. The first law of thermodynamics, which requires de/dρ=h, is violated to the order of 𝒪(ξ). By choosing a small ξ, we expect our model to capture the dynamics near the surface of a bare QS, and eventually the error is dominated only by the finite-differencing error. We used ξ=10^-12 in our simulations. The cutoff pressure is then also about 10^-12 of the central pressure, and the surface specific internal energy ϵ∝ξ^2 is below the roundoff precision of the double precision floating point numbers. As we shall see in the following, the introduction of this dust-like EOS near the surface of a QS enables us to determine the radial oscillation modes of QSs accurately. §.§ Other numerical issues In addition to the Riemann solver (see Sec. <ref>), which determines the numerical flux at the cell interfaces, one also needs a reconstruction method to interpolate the fluid variables. In our simulations, we implemented the classic piecewise parabolic method (PPM) reconstruction method <cit.>. It should be pointed out that the original PPM scheme applies a steepening procedure for density discontinuity only if the following condition is satisfied (see Eq. (3.2) in <cit.>): Γ K_0 |ρ_j+1-ρ_j-1|/min(ρ_j+1,ρ_j-1)≥|P_j+1-P_j-1|/min(P_j+1,P_j-1) , where K_0 is a constant parameter. This condition determines whether the j-th zone can be treated as being inside a discontinuity. However, this criterion does not work properly for QSs due to the divergence of the effective adiabatic indices Γ≈ 1/ξ of QSs near the surface. As a result, any pair of constants Γ and K_0 in the PPM scheme would not properly detect discontinuities near the surface of a QS. In our simulations, we simply turned off this condition and always allowed the steepening procedure for QS models. This adjustment can sharpen surface density profiles and prolong our simulations. We have also tested the fifth-order monotonicity preservation scheme (MP5) <cit.> and found no advantage regarding mass conservation, as we shall discuss in Sec. <ref>. 
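Summarizing the surface treatment described above in code form, the sketch below simply switches from the bag-model branch to the nearly pressureless dust-like branch at ρ_cutoff = ρ_S(1+ξ). It is meant only to illustrate the logic, under the parameterization quoted in the text, and does not reproduce the actual thorn implementation.

```python
def qs_pressure_with_atmosphere(rho, c_ss, B, xi=1.0e-12):
    """Sketch of the QS surface treatment: bag-model EOS above the cutoff,
    nearly pressureless dust-like branch below it.

    rho_cutoff = rho_S (1 + xi); below it we take P = c_ss * xi * rho
    (with e = (1 + c_ss xi**2 / 2) rho), so the pressure stays positive
    while the fluid near the surface behaves essentially like dust.
    """
    rho_S = (1.0 + c_ss) * B / c_ss
    rho_cutoff = rho_S * (1.0 + xi)
    if rho >= rho_cutoff:
        kappa = rho / rho_S
        return B * (kappa**(1.0 + c_ss) - 1.0)
    return c_ss * xi * rho
```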
In this study, we do not attempt to extract the gravitational wave signals emitted from oscillating stars using the Newman-Penrose formalism, which is routinely employed in binary neutron star simulations. The outer boundary of the computational domain in our simulations can then be set closer to the stellar surface. Nevertheless, we found that the quality of the hydrodynamic modeling of a rapidly rotating QS, such as mass conservation, can be affected strongly if the outer boundary is too close to the stellar surface. In our simulations, we employ three refinement levels with a 2:1 refinement ratio for successive levels provided by the mesh refinement driver in order to maintain enough grid resolution inside the star, while the outer boundary can be put far away from the stellar surface to reduce its effects. The first refinement boundary is at a radius of 1.2, the second at 2.4, and the outer boundary at 4.8 times the equatorial radius of the star. In Sec. <ref>, numerical results of three spatial resolutions Δ x=0.12 (≈ 177 m), 0.16 (≈ 236 m), and 0.24 (≈ 354 m), in the case of a nonrotating QS are compared, where Δ x is the grid size for the finest level. Since we could already extract the frequencies of radial oscillation modes of QSs up to the fifth overtone using Δ x =0.24, we produced our results with the default resolution Δ x=0.16. For a typical slowly rotating star, its radius will cover about 50 computational grids. Fast rotation can cause a large deformation of the star, and the ratio between the polar and equatorial radii can be below 0.5 for some extreme models. For these cases, there are about 30 and 60 cells along the polar and equatorial radii, respectively. To save computational resources, reflection symmetry about the equatorial plane is assumed and interesting modes (l=|m|=2) will not be affected by this choice. Octant symmetry is applied when bar modes are not concerned. In short summary, our main numerical results in this work are obtained by using the PP Riemann solver with the PPM reconstruction method. The time update is performed using the standard RK4 integrator. §.§ Initial data and perturbations We use the numerical code in the  <cit.> library to construct uniformly rotating NS and QS models for our study. The code uses a multi-domain spectral method <cit.> and has been used for calculating rapidly rotating QSs <cit.>. Sequences of constant baryon mass and constant central energy density are produced, whose corresponding nonrotating configurations are labeled by “Seqs" data points in Fig. <ref>. An important dimensionless parameter to characterize a rotating compact star is its spin parameter j=J/M^2, where J and M are the angular momentum and gravitational mass of the star, respectively. In Fig. <ref>, the spin parameters of our constant baryon mass sequences are plotted against the angular velocity Ω, normalized by the corresponding maximal rotation limit Ω_ max. There is a gap separating the band of QS models from the two NS sequences in the figure. For the same baryon mass, the spin parameters of QSs are significantly larger than the NS counterparts, especially when Ω / Ω_ max approaches unity. 
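As an aside, the unit conversions quoted throughout (e.g., Δx = 0.16 ≈ 236 m and t = 2000 ≈ 9.85 ms) follow directly from the geometrized units c = G = M_⊙ = 1; the short snippet below, using standard SI constants, makes the conversion factors explicit.

```python
# Geometrized units with c = G = M_sun = 1: one code unit of length is
# G M_sun / c^2 and one code unit of time is G M_sun / c^3.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30     # SI values

L_unit = G * M_sun / c**2        # ~1.48e3 m
T_unit = G * M_sun / c**3        # ~4.93e-6 s
rho_unit = M_sun / L_unit**3     # ~6.2e20 kg m^-3

print(0.16 * L_unit)             # finest grid spacing 0.16 -> ~236 m
print(2000.0 * T_unit * 1.0e3)   # evolution time 2000 -> ~9.85 ms
```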
It should be pointed out that, in contrast to the situation for rotating NSs, the maximum angular velocity Ω_ max of a QS with a given baryon mass is generally higher than the Keplerian limit Ω_K by about 2%, a characteristic feature of self-bound objects that a QS can further gain angular momentum by slightly slowing down its rotation but increasing its oblateness (i.e., the moment of inertia) before reaching the Keplerian limit <cit.>. While the two NS sequences with a 10% difference in baryon mass match each other very well, the QS sequences for a given EOS model are seen to depend more sensitively on the mass. In particular, the spin parameter for QSs increases as the baryon mass decreases. As pointed out in <cit.>, the spin parameter of QSs can even be larger than the Kerr bound j=1 for rotating black holes. We study two such models in the MIT1 2M_⊙ sequence, which are also degenerate models in terms of the rotational frequency, as shown in the inset of Fig. <ref>. Clearly, the maximal rotational frequency is a turning point. On the other hand, there is an upper bound of j ∼ 0.7 for NSs, the value of which is relatively insensitive to EOS models <cit.>. Similarly, the sequences of constant central energy density are plotted in Fig. <ref>. In the figure, we plot j against the ratio Ω / σ_0, where σ_0 is the f-mode frequency of the corresponding nonrotating star for each sequence. In contrast to Fig. <ref>, there is now no qualitative difference between NS and QS, except that the latter can reach higher values of j and Ω / σ_0. The reason to normalize Ω by σ_0 is due to the fact that the f-mode frequencies of NSs for these sequences establish a universal relation with Ω / σ_0 <cit.>. On the other hand, for the sequences of constant baryon mass, the f-mode frequencies of NSs observed in the rotating frame are connected to Ω / Ω_K by another universal relation. We shall study whether the f-modes of rotating QSs for these two types of sequences still satisfy the universal relations found for NSs. Every data point in Figs. <ref> and <ref> represents a rotating star model we perturbed and evolved dynamically to t=2000 (≈ 9.85 ms), and their oscillation modes were then extracted for further analysis. To excite the quadrupolar non-axisymmetric (l=|m|=2) oscillation modes of rotating stars, we add initial velocity perturbations following the suggestion of <cit.>. After importing the initial data for an equilibrium rotating star to the evolution code, we perturb the star by adding the velocity perturbations v^θ = v^r sin 2θ (cos 2ϕ+sin2ϕ), v^ϕ = -2v^rsinθ(sin2ϕ-cos2ϕ), where the radial component v^r controls the perturbation strength which we set to be v_0sin[π r/2r_s(θ)] for some small values of v_0, and r_s(θ) is the estimated coordinate radius along the θ direction. Although these perturbation functions are not the exact eigenmodes of rapidly rotating stars, they can effectively excite the fundamental f-modes and also the first pressure p-modes. § NUMERICAL RESULTS §.§ Nonrotating QSs Before studying the oscillation modes of rotating QSs, let us first present various tests for nonrotating models to demonstrate that our numerical method is capable to provide a stable and accurate evolution for a bare QS. In particular, we focus on a nonrotating QS with a gravitational mass 1.71 M_⊙ and a radius 11.1 km described by the MIT1 EOS. This star corresponds to the nonrotating configuration of a constant baryon mass (2 M_⊙) sequence in our study of rotating QSs. 
As discussed before, we choose the PP Riemann solver with PPM reconstruction method (hereafter PP+PPM) as our default hydrodynamic scheme. Here we first compare the performance of the PP+PPM scheme with other standard Riemann solvers, the Harten-Lax-van Leer-Einfeldt (HLLE) <cit.> and Marquina <cit.> schemes, and the fifth-order monotonicity preserving (MP5) <cit.> reconstruction method. In Fig. <ref>, we plot the percentage changes of total baryon mass against time for the simulations using different combinations of the Riemann solvers and reconstruction methods. The simulations were performed with the same grid resolution Δ x=0.16 at the finest refinement level. It is seen that the Marquina+PPM scheme loses a large percentage of mass immediately at the beginning of the evolution. Similarly, the HLLE+MP5 scheme also has a large decrease in mass initially and the mass loss increases to 5% by about 1.3 ms. Replacing the fifth-order MP5 method with a lower order (third-order) PPM method can improve the mass conservation as can be seen by comparing the HLLE+MP5 and HLLE+PPM schemes in the figure. The HLLE+PPM scheme gradually loses 3% of total mass by about 10 ms. In fact, we noticed that a higher-order reconstruction method actually causes more spurious oscillations near the sharp surface discontinuity. By comparison, it is seen clearly that our implemented PP solver, whether using the MP5 or PPM reconstruction method, performs much better than the other solvers and can conserve the total mass to within 1% up to 10 ms. While the PP+MP5 run still suffers an initial drop in mass, the PP+PPM scheme is nearly a flat line. The numerical results presented in this paper are obtained by the PP+PPM scheme hereafter. An important quantity to monitor the quality of a numerical-relativity simulation is the Hamiltonian constraint. A small constraint violation is required for any trustworthy simulation. In Fig. <ref>, we plot the L^2 norm of the Hamiltonian constraint against time for the evolution of the nonrotating QS using three different resolutions Δ x = 0.24 (low), 0.16 (medium), and 0.12 (high). Thanks to the constraint damping and propagation properties of the CCZ4 formulation, the Hamiltonian constraint violation quickly drops to a steady plateau of 𝒪(10^-6) even in the low-resolution run. The figure shows that the violation decreases with increasing resolution. The inset of Fig. <ref> plots the stable plateau values ||H||_s of the constraint violation against Δ x, and demonstrates a linear-order convergence for ||H||_s. After checking the stability and accuracy of the evolution, we now turn to the oscillations of the nonrotating QS model. While the star is a static equilibrium configuration initially, finite-differencing errors can trigger the radial oscillation modes during the evolution. The frequencies of the oscillation modes can then be obtained by performing Fourier transforms (FT) of physical quantities such as the density and velocity. For nonrotating stars, the oscillation mode frequencies can alternatively be computed using a perturbative eigenmode analysis. Comparing the mode frequencies obtained from the simulation with the known eigenmode frequencies is an important test of the hydrodynamic simulation. For a general rotating star, we use the module in the open source visualization and data analysis software  <cit.> to extract the physical quantities inside the star at data points along the line at polar angle θ=π/4 on the x-z plane, where z is the rotation axis. 
The values of the rest-mass density ρ at the data points are added up to produce a Fourier spectrum of ρ. Similarly, we also consider the Fourier spectra of the velocity components defined by v^r=(v^x+v^z)/√(2), v^θ = (v^x-v^z)/√(2), and v^ϕ = v^y, where (v^x , v^y , v^z) are the velocity components obtained in our Cartesian grid simulations. In practice, we found that the Fourier spectrum of a physical quantity obtained from the superposition of multi-data points could improve the quality of the spectrum and was helpful in the mode identification. Let us first study the radial oscillation modes of the nonrotating QS model discussed above in Fig. <ref> as a test for our simulations. In Fig. <ref>, we show the Fourier spectra of density FT(ρ) obtained from the evolutions using three different grid resolutions. The vertical dashed lines in the figure stand for the frequencies of the radial oscillation modes, ranging from the fundamental mode F_0 to the tenth overtone F_10, determined by the perturbative method. It is seen that our simulation results can produce Fourier peaks matching the dashed lines very well. The amplitudes of the spectra are dominated by the fundamental mode F_0 as expected. Higher overtones with much smaller amplitudes are still identifiable in the spectra. The Cartesian grid cannot match the stellar surface exactly, and hence many overtones can be excited due to numerical perturbations near the surface. Being able to identify the high-frequency overtones would be a good criterion for a proper simulation of a stable QS. In order to show clearly the Fourier peaks of the high overtones, we zoom in the frequency range from the F_3 to F_10 overtones in the inset of Fig. <ref>. The high-resolution result (black curve) aligns very well with all overtones up to the ninth overtone with frequency F_9=43808 Hz. Near the tenth overtone with frequency F_10=48215 Hz, a small bump still exists at the correct position. It is seen that the peaks of the medium-resolution result (red curve) can also match up to the ninth overtone. However, the low-resolution result (blue curve) can only recover up to the fifth overtone with frequency F_5=26138 Hz as the run suffers from more numerical dissipation. Let us end this subsection by discussing how we determined the mode frequencies from the Fourier spectra quantitatively. Obtaining accurate mode frequencies from a Fourier spectrum is important to our study of the universal relations of rotating QSs. In Fig. <ref>, the QS is evolved to t≈ 9.85 ms for each resolution run and the resolution in the frequency of the Fourier spectrum is inversely proportional to this evolution time, meaning that the frequency resolution is 101.5 Hz. Fig. <ref> is the same plot of FT(ρ) as Fig. <ref>, but focuses around the fundamental F_0 mode. It is seen that the width of the peak decreases as the resolution increases, and the medium (red line) and high (black line) resolution results agree very well. To extract the mode frequency from the high-resolution result, we fit a quadratic curve around the peak and the mode frequency is approximated by the position at which the slope of the curve passes through zero as illustrated by the smooth solid line in Fig. <ref>. We obtain the fundamental mode frequency 2745 Hz from the high-resolution run using this method, which differs from the known normal-mode value F_0=2778 Hz by about 1.2%. 
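The post-processing step just described — locating a peak of the Fourier spectrum within a chosen band and refining it with a local quadratic fit — can be sketched as below for a uniformly sampled time series. This is an illustration of the procedure, not the exact analysis script we used.

```python
import numpy as np

def mode_frequency(signal, dt, f_lo, f_hi):
    """Estimate a mode frequency from the peak of |FFT| within [f_lo, f_hi].

    The discrete spectrum has resolution 1/(N dt); fitting a parabola through
    the peak bin and its two neighbours refines the estimate below that
    resolution (the vertex is where the slope of the fit vanishes).
    """
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freq = np.fft.rfftfreq(len(signal), d=dt)
    band = (freq >= f_lo) & (freq <= f_hi)
    k = np.flatnonzero(band)[np.argmax(spec[band])]   # index of the peak bin
    a, b, c = spec[k - 1], spec[k], spec[k + 1]
    shift = 0.5 * (a - c) / (a - 2.0 * b + c)         # vertex offset in bins
    return freq[k] + shift * (freq[1] - freq[0])
```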
In general, the radial oscillation modes are sensitive to the stellar profile, and the capability of our simulations to recover the correct mode frequencies to high overtones accurately suggests that we have modeled the sharp surface of the QS properly. §.§ Stability of rotating QSs To demonstrate that we can perform stable and accurate simulations of rapidly rotating QSs, we first show in Fig. <ref> the L^2-norm of the Hamiltonian constraint violations for a sequence of 2M_⊙ baryon mass MIT1 QSs with rotational frequencies ranging from 300 to 1200 Hz, where the maximal rotation limit is near 1228 Hz. The runs were performed with the same grid resolution Δ x = 0.16, which is the default resolution we used for obtaining the oscillation modes of rotating stars. Similar to what we have seen for nonrotating QSs, the constraint violations quickly drop to stable plateau levels at the beginning of the simulations and remain flat until t ≈ 9.85 ms for all models including the most rapidly rotating one. Although the stable plateau values of the constraint violation get larger for faster rotation, it still maintains a relatively small value below 10^-5 and does not grow noticeably even for the 1200 Hz model, which is close to the maximal rotation limit of the sequence. One important challenge for us is to demonstrate our ability to simulate the sharp surface of rapidly rotating QSs for a long duration. Fig. <ref> compares the snapshots of density profiles at t≈ 9.78 ms for the 300 Hz and 1200 Hz rotating QSs considered in Fig. <ref>. The top panels in the figure show the rest-mass densities of the stars in the first quadrant of the x-z plane, where the z-axis is the rotation axis. The large color contrast from the large density gradient and the imperfect matching of the Cartesian grids to the star surfaces result in visible serrate edges at the surfaces. While the slowly rotating 300 Hz model (left panel) still maintains a spherical shape very well, the 1200 Hz model (right panel) is flattened at the pole and develops an oblate shape due to rapid rotation. It can be seen that some tiny amount of matter is ejected from the surface under the influence of centrifugal force near the equatorial plane. Nevertheless, the baryon mass of this rapidly rotating model remains very well conserved to within 0.1% error by the end of the simulation at t≈ 9.85 ms, which is equivalent to about 12 rotation periods. This mass-shedding effect would unavoidably occur for rapidly rotating models close to the Keplerian limit. The effect would damp the stellar pulsations and the amplitudes of the oscillation modes would gradually decrease for rapidly rotating stars <cit.>. It also affects the sharpness of the surface discontinuity near the equator. The dislocation of mass elements from their balance positions caused by the truncation brings constant disturbances to the stars and excites many oscillation modes. To clearly demonstrate the sharpness of the stellar surface, the middle and bottom panels in Fig. <ref> show the density profiles of the two models along the x axis at t=0 and t≈ 9.78 ms on the equatorial plane in linear (middle panels) and logarithmic (bottom panels) scales, respectively. The MIT1 EOS has a surface density ρ_S ≈6.93×10^-4 in the code units, which drops to the floor density ρ_f=10^-18 in the slowly rotating model over one cell, but only drops three orders of magnitude for the rapidly rotating model due to the mass-shedding effect. 
Along other directions, the surface density would drop to the floor density over one or two cells. To check the stability of the rotational velocity profile, we plotted v^y along the θ=π/4 direction on the x-z plane (ϕ=0) in Fig. <ref> for the 1200 Hz rapidly rotating model. The profiles at t=0, 4.93 ms, and 9.85 ms overall agree very well, though small oscillations of the star surface across four grid cells can be seen. Figs. <ref> and <ref> clearly demonstrate the stability of the density and velocity profiles of rapidly rotating QSs in our simulations. In particular, the sharp density jump at the stellar surface can also be maintained very well. §.§ Oscillation modes of rotating QSs In this section, we will focus on a sequence of MIT1 QS models with the same constant baryon mass 2M_⊙, but different rotational frequencies. The sequence can be considered as a quasi-equilibrium evolution of a rapidly rotating QS being slowed down to lower rotational frequencies if angular momentum is effectively transported away. By studying the Fourier spectra of the fluid variables of these stars, such as the rest-mass density ρ and three-velocity components v^r, v^θ, and v^ϕ, we can extract their oscillation mode frequencies. §.§.§ Fourier spectra and mode selectivity In perturbation theory, when expanded in spherical harmonics Y_lm, each oscillation mode is associated to a pair of indices (l, m). For a spherical nonrotating star, the different orders of m are degenerate for a given l and it is enough to consider the m=0 mode. For the l=2 quadrupolar modes that we focus in this work, the degeneracy is broken by rotation and the bar modes (m=±2) split from the axisymmetric (m=0) mode, similar to the Zeeman effect in quantum mechanics. This phenomenon is clearly observed in Fig. <ref> which shows the Fourier spectra of density FT(ρ) and velocity components FT(v^r), FT(v^θ), and FT(v^ϕ) for QS models with rotational frequencies 300 Hz (first row), 450 Hz (middle row), and 600 Hz (bottom row) of the chosen sequence. The positions of the f-mode (f_0 =1897 Hz), the first pressure mode (p_0=7868 Hz), and the fundamental radial mode (F_0 = 2778 Hz) for the nonrotating configuration of the sequence are labeled by the gray dashed lines. In each panel, the f_0 and p_0 gray lines are each sandwiched by two sharp Fourier peaks labeled by the red and blue dashed lines, respectively. The separation between the red (blue) lines decreases and converges to the f_0 (p_0) gray line as the rotation rate decreases towards zero. These peaks are the nonaxisymmetric m=±2 modes split from the f_0 and p_0 modes. The peak labeled by the left red line (and similarly for the left blue line) is the counterrotating m=2 mode, while the right red line is the corotating m=-2 mode. Our initial velocity perturbation is chosen to excite the m=±2 modes strongly, but not the axisymmetric m=0 modes. However, as the rotation rate increases, small peaks corresponding to the m=0 modes near the positions of f_0 and p_0 start to appear. The fundamental quasi-radial mode is also strongly excited in our simulations as can be seen by the peaks near the F_0 lines. It is not so surprising as radial oscillation modes can be easily excited due to finite-differencing errors as we have already seen for nonrotating QSs. The frequency of the quasi-radial mode increases slightly with the rotation rate as can be seen from the spectra of ρ and v^r. 
Nevertheless, it is still well approximated by its nonrotating counterpart F_0 even for the model rotating at 600 Hz, as the ratio between the polar and equatorial radii of this star is about 0.92 and the rotation effect is relatively small. Another interesting feature of the spectra shown in Fig. <ref> is a selective effect of the appearance of different modes and their amplitudes in different spectra. For instance, the fundamental quasi-radial mode establishes strong peaks in the spectra of ρ and v^r, but not for v^θ and v^ϕ, which may already be expected. Similarly, the peaks associated to the m=0 p-mode can be observed in the spectra of v^θ for the 450 Hz and 600 Hz models, while the corresponding peaks in the other spectra have much smaller amplitudes. §.§.§ Onsets of secular instabilities As the rotation rate increases, the peaks of the interesting m=±2 bar modes become less distinct and their amplitudes can even be smaller than the m=0 modes in some of the Fourier spectra. Fig. <ref> plots the Fourier spectra of the same sequence as in Fig. <ref>, but for four models with higher rotational frequencies up to 1225 Hz, which is very close to the maximum rotational frequency (1228 Hz) of this sequence. In each panel, the red (blue) lines still track the m=±2 f-mode (p-mode), though the gray lines for f_0, F_0, and p_0 are not shown. The green line tracks the position of twice the rotation frequency of the star and its role will be explained below. It is clear that the Fourier spectra in Fig. <ref> shows some qualitative differences comparing to those for the slower rotating models considered in Fig. <ref>. First of all, starting from the 1100 Hz model, the fundamental quasi-radial mode now has large amplitudes not only in the spectra of ρ and v^r, but also those of v^θ as can be seen from the large peaks between the red and blue dashed lines in these spectra. As the rotation rate and oblateness of the star increase, the quasi-radial modes couple v^θ and v^r, however, this coupling only becomes strong when the rotation rate is above 1000 Hz, which is close to the maximal rotation rate 1228 Hz of this sequence. In addition, the axisymmetric m=0 f- and p-modes are also excited to relatively large amplitudes comparing to the case for slower rotating models. By tracking the mode positions and comparing the amplitudes in different spectra, the m=± 2 modes can still be identified. In contrast to Fig. <ref>, the m=0 p-mode, which is identified to be the peak between the two blue lines in each panel, establishes larger amplitudes than the m=±2 counterparts (blue lines) in the ρ, v^r, and v^θ spectra when the rotation frequency is above 1000 Hz. However, the frequencies of the m=0 modes are not sensitive to the rotation rate. Let us now focus on the m=± 2 f-mode (red dashed lines) and see how the onsets of secular instabilities for them are identified. As already been seen in Fig. <ref>, the frequency of the counterrotating m=2 mode, which is tracked by the left red line in each panel, decreases as the rotation rate increases. However, further increasing the rotation rate from 900 Hz as illustrated in Fig. <ref> will push the mode to cross zero and become negative. Since the Fourier spectrum has even symmetry, the counterrotating mode appears to be “reflected" by the zero point and then shifts towards the right. The reflection occurs when the rotation frequency is at about 1000 Hz, which stands for the onset of the CFS instability (see Sec. <ref>) for this sequence. 
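The "reflection" is simply a consequence of the even symmetry of the Fourier amplitude of a real-valued time series: a mode whose inertial-frame frequency has become negative produces a peak at its absolute value. The toy snippet below (with an arbitrary made-up frequency, unrelated to any stellar model) illustrates the effect.

```python
import numpy as np

dt, n = 1.0e-5, 10000
t = np.arange(n) * dt
signal = np.cos(2.0 * np.pi * (-300.0) * t + 0.3)   # a "mode" at -300 Hz
freq = np.fft.rfftfreq(n, d=dt)
spec = np.abs(np.fft.rfft(signal))
print(freq[np.argmax(spec)])   # the peak appears at +300 Hz
```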
For the m=-2 corotating mode, which is tracked by the right red line in each panel, its frequency increases initially along the sequence and then starts to decrease when the rotation rate increases above 900 Hz. We find that this sequence passes the viscosity-driven instability point (see Sec. <ref>) when the rotation rate is about 1200 Hz. This instability sets in when the frequency σ_c of the corotating mode in the rotating frame goes through zero. Since σ_c is related to the inertial-frame mode frequency σ_i and the angular velocity Ω of the star by σ_c = σ_i + m Ω/2 π, the instability sets in when σ_i = 2 Ω/(2π) (for m=-2). In Fig. <ref>, the quantity 2Ω/(2π) is tracked by the green line in each panel, and hence the instability sets in when the right red line crosses the green line, as illustrated in the 1200 Hz model in the figure. As pointed out in Sec. <ref>, the viscosity-driven instability of rotating QSs was studied before by perturbing the stellar configuration during the iteration steps in the construction of an axisymmetric equilibrium rotating star <cit.>. Our study represents the first investigation based on the analysis of the oscillation modes. §.§ Universal relations of f-modes §.§.§ Comparison to the universal relations for NSs K&K <cit.> recently proposed three universal relations for the l=|m|=2 f-modes of rapidly rotating NSs. Here we shall study whether rapidly rotating QSs also satisfy these relations. We first compare our extracted mode frequencies from a total of 161 rotating NS and QS models with their relation given by Eq. (6) in <cit.>, which relates the scaled mode frequency σ̂_i≡M̅σ_i/kHz in the inertial frame to the scaled angular velocity Ω̂≡M̅Ω/kHz and the effective compactness η_45≡√(M̅^3/I_45) by σ̂_̂î = (c_1+c_2Ω̂+c_3Ω̂^2) + (d_1+d_3Ω̂^2)η_45, where M̅≡ M/M_⊙ and I_45≡ I/(10^45 g·cm^2) are the star's scaled gravitational mass and moment of inertia. The fitting coefficients c_i and d_i are given by (c_1,c_2,c_3)=(-2.14,-0.201,-7.68×10^-3) and (d_1,d_2,d_3)=(3.42,0,1.75×10^-3) for the counterrotating branch. For the corotating branch, (c_1,c_2,c_3)=(-2.14,0.220,-14.6×10^-3) and (d_1,d_2,d_3)=(3.42,0,6.86×10^-3). As each branch of data lies on a surface in the three dimensional η_45-σ̂-Ω̂ parameter space, to have a clear visualization of the data, we define Σ̂_i ≡σ̂_i-c_1-(d_1+d_3Ω̂^2)η_45 , and plot it against Ω̂ in Fig. <ref>. In the figure, the lower (upper) branch of data consists of the counterrotating (corotating) modes. The predictions from Eq. (<ref>), which is Eq. (6) in <cit.>, are labeled by the gray lines. It is noted that the nuclear matter SFHo EOS was not used in <cit.>, and hence our NS data can serve as an independent check for the universal relation. It is seen that the f-modes of rapidly rotating QSs can also be described by this relation very well. The root-mean-square of the residuals is 0.111 for the counterrotating branch and 0.0897 for the corotating branch. We next examine another universal relation for the mode frequency σ_i observed in the inertial frame for sequences of constant central energy density, given by Eq. (4) in <cit.> σ_i/σ_0 = 1+a_1 (Ω/σ_0)+a_2(Ω/σ_0)^2 , where (a_1,a_2)=(-0.193,-0.0294) for the counterrotating branch and (a_1,a_2)=(0.220,-0.0170) for the corotating branch, and the angular velocity Ω is normalized by the f-mode frequency σ_0 of the corresponding nonrotating star. Hereafter physical quantities with a subscript “0" refer to nonrotating stars. Our extracted mode frequencies also match closely to Eq. 
(<ref>) as shown in Fig. <ref>. The root-mean-square of the residuals is 0.0341 for the counterrotating branch and 0.0794 for the corotating branch. It is noted that the data points for the corotating branch (i.e., the upper branch in Fig. <ref>) have larger deviations from Eq. (<ref>) for high rotation rates close to the Keplerian limit, the region where Eq. (<ref>) does not fit well even for NS data as can be seen from Fig. 1 in <cit.>. The purple horizontal dashed line represents the zero-frequency line, on which the counterrotating mode becomes unstable to the CFS instability. We find that QSs become unstable when the rotation rate Ω≈ 3.4σ_0, which agrees with the finding for NSs <cit.>. Finally, we consider the universal relation for the f-mode frequency σ_c observed in the rotating frame for sequences of constant baryon mass, given by Eq. (5) in <cit.> σ_c/σ_0 = 1+b_1 (Ω/Ω_ max)+b_2(Ω/Ω_ max)^2, where (b_1,b_2)=(0.517,-0.542) for the counterrotating branch and (b_1,b_2)=(-0.235,-0.491) for the corotating branch. In contrast to <cit.>, we normalize the angular velocity Ω by its maximum rotation limit Ω_ max instead of the the Keplerian limit Ω_K, since Ω_ max can be larger than Ω_K by about 2% for QSs as we have discussed. The ambiguity between the two values does not arise in <cit.> as Ω_ max = Ω_K for NSs. Fig. <ref> plots σ_c /σ_0 against Ω / Ω_ max for 109 NS and QS models from various sequences of constant baryon mass. Let us recall that the mode frequencies observed in the rotating and inertial frames are related by σ_c = σ_i + m Ω / 2 π. Contrary to Figs. <ref> and <ref>, the corotating modes are now represented by the lower branch of data in Fig. <ref>. Our SFHo NS data still satisfy Eq. (<ref>) very well, but the QS data deviate a lot from the fitting relations. However, it should be pointed out that the spread of the data around the upper gray line at high rotation rates is similar to that of the original NS data used in <cit.> to produce the fitting curve (see Fig. 2 in <cit.>). For the corotating modes (lower branch), the QS data deviate significantly from Eq. (<ref>). This can be attributed to the fact that, in contrast to rotating QSs, realistic NS models generally cannot rotate fast enough to reach the onset of viscosity-driven instability, marked by the purple horizontal line where σ_c=0 in the figure. Fig. <ref> shows that the QS data cross the purple line shortly before reaching the maximum rotation rate. In retrospect, the deviation between the NS and QS data at high rotation rates may be associated to the fact that there is an upper bound of the spin parameter j ∼ 0.7 for realistic NSs when Ω≈Ω_ max <cit.>, while there is no such bound for QSs (see also Fig. <ref>). As Eq. (<ref>) was originally proposed to fit realistic NSs only <cit.>, the equation would not be able to cover QS models with j ≳ 0.7. We shall show below that a better universal relation for the corotating modes satisfied by NSs and QSs can be obtained by invoking the spin parameter directly. §.§.§ Critical values of the spin parameter, energy ratio and eccentricity We now investigate further the onset of the viscosity-driven instability for rotating QSs. As it is expected to be difficult for realistic NSs to rotate fast enough to achieve this instability before reaching the Keplerian limit, the onset of this instability is a special (if not unique) phenomenon for rapidly rotating QSs among stellar objects. 
To determine the onset of the instability, which occurs close to the maximal rotation rate where physical quantities become sensitive to the angular frequency, we propose a fitting relation that relates the corotating mode frequency σ_c in the rotating frame to the spin parameter j with relatively small variance. We first define (in code units) a scaled frequency for the corotating mode Σ̃_c = 2πMσ_c / (-0.0047 + 0.133 η + 0.575 η^2) , where η = √(M^3/I) is the effective compactness originally introduced in <cit.> for nonrotating stars, but here generalized to rotating stars. The denominator on the right-hand side of Eq. (<ref>) is motivated by the universal relation between η and the scaled f-mode angular frequency 2πMσ_0 for nonrotating NSs and QSs <cit.>. Note that we have corrected a typographical error in the coefficient of η^2 in Eq. (6) of <cit.>. In Fig. <ref>, we plot Σ̃_c against the spin parameter j for both constant central energy density and constant baryon mass sequences. Compared to the corotating modes plotted in Fig. <ref>, the NS and QS data are now "unified" and can be fitted by Σ̃_c = -0.477 j^2 - 0.714 j + 1 . The root-mean-square of the residuals is 0.0251. The rapidly rotating QS data with j ≳ 0.7 behave as if they were merely an extension of the NS data to higher spin parameters. Our fitting curve crosses the zero-frequency point at j ≈ 0.881, which represents the onset of the viscosity-driven instability for both sequences of constant central energy density and constant baryon mass. Traditionally, the onset of the instability is characterized by the critical value of the ratio between the rotational kinetic energy and the gravitational potential energy, T/|W|, and by the eccentricity ζ = [1-(r_p/r_eq)^2]^{1/2}, where r_p and r_eq are the polar and equatorial coordinate radii, respectively. As discussed in Sec. <ref>, the Newtonian limit (T/|W|)_crit,Newt = 0.1375 was obtained for Maclaurin sequences <cit.>, while general relativity weakens the instability by increasing the critical energy ratio <cit.>. An approximate relation for the critical energy ratio was obtained in <cit.> for constant baryon mass sequences of homogeneous incompressible bodies in general relativity, (T/|W|)_crit = (T/|W|)_crit,Newt + 0.126 χ_0(1+χ_0), where χ_0 = M_0/R_0, M_0, and R_0 are the compactness, gravitational mass, and radius of the corresponding nonrotating model, respectively. The difference between the relativistic and Newtonian critical values of T/|W| is about 20% for compactness χ_0 ≈ 0.2. QSs described by the MIT bag model can be approximated very well by homogeneous incompressible bodies <cit.>. To check whether our QS data can also be described by Eq. (<ref>), we plot the scaled corotating mode frequency σ_c/σ_0 in the rotating frame against the normalized energy ratio λ ≡ (T/|W|)/(T/|W|)_crit for constant baryon mass sequences in Fig. <ref>. The trend of the numerical data can be fitted by σ_c/σ_0 = 1 + 0.130 (e^{-27.3 λ} - 1) - 1.10 λ + 0.256 λ^2. The root-mean-square of the residuals is 0.0233. In addition to the quadratic terms, an exponential function is included to capture the fast initial decrease. The fitting curve crosses zero at λ ≈ 1.04, meaning that the critical value for our QS models is only 4% higher than the approximate value for homogeneous incompressible bodies predicted by Eq. (<ref>).
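The quoted onset values follow directly from the fitted curves; for example, their zero crossings can be recovered numerically from the coefficients given above, as in the short check below (a simple verification using only the published fit coefficients).

```python
import numpy as np
from scipy.optimize import brentq

# Zero crossing of the spin-parameter fit: -0.477 j^2 - 0.714 j + 1 = 0
j_crit = max(np.roots([-0.477, -0.714, 1.0]))
print(j_crit)                       # ~0.881

# Zero crossing of the energy-ratio fit:
# 1 + 0.130 (exp(-27.3 lam) - 1) - 1.10 lam + 0.256 lam^2 = 0
f = lambda lam: 1.0 + 0.130 * (np.exp(-27.3 * lam) - 1.0) - 1.10 * lam + 0.256 * lam**2
print(brentq(f, 0.5, 1.5))          # ~1.04
```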
On the other hand, the critical value of eccentricity depends weakly on the compactness, it should thus be close to the Newtonian critical value ζ_ crit,Newt=0.8127 <cit.>. In Fig. <ref>, we plot the scaled mode frequency σ_c/σ_0 against the eccentricity ζ. Its fitting curve is σ_c/σ_0 = 1 - 0.622 ζ - 0.799ζ^3, with the root-mean-square of the residuals being 0.0207. The fit predicts the onset of the instability at ζ≈ 0.842, which is about 3.6% higher than the Newtonian value. §.§ Fitting relations of p-modes We end this section by also providing fitting relations for the first p-modes of rotating QSs which were strongly excited in our simulations. As it is well known that p-modes are more EOS-sensitive <cit.> and dependent strongly on the density and pressure profiles, universal relations are not expected to exist for them. Since our generalized MIT bag model EOS contains only two parameters, namely the bag constant B and the square of the speed of sound c_ss, it is possible to find fitting relations for the p-modes by invoking these parameters. Furthermore, it is found that dimensionless frequencies like Mp (with p being the p-mode frequency) are independent of the bag constant in the MIT bag model <cit.> as it is just a scaling factor, and hence only c_ss will be relevant to our fitting relations. This can be illustrated by considering the p-mode frequency p_0 of nonrotating QSs. We found that the scaled frequency M_0 p_0 of our nonrotating QS models can be fitted well by M_0 p_0 = ( a_1 c_ss + a_2) χ_0^2 + (a_3 c_ss^2 + a_4 c_ss + a_5 + a_6 /c_ss)χ_0 + a_7 c_ss^2 , where M_0 and χ_0 = M_0 / R_0 are the gravitational mass and compactness, respectively. The seven fitting parameters are a_1=-2.700, a_2=-0.5845, a_3=0.2183, a_4=1.202, a_5=0.2664, a_6=-0.006893 and a_7=-0.09141. This relation is obtained by fitting to nonrotating QS data with M_0 ≥ 1.4 M_⊙ and different values of c_ss ranging from 1/10 to 1 as shown in Fig. <ref>. The root-mean-square of the residuals is 0.000450. For the l=2 p-modes of rotating QSs, we use the following ansatz for the m= 2 (m=-2) p-mode frequencies p^+_i (p^-_i) observed in the inertial frame for constant baryon mass sequences p̂_i^± = 1∓ F(χ_0,c_ss,Ω̅) Ω̅+ G(χ_0,c_ss) Ω̅^2 , where p̂_i^±=p_i^±/p_0, Ω̅=Ω/(2π p_0), and p_0 is the p-mode frequency of the corresponding nonrotating star with gravitational mass M_0 and compactness χ_0. The two functions F(χ_0,c_ss,Ω̅) and G(χ_0,c_ss) are given by F(χ_0,c_ss,Ω̅) = b_1 √(χ_0/c_ss) + b_2 χ_0/c_ssΩ̅ , and G(χ_0,c_ss) = (b_3 c_ss^2 + b_4 )χ_0 + (b_5 c_ss^2 + b_6) , where the fitting parameters are b_1=2.22, b_2=-2.24, b_3=82.1, b_4=44.6, b_5=-28.8 and b_6=-11.4. To illustrate the fitting relation, we plot p̂_i^± - G Ω̅^2 against √(χ_0/c_ss)Ω̅ in Fig. <ref>. The numerical data can be fitted well by Eq. (<ref>) with the root-mean-square of residues being 0.00741 for the upper branch and 0.00701 for the lower branch. It should be noted that the above fitting parameters are obtained by excluding those rapidly rotating degenerate models close to the maximum rotation limit illustrated in Fig. <ref>. In reality, it is not expected to be able to detect the p-modes of compact stars from their emitted GW signals anytime soon, even with the next generation of detectors. However, it might still be interesting to consider how one could (in principle) make use of these fitting relations. 
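For reference, the two fits above are straightforward to evaluate numerically and, in the nonrotating case, to invert for c_ss; the sketch below does both, with the input numbers chosen purely for illustration (a synthetic target generated from the fit itself), not taken from any observation.

```python
import numpy as np
from scipy.optimize import brentq

A = (-2.700, -0.5845, 0.2183, 1.202, 0.2664, -0.006893, -0.09141)

def scaled_p0(chi0, c_ss):
    """Nonrotating fit: M_0 p_0 as a function of compactness chi0 and c_ss."""
    a1, a2, a3, a4, a5, a6, a7 = A
    return ((a1 * c_ss + a2) * chi0**2
            + (a3 * c_ss**2 + a4 * c_ss + a5 + a6 / c_ss) * chi0
            + a7 * c_ss**2)

def p_hat(chi0, c_ss, Omega_bar, sign=+1):
    """Rotating fit: p_i^+/p_0 for sign=+1 and p_i^-/p_0 for sign=-1."""
    b1, b2, b3, b4, b5, b6 = 2.22, -2.24, 82.1, 44.6, -28.8, -11.4
    F = b1 * np.sqrt(chi0 / c_ss) + b2 * (chi0 / c_ss) * Omega_bar
    G = (b3 * c_ss**2 + b4) * chi0 + (b5 * c_ss**2 + b6)
    return 1.0 - sign * F * Omega_bar + G * Omega_bar**2

# Invert the nonrotating fit for c_ss, given an illustrative chi0 and a
# synthetic M_0 p_0 generated from the fit with c_ss = 1/3.
chi0_obs = 0.2
M0p0_obs = scaled_p0(chi0_obs, 1.0 / 3.0)
print(brentq(lambda c: scaled_p0(chi0_obs, c) - M0p0_obs, 0.1, 1.0))  # recovers ~1/3
```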
As an illustration, let us first ignore rotational effects and assume that the f-mode frequency (and its damping time) and the p-mode frequency of a nonrotating compact star are observed. Applying the universal relations for the f-mode of nonrotating stars in <cit.>, which are valid for both NSs and QSs, the mass M_0 and radius R_0, and hence the compactness χ_0, of the star can then be inferred approximately. Eq. (<ref>) can then be solved for the single variable c_ss, and one can check whether the observational data are consistent with our generalized MIT bag model. For instance, an inferred value of c_ss = 1/3 would mean that the star is consistent with a QS described by the canonical MIT bag model. On the other hand, an inferred value of c_ss far outside the fitting range of Eq. (<ref>) would serve as strong evidence against our QS models. Similarly, if the angular velocity Ω and the two frequencies p^±_i are observed for a rotating compact star, Eq. (<ref>) can be used to relate the three parameters p_0, c_ss, and χ_0. If the star is slowly rotating, so that its compactness can be approximated by that of the nonrotating counterpart, χ ≈ χ_0, one can solve for p_0 and c_ss. We can then compare the inferred value of p_0 to the observed frequency of the m=0 axisymmetric p-mode (if available), which is well approximated by p_0 for a slowly rotating star, and determine whether the observed compact star is consistent with our QS models. § CONCLUSION The sharp high-density surface of a bare QS presents a great challenge for grid-based hydrodynamic modeling of the star. In this paper, building on top of the numerical relativity code, we have implemented a numerical method based on a positivity-preserving Riemann solver and a dust-like EOS for the atmosphere to perform stable evolutions of rapidly rotating QSs in general relativity. Our work represents a new addition to the short list of fully general relativistic simulations of QSs available to date <cit.>. The fidelity of our method has been tested and confirmed by comparing the oscillation mode frequencies of nonrotating QSs extracted from simulations with the results obtained from perturbative calculations. The f-modes of rapidly rotating QSs are investigated in detail. In particular, we find that two of the universal relations for the l=|m|=2 nonaxisymmetric modes proposed originally for rotating NSs <cit.> remain valid for QSs (see Figs. <ref> and <ref>). However, the QS data deviate significantly from another universal relation for the corotating modes observed in the rotating frame (see Fig. <ref>). In addition to the f-modes, we have also studied the first p-modes of rotating QSs. For QSs described by our generalized MIT bag model, we report fitting relations for the p-mode frequencies of both nonrotating and rotating stars. We also find that, for sequences of constant central energy density, the onset of the CFS instability for QSs occurs when the angular velocity Ω ≈ 3.4 σ_0, which agrees with the finding for NSs <cit.>. In addition to the CFS instability, we have also studied the viscosity-driven instability of QSs. We find that the onset of this instability for rotating QSs occurs when the spin parameter j ≈ 0.881 for both sequences of constant central energy density and constant baryon mass.
For QS sequences of constant baryon mass, we also find that the critical value of the ratio between the rotational kinetic energy and the gravitational potential energy, T/|W|, for the onset of the instability agrees with the value predicted for homogeneous incompressible bodies in general relativity to within 4%, and the critical value of the eccentricity ζ is only 3.6% larger than the Newtonian value <cit.>. Realistic NSs are generally not expected to rotate fast enough to trigger this instability before reaching the Keplerian limit. This can be seen in Fig. <ref>, where the NS data for the frequencies of the corotating modes σ_c observed in the rotating frame do not cross zero before the Keplerian limit. The universal relation between the spin parameter and Σ̃_c, which is just a rescaled σ_c, proposed by us in Eq. (<ref>), unifies the NS and QS data and predicts the onset of the instability at j ≈ 0.881, as shown in Fig. <ref>. The fact that realistic NSs cannot trigger the instability can be associated with the existence of an upper bound j ∼ 0.7 for rotating NSs <cit.>.

We thank Hoi-Ka Hui for useful discussions and Shu-Yan Lau for sharing his oscillation code, which we used to compute the mode frequencies of nonrotating stars for benchmarking. This work is partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region (Project No. 14304322). We also acknowledge the support of the CUHK Central High Performance Computing Cluster, on which our simulations were carried out.

N. Itoh, Prog. Theor. Phys. 44, 291 (1970).
A. R. Bodmer, Phys. Rev. D 4, 1601 (1971).
E. Witten, Phys. Rev. D 30, 272 (1984).
M. Alford, M. Braby, M. Paris, and S. Reddy, Astrophys. J. 629, 969 (2005).
M. G. Alford, S. Han, and M. Prakash, Phys. Rev. D 88, 083013 (2013).
B. Holdom, J. Ren, and C. Zhang, Phys. Rev. Lett. 120, 222001 (2018).
F. Weber, Prog. Part. Nucl. Phys. 54, 193 (2005).
V. Doroshenko, V. Suleimanov, and A. Santangelo, Nature Astron. 6, 1444 (2022).
F. D. Clemente, A. Drago, and G. Pagliara, arXiv:2211.07485 (2023).
J. E. Horvath, L. S. Rocha, L. M. de Sá, P. H. R. S. Moraes, L. G. Barão, M. G. B. de Avellar, A. Bernardo, and R. R. A. Bachega, Astron. Astrophys. 672, L11 (2023).
Y. Suwa, T. Yoshida, M. Shibata, H. Umeda, and K. Takahashi, Mon. Not. R. Astron. Soc. 481, 3305 (2018).
R. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Astrophys. J. Lett. 896, L44 (2020).
I. Bombaci, A. Drago, D. Logoteta, G. Pagliara, and I. Vidaña, Phys. Rev. Lett. 126, 162702 (2021).
B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 119, 161101 (2017).
B. P. Abbott et al., Astrophys. J. Lett. 892, L3 (2020).
Z. Miao, J.-L. Jiang, A. Li, and L.-W. Chen, Astrophys. J. Lett. 917, L22 (2021).
A. Drago, A. Lavagno, G. Pagliara, and D. Pigato, Eur. Phys. J. A 52, 40 (2016).
R. De Pietri, A. Drago, A. Feo, G. Pagliara, M. Pasquali, S. Traversi, and G. Wiktorowicz, Astrophys. J. 881, 122 (2019).
L. Baiotti and L. Rezzolla, Rep. Prog. Phys. 80, 096901 (2017).
M. D. Duez and Y. Zlochower, Rep. Prog. Phys. 82, 016902 (2019).
K. Kyutoku, M. Shibata, and K. Taniguchi, Living Rev. Relativ. 24, 5 (2021).
A. Bauswein, H.-T. Janka, R. Oechslin, G. Pagliara, I. Sagert, J. Schaffner-Bielich, M. M. Hohle, and R. Neuhäuser, Phys. Rev. Lett. 103, 011101 (2009).
Z. Zhu and L. Rezzolla, Phys. Rev. D 104, 083004 (2021).
E. Zhou, K. Kiuchi, M. Shibata, A. Tsokaros, and K. Uryū, Phys. Rev. D 103, 123011 (2021).
E. Zhou, K. Kiuchi, M. Shibata, A. Tsokaros, and K. Uryū, Phys. Rev. D 106, 103030 (2022).
F. Löffler, J. Faber, E. Bentivegna, T. Bode, P. Diener, R. Haas, I. Hinder, B. C. Mundim, C. D. Ott, E. Schnetter, G. Allen, M. Campanelli, and P. Laguna, Class. Quantum Grav. 29, 115001 (2012).
S. R. Brandt et al., Einstein Toolkit (the "Turing" release, ET_2020_05), doi:10.5281/zenodo.3866075 (2020).
Einstein Toolkit website, http://einsteintoolkit.org/.
S. K. Godunov and I. Bohachevsky, Mat. Sb. 47(89), 271 (1959).
A. Harten, P. D. Lax, and B. van Leer, SIAM Rev. 25, 35 (1983).
D. Radice, L. Rezzolla, and F. Galeazzi, Class. Quantum Grav. 31, 075012 (2014).
B. Perthame and C.-W. Shu, Numer. Math. 73, 119 (1996).
X. Zhang and C.-W. Shu, J. Comput. Phys. 229, 8918 (2010).
X. Y. Hu, N. A. Adams, and C.-W. Shu, J. Comput. Phys. 242, 169 (2013).
B. Einfeldt, C. Munz, P. Roe, and B. Sjögreen, J. Comput. Phys. 92, 273 (1991).
F. Acernese et al., Class. Quantum Grav. 32, 024001 (2014).
J. Aasi et al. (LIGO Scientific Collaboration), Class. Quantum Grav. 32, 074001 (2015).
B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116, 061102 (2016).
M. Punturo et al., Class. Quantum Grav. 27, 194002 (2010).
K. D. Kokkotas and B. G. Schmidt, Living Rev. Relativ. 2, 2 (1999).
N. Andersson, V. Ferrari, D. I. Jones, K. D. Kokkotas, B. Krishnan, J. S. Read, L. Rezzolla, and B. Zink, Gen. Relativ. Gravit. 43, 409 (2011).
D. Radice, V. Morozova, A. Burrows, D. Vartanyan, and H. Nagakura, Astrophys. J. Lett. 876, L9 (2019).
V. Morozova, D. Radice, A. Burrows, and D. Vartanyan, Astrophys. J. 861, 10 (2018).
T. Hinderer, A. Taracchini, F. Foucart, A. Buonanno, J. Steinhoff, M. Duez, L. E. Kidder, H. P. Pfeiffer, M. A. Scheel, B. Szilagyi, K. Hotokezaka, K. Kyutoku, M. Shibata, and C. W. Carpenter, Phys. Rev. Lett. 116, 181101 (2016).
J. Steinhoff, T. Hinderer, A. Buonanno, and A. Taracchini, Phys. Rev. D 94, 104028 (2016).
J. A. Font, H. Dimmelmeier, A. Gupta, and N. Stergioulas, Mon. Not. R. Astron. Soc. 325, 1463 (2001).
N. Stergioulas, T. A. Apostolatos, and J. A. Font, Mon. Not. R. Astron. Soc. 352, 1089 (2004).
E. Gaertig and K. D. Kokkotas (2008).
Kokkotas, https://doi.org/10.1103/PhysRevD.78.064063 journal journal Phys. Rev. D volume 78, pages 064063 (year 2008)NoStop [Doneva et al.(2013)Doneva, Gaertig, Kokkotas, and Krüger]DonevaGaertig_2013 author author D. D. Doneva, author E. Gaertig, author K. D. Kokkotas, and author C. Krüger, https://doi.org/10.1103/PhysRevD.88.044052 journal journal Phys. Rev. D volume 88, pages 044052 (year 2013)NoStop [Dimmelmeier et al.(2006)Dimmelmeier, Stergioulas, and Font]Dimmelmeier2006 author author H. Dimmelmeier, author N. Stergioulas, and author J. A. Font, https://doi.org/10.1111/j.1365-2966.2006.10274.x journal journal Mon. Not. R. Astron. Soc. volume 368, pages 1609 (year 2006)NoStop [Ng et al.(2021)Ng, Cheong, Lin, and Li]Ng_2021 author author H. H.-Y. Ng, author P. C.-K. Cheong, author L.-M. Lin, and author T. G. F. Li, https://doi.org/10.3847/1538-4357/ac0141 journal journal Astrophys. J. volume 915, pages 108 (year 2021)NoStop [Zink et al.(2010)Zink, Korobkin, Schnetter, and Stergioulas]PhysRevD.81.084055 author author B. Zink, author O. Korobkin, author E. Schnetter, and author N. Stergioulas, https://doi.org/10.1103/PhysRevD.81.084055 journal journal Phys. Rev. D volume 81, pages 084055 (year 2010)NoStop [Krüger and Kokkotas(2020a)]Kokkotas2020 author author C. J. Krüger and author K. D. Kokkotas, https://doi.org/10.1103/PhysRevLett.125.111106 journal journal Phys. Rev. Lett. volume 125, pages 111106 (year 2020a)NoStop [Krüger and Kokkotas(2020b)]PhysRevD.102.064026 author author C. J. Krüger and author K. D. Kokkotas, https://doi.org/10.1103/PhysRevD.102.064026 journal journal Phys. Rev. D volume 102, pages 064026 (year 2020b)NoStop [Andersson(2003)]Andersson_2003 author author N. Andersson, https://doi.org/10.1088/0264-9381/20/7/201 journal journal Class. Quantum Grav. volume 20, pages R105 (year 2003)NoStop [Friedman and Stergioulas(2013)]Friedman:2013 author author J. L. Friedman and author N. Stergioulas, @noop title Rotating Relativistic Stars (publisher Cambridge University Press, year 2013)NoStop [Chandrasekhar(1970)]Chandrasekhar1970 author author S. Chandrasekhar, https://doi.org/10.1103/PhysRevLett.24.611 journal journal Phys. Rev. Lett. volume 24, pages 611 (year 1970)NoStop [Friedman and Schutz(1978)]FriedmanSchutz1978 author author J. L. Friedman and author B. F. Schutz, https://doi.org/10.1086/156098 journal journal Astrophys. J. volume 221, pages 937 (year 1978)NoStop [Chandrasekhar(1969)]CHANDRASEKHAR_1969 author author S. Chandrasekhar, @noop title Ellipsoidal Figures of Equilibrium (publisher Yale University Press, year 1969)NoStop [Gondek-Rosi ńńska and Gourgoulhon(2002)]PhysRevD.66.044021 author author D. Gondek-Rosi ńńska and author E. Gourgoulhon, https://doi.org/10.1103/PhysRevD.66.044021 journal journal Phys. Rev. D volume 66, pages 044021 (year 2002)NoStop [Bonazzola et al.(1998)Bonazzola, Frieben, and Gourgoulhon]Bonazzola1998 author author S. Bonazzola, author J. Frieben, and author E. Gourgoulhon, https://ui.adsabs.harvard.edu/abs/1998A A...331..280B journal journal Astron. Astrophys. volume 331, pages 280 (year 1998)NoStop [Gourgoulhon et al.(1999)Gourgoulhon, Haensel, Livine, Paluch, Bonazzola, and Marck]Gourgoulhon_1999 author author E. Gourgoulhon, author P. Haensel, author R. Livine, author E. Paluch, author S. Bonazzola, and author J.-A. Marck, https://articles.adsabs.harvard.edu/pdf/1999A journal journal Astron. Astrophys. volume 349, pages 851 (year 1999)NoStop [Lo and Lin(2011)]Lo_2011 author author K.-W. Lo and author L.-M. 
Lin, https://doi.org/10.1088/0004-637x/728/1/12 journal journal Astrophys. J. volume 728, pages 12 (year 2011)NoStop [Sham et al.(2015)Sham, Chan, Lin, and Leung]Sham_2015 author author Y. H. Sham, author T. K. Chan, author L. M. Lin, and author P. T. Leung, https://doi.org/10.1088/0004-637X/798/2/121 journal journal Astrophys. J. volume 798, eid 121 (year 2015)NoStop [Gondek-Rosińska et al.(2000)Gondek-Rosińska, Bulik, Zdunik, Gourgoulhon, Ray, Dey, and Dey]Gondek2000 author author D. Gondek-Rosińska, author T. Bulik, author L. Zdunik, author E. Gourgoulhon, author S. Ray, author J. Dey, and author M. Dey, https://doi.org/10.48550/arXiv.astro-ph/0007004 journal journal Astron. Astrophys. volume 363, pages 1005 (year 2000)NoStop [Gondek-Rosińska et al.(2003)Gondek-Rosińska, Gourgoulhon, and Haensel]Gondek2003 author author D. Gondek-Rosińska, author E. Gourgoulhon, and author P. Haensel, https://doi.org/10.1051/0004-6361:20031431 journal journal Astron. Astrophys. volume 412, pages 777 (year 2003)NoStop [Yagi and Yunes(2017)]Yagi2017 author author K. Yagi and author N. Yunes, https://doi.org/https://doi.org/10.1016/j.physrep.2017.03.002 journal journal Phys. Rep. volume 681, pages 1 (year 2017)NoStop [Doneva and Pappas(2018)]Doneva:2018 author author D. D. Doneva and author G. Pappas, in https://doi.org/10.1007/978-3-319-97616-7_13 booktitle The Physics and Astrophysics of Neutron Stars, editor edited by editor L. Rezzolla, editor P. Pizzochero, editor D. I. Jones, editor N. Rea, and editor I. Vidaña (publisher Springer, Cham, year 2018)NoStop [Psaltis et al.(2014)Psaltis, Özel, and Chakrabarty]Psaltis_2014 author author D. Psaltis, author F. Özel, and author D. Chakrabarty, https://doi.org/10.1088/0004-637x/787/2/136 journal journal Astrophys. J. volume 787, pages 136 (year 2014)NoStop [Rezzolla et al.(2018)Rezzolla, Most, and Weih]Rezzolla_2018 author author L. Rezzolla, author E. R. Most, and author L. R. Weih, https://doi.org/10.3847/2041-8213/aaa401 journal journal Astrophys. J. volume 852, pages L25 (year 2018)NoStop [Lackey et al.(2019)Lackey, Pürrer, Taracchini, and Marsat]Lackey:2019 author author B. D. Lackey, author M. Pürrer, author A. Taracchini, and author S. Marsat, https://doi.org/10.1103/PhysRevD.100.024002 journal journal Phys. Rev. D volume 100, pages 024002 (year 2019)NoStop [Schmidt and Hinderer(2019)]Schmidt:2019 author author P. Schmidt and author T. Hinderer, https://doi.org/10.1103/PhysRevD.100.021501 journal journal Phys. Rev. D volume 100, pages 021501(R) (year 2019)NoStop [Barkett et al.(2020)Barkett, Chen, Scheel, and Varma]Barkett:2020 author author K. Barkett, author Y. Chen, author M. A. Scheel, and author V. Varma, https://doi.org/10.1103/PhysRevD.102.024031 journal journal Phys. Rev. D volume 102, pages 024031 (year 2020)NoStop [Andersson and Pnigouras(2021)]Pnigouras:2021 author author N. Andersson and author P. Pnigouras, https://doi.org/10.1093/mnras/stab371 journal journal Mon. Not. R. Astron. Soc. volume 503, pages 533 (year 2021)NoStop [Lau et al.(2010)Lau, Leung, and Lin]Lau2010 author author H. K. Lau, author P. T. Leung, and author L. M. Lin, https://doi.org/10.1088/0004-637x/714/2/1234 journal journal Astrophys J. volume 714, pages 1234 (year 2010)NoStop [Yagi and Yunes(2013a)]Yagi2013 author author K. Yagi and author N. Yunes, https://doi.org/10.1126/science.1236462 journal journal Science volume 341, pages 365 (year 2013a)NoStop [Yagi and Yunes(2013b)]Yagi:2013b author author K. Yagi and author N. 
Yunes, https://doi.org/10.1103/PhysRevD.88.023009 journal journal Phys. Rev. D volume 88, pages 023009 (year 2013b)NoStop [Chan et al.(2014)Chan, Sham, Leung, and Lin]Chan2014 author author T. K. Chan, author Y.-H. Sham, author P. T. Leung, and author L.-M. Lin, https://doi.org/10.1103/PhysRevD.90.124023 journal journal Phys. Rev. D volume 90, pages 124023 (year 2014)NoStop [Martinon et al.(2014)Martinon, Maselli, Gualtieri, and Ferrari]Martinon:2014 author author G. Martinon, author A. Maselli, author L. Gualtieri, and author V. Ferrari, https://doi.org/10.1103/PhysRevD.90.064026 journal journal Phys. Rev. D volume 90, pages 064026 (year 2014)NoStop [Marques et al.(2017)Marques, Oertel, Hempel, and Novak]Marques:2017 author author M. Marques, author M. Oertel, author M. Hempel, and author J. Novak, https://doi.org/10.1103/PhysRevC.96.045806 journal journal Phys. Rev. C volume 96, pages 045806 (year 2017)NoStop [Yeung et al.(2021)Yeung, Lin, Andersson, and Comer]Yeung:2021 author author C.-H. Yeung, author L.-M. Lin, author N. Andersson, and author G. Comer, https://doi.org/10.3390/universe7040111 journal journal Universe volume 7, pages 111 (year 2021)NoStop [Andersson and Kokkotas(1998)]AnderssonKokkotas1998 author author N. Andersson and author K. D. Kokkotas, https://doi.org/10.1046/j.1365-8711.1998.01840.x journal journal Mon. Not. R. Astron. Soc. volume 299, pages 1059 (year 1998)NoStop [Benhar et al.(2004)Benhar, Ferrari, and Gualtieri]Benhar:2004 author author O. Benhar, author V. Ferrari, and author L. Gualtieri, https://doi.org/10.1103/PhysRevD.70.124015 journal journal Phys. Rev. D volume 70, pages 124015 (year 2004)NoStop [Tsui and Leung(2005a)]Tsui:2005prl author author L. K. Tsui and author P. T. Leung, https://doi.org/10.1103/PhysRevLett.95.151101 journal journal Phys. Rev. Lett. volume 95, pages 151101 (year 2005a)NoStop [Tsui and Leung(2005b)]Tsui:2005 author author L. K. Tsui and author P. T. Leung, https://doi.org/10.1111/j.1365-2966.2005.08710.x journal journal Mon. Not. R. Astron. Soc. volume 357, pages 1029 (year 2005b)NoStop [Sotani and Kumar(2021)]Sotani:2021 author author H. Sotani and author B. Kumar, https://doi.org/10.1103/PhysRevD.104.123002 journal journal volume 104, eid 123002 (year 2021)NoStop [Lioutas et al.(2021)Lioutas, Bauswein, and Stergioulas]Lioutas:2021 author author G. Lioutas, author A. Bauswein, and author N. Stergioulas, https://doi.org/10.1103/PhysRevD.104.043011 journal journal volume 104, pages 043011 (year 2021)NoStop [Zhao and Lattimer(2022)]Zhao:2022 author author T. Zhao and author J. M. Lattimer, https://doi.org/10.1103/PhysRevD.106.123002 journal journal volume 106, pages 123002 (year 2022)NoStop [Kuan et al.(2022)Kuan, Krüger, Suvorov, and Kokkotas]Kuan:2022 author author H.-J. Kuan, author C. J. Krüger, author A. G. Suvorov, and author K. D. Kokkotas, https://doi.org/10.1093/mnras/stac1101 journal journal Mon. Not. R. Astron. Soc. volume 513, pages 4045 (year 2022)NoStop [Goodale et al.(2003)Goodale, Allen, Lanfermann, Massó, Radke, Seidel, and Shalf]cactus2003 author author T. Goodale, author G. Allen, author G. Lanfermann, author J. Massó, author T. Radke, author E. Seidel, and author J. Shalf, in https://link.springer.com/chapter/10.1007/3-540-36569-9_13 booktitle High Performance Computing for Computational Science — VECPAR 2002, editor edited by editor J. M. L. M. Palma, editor A. A. Sousa, editor J. Dongarra, and editor V. 
Hernández (publisher Springer Berlin, year 2003)NoStop [Cac()]Cactusweb @noop title website <http://www.cactuscode.org/>NoStop [Alic et al.(2012)Alic, Bona-Casas, Bona, Rezzolla, and Palenzuela]Alic2012 author author D. Alic, author C. Bona-Casas, author C. Bona, author L. Rezzolla, and author C. Palenzuela, https://doi.org/10.1103/PhysRevD.85.064040 journal journal Phys. Rev. D volume 85, pages 064040 (year 2012)NoStop [Alic et al.(2013)Alic, Kastaun, and Rezzolla]Alic2013 author author D. Alic, author W. Kastaun, and author L. Rezzolla, https://doi.org/10.1103/PhysRevD.88.064049 journal journal Phys. Rev. D volume 88, pages 064049 (year 2013)NoStop [Brown et al.(2009)Brown, Diener, Sarbach, Schnetter, and Tiglio]Brown:2008sb author author D. Brown, author P. Diener, author O. Sarbach, author E. Schnetter, and author M. Tiglio, https://doi.org/10.1103/PhysRevD.79.044023 journal journal Phys. Rev. D volume 79, pages 044023 (year 2009)NoStop [mlw()]mlweb @noop title website <http://www.cct.lsu.edu/ eschnett/McLachlan/>NoStop [Baiotti et al.(2005)Baiotti, Hawke, Montero, Löffler, Rezzolla, Stergioulas, Font, and Seidel]Baiotti:2004wn author author L. Baiotti, author I. Hawke, author P. J. Montero, author F. Löffler, author L. Rezzolla, author N. Stergioulas, author J. A. Font, and author E. Seidel, https://doi.org/10.1103/PhysRevD.71.024035 journal journal Phys. Rev. D volume 71, pages 024035 (year 2005)NoStop [Mösta et al.(2013)Mösta, Mundim, Faber, Haas, Noble, Bode, Löffler, Ott, Reisswig, and Schnetter]M_sta_2013 author author P. Mösta, author B. C. Mundim, author J. A. Faber, author R. Haas, author S. C. Noble, author T. Bode, author F. Löffler, author C. D. Ott, author C. Reisswig, and author E. Schnetter, https://doi.org/10.1088/0264-9381/31/1/015005 journal journal Class. Quantum Grav. volume 31, pages 015005 (year 2013)NoStop [Schnetter et al.(2004)Schnetter, Hawley, and Hawke]Schnetter_2004 author author E. Schnetter, author S. H. Hawley, and author I. Hawke, https://doi.org/10.1088/0264-9381/21/6/014 journal journal Class. Quantum Grav. volume 21, pages 1465 (year 2004)NoStop [Car()]CarpetCode:web @noop title website <https://bitbucket.org/eschnett/carpet.git>NoStop [Bona et al.(1995)Bona, Massó, Seidel, and Stela]PhysRevLett.75.600 author author C. Bona, author J. Massó, author E. Seidel, and author J. Stela, https://doi.org/10.1103/PhysRevLett.75.600 journal journal Phys. Rev. Lett. volume 75, pages 600 (year 1995)NoStop [van Meter et al.(2006)van Meter, Baker, Koppitz, and Choi]PhysRevD.73.124011 author author J. R. van Meter, author J. G. Baker, author M. Koppitz, and author D.-I. Choi, https://doi.org/10.1103/PhysRevD.73.124011 journal journal Phys. Rev. D volume 73, pages 124011 (year 2006)NoStop [Kreiss and Oliger(1973)]kreiss1973methods author author H.-O. Kreiss and author J. Oliger, @noop title Methods for the approximate solution of time dependent problems (publisher International Council of Scientific Unions, World Meteorological Organization, Paris, year 1973)NoStop [Baiotti and Rezzolla(2006)]BaiottiRezzolla2006 author author L. Baiotti and author L. Rezzolla, https://doi.org/10.1103/PhysRevLett.97.141101 journal journal Phys. Rev. Lett. volume 97, pages 141101 (year 2006)NoStop [Arnowitt et al.(2008)Arnowitt, Deser, and Misner]Arnowitt1962 author author R. Arnowitt, author S. Deser, and author C. W. Misner, https://doi.org/10.1007/s10714-008-0661-1 journal journal Gen. Relativ. Gravit. 
volume 40, pages 1997 (year 2008)NoStop [Martí et al.(1991)Martí, Ibáñez, and Miralles]PhysRevD.43.3794 author author J. M. Martí, author J. M. Ibáñez, and author J. A. Miralles, https://doi.org/10.1103/PhysRevD.43.3794 journal journal Phys. Rev. D volume 43, pages 3794 (year 1991)NoStop [Banyuls et al.(1997)Banyuls, Font, Ibanez, Marti, and Miralles]Banyuls_1997 author author F. Banyuls, author J. A. Font, author J. M. Ibanez, author J. M. Marti, and author J. A. Miralles, https://doi.org/10.1086/303604 journal journal Astrophys. J. volume 476, pages 221 (year 1997)NoStop [Gottlieb et al.(2001)Gottlieb, Shu, and Tadmor]sigal2001 author author S. Gottlieb, author C.-W. Shu, and author E. Tadmor, https://doi.org/10.1137/S003614450036757X journal journal SIAM Rev. volume 43, pages 89 (year 2001)NoStop [Hesthaven and Warburton(2008)]Hesthaven2008 author author J. S. Hesthaven and author T. Warburton, @noop title Nodal Discontinuous Galerkin Methods : Algorithms, Analysis, and Applications (publisher Springer, year 2008)NoStop [Donat and Marquina(1996)]DONAT199642 author author R. Donat and author A. Marquina, https://doi.org/https://doi.org/10.1006/jcph.1996.0078 journal journal J. Comput. Phys. volume 125, pages 42 (year 1996)NoStop [Aloy et al.(1999)Aloy, Ibanez, Marti, and Muller]Aloy_1999 author author M. A. Aloy, author J. M. Ibanez, author J. M. Marti, and author E. Muller, https://doi.org/10.1086/313214 journal journal Astrophys. J. Suppl. Ser. volume 122, pages 151 (year 1999)NoStop [Zdunik(2000)]Zdunik2000strange author author J. L. Zdunik, https://doi.org/10.48550/arXiv.astro-ph/0004375 journal journal Astron. Astrophys. volume 359, pages 311 (year 2000)NoStop [Read et al.(2009)Read, Lackey, Owen, and Friedman]Read_2009 author author J. S. Read, author B. D. Lackey, author B. J. Owen, and author J. L. Friedman, https://doi.org/10.1103/PhysRevD.79.124032 journal journal Phys. Rev. D volume 79, pages 124032 (year 2009)NoStop [Hempel and Schaffner-Bielich(2010)]Hempel2010 author author M. Hempel and author J. Schaffner-Bielich, https://doi.org/https://doi.org/10.1016/j.nuclphysa.2010.02.010 journal journal Nucl. Phys. A volume 837, pages 210 (year 2010)NoStop [Colella and Woodward(1984)]COLELLA_1984 author author P. Colella and author P. R. Woodward, https://doi.org/https://doi.org/10.1016/0021-9991(84)90143-8 journal journal J. Comput. Phys. volume 54, pages 174 (year 1984)NoStop [Suresh and Huynh(1997)]SURESH199783 author author A. Suresh and author H. Huynh, https://doi.org/https://doi.org/10.1006/jcph.1997.5745 journal journal J. Comput. Phys. volume 136, pages 83 (year 1997)NoStop [Mignone et al.(2010)Mignone, Tzeferacos, and Bodo]MIGNONE20105896 author author A. Mignone, author P. Tzeferacos, and author G. Bodo, https://doi.org/https://doi.org/10.1016/j.jcp.2010.04.013 journal journal J. Comput. Phys. volume 229, pages 5896 (year 2010)NoStop [Lor()]Loreneweb @noop title website <http://www.lorene.obspm.fr/>NoStop [Bonazzola et al.(1993)Bonazzola, Gourgoulhon, Salgado, and Marck]Bonazzola:1993 author author S. Bonazzola, author E. Gourgoulhon, author M. Salgado, and author J.-A. Marck, https://ui.adsabs.harvard.edu/abs/1993A A...278..421B journal journal Astron. Astrophys. volume 278, pages 421 (year 1993)NoStop [Bonazzola et al.(1998)Bonazzola, Gourgoulhon, and Marck]Bonazzola:1998 author author S. Bonazzola, author E. Gourgoulhon, and author J.-A. Marck, https://doi.org/10.1103/PhysRevD.58.104020 journal journal Phys. Rev. 
D volume 58, pages 104020 (year 1998)NoStop [Cipolletta et al.(2015)Cipolletta, Cherubini, Filippi, Rueda, and Ruffini]Cipolletta:2015 author author F. Cipolletta, author C. Cherubini, author S. Filippi, author J. A. Rueda, and author R. Ruffini, https://doi.org/10.1103/PhysRevD.92.023007 journal journal Phys. Rev. D volume 92, pages 023007 (year 2015)NoStop [Koliogiannis and Moustakidis(2020)]Koliogiannis:2020 author author P. S. Koliogiannis and author C. C. Moustakidis, https://doi.org/10.1103/PhysRevC.101.015805 journal journal Phys. Rev. C volume 101, pages 015805 (year 2020)NoStop [Einfeldt(1988)]Einfeldt1988 author author B. Einfeldt, https://doi.org/10.1137/0725021 journal journal SINUM volume 25, pages 294 (year 1988)NoStop [vis()]visitweb @noop title website <https://visit-dav.github.io/visit-website/>NoStop [Andersson(1998)]Andersson_1998 author author N. Andersson, https://doi.org/10.1086/305919 journal journal Astrophys. J. volume 502, pages 708 (year 1998)NoStop [Chan(2013)]chan2013thesis author author T. K. Chan, https://repository.lib.cuhk.edu.hk/en/item/cuhk-1291489 Master's thesis, school The Chinese University of Hong Kong (year 2013)NoStop
http://arxiv.org/abs/2307.03307v1
20230706214029
Efficient parallel implementation of the multiplicative weight update method for graph-based linear programs
[ "Caleb Ju", "Serif Yesil", "Mengyuan Sun", "Chandra Chekuri", "Edgar Solomonik" ]
cs.DC
[ "cs.DC", "cs.DM", "math.OC" ]
[email protected] Georgia Institute of Technology University of Illinois at Urbana-Champaign Atlanta GA [email protected] University of Illinois at Urbana-Champaign [email protected] University of Illinois at Urbana-Champaign [email protected] University of Illinois at Urbana-Champaign Positive linear programs (LPs) model many graph and operations research problems. One can solve for a (1+ϵ)-approximation for positive LPs, for any selected ϵ, in polylogarithmic depth and near-linear work via variations of the multiplicative weight update (MWU) method. Despite extensive theoretical work on these algorithms through the decades, their empirical performance is not well understood. In this work, we implement and test an efficient parallel algorithm for solving positive LP relaxations, and apply it to graph problems such as densest subgraph, bipartite matching, vertex cover, and dominating set. We accelerate the algorithm via a new step size search heuristic. Our implementation uses sparse linear algebra optimization techniques such as fusion of vector operations and the choice of sparse matrix format. Furthermore, we devise an implicit representation for graph incidence constraints. We demonstrate parallel scalability using OpenMP threading and MPI on the Stampede2 supercomputer. We compare this implementation with exact libraries and specialized libraries for the above problems in order to evaluate MWU's practical standing, in both accuracy and performance, relative to other methods. Our results show that this implementation is faster than general-purpose LP solvers (IBM CPLEX, Gurobi) in all of our experiments, and in some instances, outperforms state-of-the-art specialized parallel graph algorithms. Efficient Parallel Implementation of the Multiplicative Weight Update Method for Graph-based Linear Programs Edgar Solomonik August 1, 2023 ============================================================================================================ § INTRODUCTION Large-scale graph processing is invaluable in many applications in areas like network science, machine learning, and neuroscience <cit.>. Designing general graph libraries for massively parallel machines is challenging, as combinatorial graph algorithms often have limited concurrency and arithmetic intensity <cit.>. Many have parallel depth at least proportional to the graph diameter, though in practice these algorithms contain sufficient concurrency to achieve high efficiency on shared-memory machines <cit.>. Designing efficient distributed-memory parallelizations is more challenging, though it has been achieved with the use of graph partitioning <cit.> and formulations via sparse matrix products <cit.>. However, existing approaches do not generalize to some algorithms and may be inefficient for large-diameter graphs. Due to this nature, combinatorial algorithms may not scale well on large graphs or large-diameter graphs. In addition, some parallelization techniques do not easily generalize between combinatorial algorithms, so they do not naturally allow code reuse. Finally, for certain graph problems, like densest subgraph <cit.>, a combinatorial approach does not lend itself to algorithms that are efficient, simple, and nearly exact. Another method for solving graph problems is to reformulate or relax <cit.> the graph problem into a linear program (LP). Linear programs (LPs) enable exact and approximate solutions of many important graph problems.
Recent theoretical breakthroughs in both sequential and parallel algorithms for several graph problems <cit.> indicate that carefully designed LP solvers can enable algorithms with tunable accuracy that have lower cost and depth than their combinatorial counterparts. Given a matrix A∈𝐑^m × n, vector c∈𝐑^n, and vector b ∈𝐑^m, an LP is the optimization problem, max_ x ∈𝐑^n⟨ c, x ⟩ s.t. Ax≤ b, where ⟨ c, x ⟩ = c_1 x_1 + c_2x_2 + … + c_nx_n, and u ≤ v means u_i ≤ v_i ∀ i. LP relaxations also are a standard and powerful tool in the development of approximation algorithms and heuristics for problems that do not have efficient exact algorithms <cit.>. Typically, to solve a general LP, one employs variants of the simplex or interior point methods. While these algorithms are efficient in the sequential setting, they often have limited parallelism. LP solving is P-Complete, which implies that a poly-logarithmic depth parallel algorithm is believed unlikely to exist <cit.>. However, many graph problems can be solved via LPs that contain only positive entries, and these class of LPs admit efficient parallelizable solvers if some approximation is allowed. We consider approximation algorithms to the standard feasibility mixed packing and covering LP <cit.>, ∃ x ∈𝐑^n s.t. Px≤1, Cx≥1, x ≥0, where P and C are nonnegative. Vectors 1 and 0 are the all ones and zeros vector, respectively. The algorithms we consider seek a (1+ϵ)-relative approximation: that is, a solution x such that Px≤ (1+ϵ)1 and Cx≥1. Two important special cases are pure packing and pure covering LPs that we describe formally later. There is extensive work on approximating positive LPs in the sequential and parallel setting; we refer the reader to <cit.>. The first efficient parallel approximation algorithms for positive LPs was developed by Luby and Nisan <cit.> which was refined later by work of Young <cit.>. These approaches yield a parallel algorithm with depth that is a polynomial in 1/ϵ and log n (the number of variables), and is independent of the structure of the constraint matrices P and C. It outputs a result that is a (1+ϵ)-approximation, which, for a maximization problem entails a relative error of (1-ϵ) in the objective value, and for a minimization problem, a relative error of (1+ϵ). The polynomial scaling with 1/ϵ has been improved in recent work. Our implementation is based on an accelerated version of a more recent algorithm with an improved depth of Õ(ϵ^-3) for mixed problems and Õ(ϵ^-2) for pure covering or pure packing problems. The only other previously published study  <cit.> of a distributed and parallel LP solver built using this approach implemented and compared runtimes of two earlier algorithms: MPCSolver and Young's algorithm, with Õ(ϵ^-5) and Õ(ϵ^-4) depth, respectively. Further details on theoretical developments in approximately solving positive LPs in parallel are provided in Section <ref>. The algorithm we focus on is from Mahoney et. al. <cit.>, which offers the fastest theoretical performance for pure packing, pure covering, and mixed packing and covering LPs in a parallel setting. We describe these problems and their associated graph problems formally in Section <ref>. This algorithm we call MWU since it utilizes the weights vector from the more general MWU method <cit.>. Despite the algorithm's strong theoretical foundation, little is known about its empirical performance, especially for solving real-world graph problems over large datasets. 
Furthermore, understanding how this algorithm compares to other LP solvers and specialized graph libraries is an open question. In this paper, we create a practical and scalable implementation of a parallel (1+ϵ)-approximate positive LP solver by adapting the MWU algorithm with the current fastest theoretical performance, and we also provide a comprehensive empirical study comparing MWU against exact solvers, specialized libraries, and previous related work. We also test a line search method that finds the largest step size permitted without violating theoretical guarantees. This step-size search reduces the number of overall MWU iterations in practice by multiple orders of magnitude. Our results suggest that MWU can be fast in both theory and practice, with runtimes comparable even to specialized libraries, but with the added benefits of, one, being general-purpose and, two, being more versatile, as it can benefit from high-performance sparse linear algebra libraries. We believe this marks MWU as a viable alternative for solving graph problems approximately on large datasets. To support these claims, we conduct extensive numerical experiments on a supercomputer. Overall, our paper makes the following contributions: * the first shared- and distributed-memory implementation of a general-purpose approximate solver based on Mahoney et al. <cit.> for positive LPs, called MWU * a step size search method for MWU that empirically reduces the iteration count by up to three orders of magnitude without adding significant overhead * an efficient implementation of MWU using an implicit representation of common constraint matrices observed in graph LPs as well as other standard techniques for sparse linear algebra (SLA). We show these optimizations provide good scalability and lead to up to a 5.2x speedup relative to MWU implemented with PETSc (a library for parallel sparse linear algebra). * the first comparison of an approximate positive LP solver against general LP solvers as well as specialized parallel graph algorithms. In particular, MWU finds (1+ϵ)-relative (ϵ = 0.1) solutions up to 3-2800x and 5-1180x faster than CPLEX and Gurobi (which find exact solutions for implicit ILPs and exact fractional solutions for relaxed LPs), respectively, for solving several graph problems on large real-world graphs on the Stampede2 supercomputer using KNL compute nodes. § BACKGROUND ON LINEAR PROGRAM SOLVERS §.§ Positive LP Solvers Fast approximate solvers for positive LPs in the sequential setting have been developed since the early 1990s <cit.>, and there is extensive and continued attention to this line of work. We mostly focus on parallel algorithms and refer the reader to <cit.> for extensive pointers. Luby and Nisan provide the first parallel algorithm for explicit positive LPs, which obtains a (1+ϵ)-approximation in Õ(ϵ^-4) iterations[We write Õ(f(n)) to denote f(n) up to polylogarithmic factors, i.e., Õ(f(n)) = O(f(n)log^O(1)(f(n)))] for pure packing and covering LPs <cit.>. Young clarified and extended this work to solve the more general mixed packing and covering LPs in parallel in Õ(ϵ^-4) iterations <cit.>. The dependence on ϵ then remained unchanged for over 10 years, until the work of Allen-Zhu and Orecchia, who solve pure packing and pure covering LPs in parallel with Õ(ϵ^-3) iterations <cit.>.
They also obtained better dependence on ϵ in the sequential setting <cit.> for pure packing and covering LPs (see also <cit.>). Recently, Mahoney et. al. <cit.> utilized ideas of Young <cit.> on faster near-linear time sequential solver to develop new parallel algorithms for positive LPs. For mixed packing and covering LPs, their algorithm converges in O(log(m_p+m_c)log(n/ϵ)/ϵ^3) iterations and is also work-efficient, i.e., work is near linear to the number of nonzeros in the LP. For pure packing and pure covering LPs, the dependence on ϵ is ϵ^-2 instead. When ϵ is not too small, these algorithms have low depth in the PRAM model. Empirical studies of these fast approximate algorithms have been mainly limited to relatively small problems. Koufogiannakis and Young adapt a sequential mixed packing and covering LP and show it outperforms simplex on randomly generated binary matrices with dimension up to 2222 <cit.>. Allen-Zhu and Orecchia compare their Õ(ϵ^-1)-dependent sequential algorithm <cit.> to the algorithms of Luby and Nisan <cit.> and Awerbuch and Khandekar <cit.> for solving pure packing LPs with a randomly generated matrix of size 60 × 40. Jelic et. al. implement a parallel primal-dual method on GPUs to solve positive LPs, although their constraint matrices are randomly generated binary matrices with dimensions up to 25000 <cit.>. The most closely related work to ours is that of Makari et. al. <cit.>, who implement a gradient descent algorithm to solve generalized matching on large real-world and synthetic graphs. Their implementation, based on the algorithm from <cit.>, has a Õ(1/ϵ^5) number of iterations, and they confine their attention to a single graph problem. In this paper we focus on only obtaining fractional solutions to the LP problems we consider. The work in  <cit.> also has a rounding step to convert the fractional solution to an integral matching. §.§ The MWU Algorithm We now introduce MWU, the algorithm of Mahoney et. al. <cit.> for approximately solving the standard mixed packing and covering LP in parallel. This section does not contain our modifications for improving the empirical performance, which are found in Section  <ref>. First, we start describing how MWU solves the mixed packing and covering feasibility LP  <cit.>. We will discuss how to modify this into an optimization problem later in the section. The feasibility LP is: ∃ x ∈𝐑^n s.t. Px≤1, Cx≥1, x ≥0, where P and C are nonnegative. Vectors 1 and 0 are the all ones and zeros vector, respectively. The algorithms we consider seek a (1+ϵ)-relative approximation: that is, a solution x such that Px≤ (1+ϵ)1 and Cx≥1. MWU (Algorithm <ref>) ensures that both packing and convering constraints, Px≤1 and Cx≥1, are approximately satisfied by approximating max(Px) and min(Cx) with smoothed maximum and minimum functions, smax_η( x) = 1/ηlog( ∑_i=1^n exp(η· x_i) ), smin_η( x) = -1/ηlog( ∑_i=1^n exp(-η· x_i) ), where η>2 is a smoothing parameter. The MWU algorithm and step size search make use of their gradients, ∇smax_η( x) = exp(η· x) /⟨1, exp(η· x) ⟩, ∇smin_η( x) = exp(-η· x) /⟨1, exp(-η· x) ⟩. For more details on these functions, see Chapter 2 of <cit.>. The algorithm initializes the vector x with small values so that each starting packing constraint is at most ϵ (Line <ref>). The smoothing parameter η is set so that both smax_η and smin_η are within an ϵ additive error of max and min, respectively. In each MWU iteration, the algorithm multiplicatively updates x. 
This is done by defining a step or update vector, d, where d_i is a multiple of x_i (Line <ref>), and adding d to x (Line <ref>). Vectors g and h, which are gradients of the smoothed max packing and min covering constraints, respectively, are also utilized to define d (Lines <ref>, <ref>). In particular, d is an approximate solution to the Lagrangian relaxation, ∃ d ∈𝐑^n_≥ 0 s.t. ⟨ w_p, P d⟩ = ⟨P^T w_p, d ⟩≤ 1, ⟨ w_c, C d⟩ = ⟨C^T w_c, d ⟩≥ 1, where w_p = ∇smax_η( Px) and w_c = ∇smin_η( C x). If the positive LP is infeasible, then there exists some MWU iterations where d=0, in which case the algorithm reports the LP is infeasible (Line <ref>) <cit.>. Assuming otherwise, the theoretical analysis guarantees MWU will return an (1+ϵ)-relative solution. Throughout the algorithm, we drop satisfied covering constraints, as these can unnecessarily slow down progress (Line <ref>). Finally, the algorithm returns x when all the covering constraints are satisfied or when there exists no covering constraints. We note that Algorithm <ref> can also solve pure packing or pure covering LPs, which are, respectively, max ⟨1, x ⟩ s.t. Px≤1, x≥0, x∈𝐑^n min ⟨1, x ⟩ s.t. Cx≥1, x≥0, x∈𝐑^n. For example, to solve a pure packing LP, we embed the objective function as the added constraint, 1/M1^T x ≥ 1, where M is the estimate of the maximum value. Then we do binary search over M, using Algorithm <ref> to determine if the resulting mixed packing and covering LP is a feasible. Since there is one covering constraint, then smin_η(Cx) = min(Cx). This exact approximation permits one to scale the step direction (Line <ref>) by a factor of 2 in the theoretical analysis, which improves the number of iterations by a factor of ϵ <cit.>. Also, noting C = 1/M1^T and ∇smin_η(Cx)=1, we have h=1/M1 (Line <ref>), so we do not need to explicitly compute h. Solving a pure covering LP is done via a similar transformation. Solving a mixed covering and packing optimization problem, likewise, involves embedding the constraint that corresponds to the direction of optimization. § GRAPH PROBLEMS AS POSITIVE LPS We now consider several graph problems and LP formulations for them. We first define integer programming (IP) formulations which exactly model the underlying graph problem. We then obtain an LP by relaxing the integrality constraints. Note that for some problems, such as bipartite matching and densest subgraph, the solution to the LP relaxation matches the IP's solution (i.e., the solution is integral), whereas for NP-hard problems dominating set and vertex cover, there is an integrality gap between the solution to the LP relaxation and IP. We refer the reader <cit.> for the role of LP relaxations in the development of approximation algorithms, and also  <cit.> for a performance study on recovering an integral output from a relaxed solution for dominating set as a case study. In this paper, we do not consider rounding for LPs with integrality gaps in both our implementation and in performance comparisons. Let G=(V,E) be an unweighted, undirected graph where V is the set of vertices and E is the set of edges, with n=|V| and m=|E|. For simplicity, we assume G has no self-loops. Note that our formulations can be extended towards weighted graphs as well. For a vertex v ∈ V, let N(v) be the neighbor vertices of v (v is not included in N(v)), and inc(v) be the set of edges incident to v. 
The neighbor relations between the vertices of G can be represented as an adjacency matrix, A∈{0,1}^n × n, which is symmetric and has a nonzero for each edge e ∈ E. The incidence relation between the vertices and the edges can be represented as a vertex-edge incidence matrix, M, where M_u,e = 1 : u ∈ e, e ∈ E 0 : otherwise, M∈{0,1}^n × m. In this work, we consider four graph applications: maximum matching, dominating set, vertex cover, and densest subgraph. Maximum Matching (). A matching is a subset F ⊂ E of edges such that no vertex is incident to more than one edge in F. We can write this optimization problem as the IP, max∑_e ∈ E x_e s.t. ∑_e ∈inc(v) x_e ≤ 1 ∀ v ∈ V x_e ∈{0,1} ∀ e ∈ E. The x_e variables indicate whether edge e is in set F. The constraints are defined over the vertices of the graph such that at most one edge (x_e) in the incident edges of a vertex v (inc(v)) can be selected for matching set F. When the input graph is a bipartite graph, we call this problem maximum bipartite matching (). Given the vertex-edge incidence matrix M of the graph, we can write the LP relaxation for maximum matching as the pure packing LP, max ⟨1, x ⟩ s.t. Mx≤1, x ≥0, x ∈𝐑^m. It is well-known that this LP relaxation has no integrality gap for while for general graphs there is an integrality gap of 2/3; there is an exponential sized exact LP relaxation for general graph matching but we do not consider it here. Dominating Set (). Dominating set is the problem of finding the smallest subset of vertices S ⊆ V such that every vertex in the graph is either in S or is a neighbor of a vertex in S. We can formulate the problem as the IP, min ∑_v ∈ V x_v s.t. x_v + ∑_u ∈ N(v) x_u ≥ 1, ∀ v ∈ V x_v ∈{0,1} ∀ v ∈ V. The variable x_v indicates whether vertex v is in set S or not. The constraints are defined over the vertices such that either vertex v itself or one of its neighbors is in the set S. The LP relaxation is a pure covering LP, min ⟨1, x ⟩ s.t. (I+A) x ≥1, x ≥0, x ∈𝐑^n, where I is the identity matrix. Vertex Cover (). In the vertex cover problem the goal is to find the smallest subset of vertices S ⊆ V such that every edge has one of its endpoints in S (hence S covers all the edges). A simple IP formulation is min ∑_v ∈ V x_v s.t. x_u + x_v ≥ 1 ∀ (u,v) ∈ E x_v ∈{0,1} ∀ v ∈ V. Here, the x_v variables determines whether a vertex v is in set S. These variables are defined over the vertices. There is one constraint per edge. The LP relaxation is a pure covering LP, min ⟨1, x ⟩ s.t. M^𝖳 x≥1, x ≥0, x ∈𝐑^n, where M^T is the transpose of the vertex-edge incidence matrix of the graph. Densest Subgraph (). Densest subgraph finds a subgraph S ⊂ G that maximizes the edge to vertex count ratio, i.e., |E(S)|/|S|. The LP, as formulated in <cit.>, is max ∑_e ∈ E x_e s.t. x_e ≤ y_u, x_e ≤ y_v ∀ e = (u,v) ∈ E ∑_v ∈ V y_v ≤ 1 x_e,y_v ≥ 0 ∀ v ∈ V, ∀ e ∈ E. The variables x_e represent the edges and the variables y_v represent the vertices, which are no longer binary. Since this problem is not a positive LP, we consider its dual <cit.>, min D s.t. z_u,e + z_v,e≥ 1 ∀ e = (u,v) ∈ E ∑_e ∈inc(v) z_v,e≤ D ∀ v ∈ V z_v,e≥ 0 ∀ v ∈ V, e ∈inc(v). While (<ref>) is still not a positive LP, we can convert it to a mixed packing and covering LP by fixing D to be a constant and treating (<ref>) as a feasibility problem instead. We find an approximate minimum value to (<ref>) via binary search for D and solving the feasibility LP for each choice. 
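As a concrete illustration of this reduction, the following C++ sketch drives a hypothetical feasibility oracle mwu_feasible(D, ϵ) standing in for Algorithm <ref> applied to the fixed-D LP; the oracle name and the bracketing choices are illustrative assumptions rather than the exact driver used in our experiments.

```cpp
#include <functional>

// Approximate the smallest D for which the fixed-D mixed packing/covering LP
// (W z >= 1, O z <= D * 1, z >= 0) is feasible. mwu_feasible(D, eps) is a
// placeholder for a call to the MWU feasibility solver (Algorithm 1/2).
double min_feasible_density(const std::function<bool(double, double)>& mwu_feasible,
                            double eps) {
    double lo = 0.0, hi = 1.0;
    while (!mwu_feasible(hi, eps)) { lo = hi; hi *= 2.0; }  // exponential search for an upper bound
    while ((hi - lo) > eps * hi) {                          // binary search on [lo, hi]
        double mid = 0.5 * (lo + hi);
        if (mwu_feasible(mid, eps)) hi = mid; else lo = mid;
    }
    return hi;  // smallest feasible D found, within relative tolerance eps
}
```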
Unlike previous LPs, the variables z_v,e represent a vertex-edge pair instead of vertices or edges, so we require new constraint matrices. Let I be an identity matrix of size m, and let the function interweave take two equally-sized matrices and put the first and second matrices' first columns as the first two columns of the combined matrix, then their second columns as the next pair of columns, and so on. We call the resulting matrix W the interweaved identity matrix, where W_e,2e = W_e,2e+1 = 1, ∀ e ∈ E, W ∈{0,1}^m × 2m. In order to model the vertex-edge pair variables, we can form a matrix called the vertex-edge pairs matrix, which has a column for each pair of a vertex v and an incident edge (u,v). Specifically, we can form the vertex-edge pair matrix, O∈{0,1}^n × 2m, from a graph G as follows: O_u,2e+b = 1 : b=0, e=(u,v) ∈ E; 1 : b=1, e=(v,u) ∈ E; 0 : otherwise. Then the feasibility variant of the dual to densest subgraph is ∃ z ∈ℝ^2m s.t. Wz ≥1, O z ≤ D ·1, z ≥0. Optimal Transport (). Finally, we define the discrete optimal transport (OT) problem <cit.>. Given a non-negative cost matrix C∈ℝ_≥ 0^k ×ℓ, we seek to solve min_X ∑_i=1^k ∑_j=1^ℓ c_ij x_ij p_j over X∈ℝ_≥ 0^k ×ℓ s.t. Xp = q, X^T 1 = 1. In OT, we have two distributions p and q, and we seek to transport the probability mass from p to q while minimizing the cost determined by C. In this paper, we will take for simplicity p and q to be the all-ones vector and k=ℓ. In fact, this problem has a graph interpretation. Given a complete bipartite graph (i.e., a bipartite graph where every vertex in the left partite set is connected to every vertex in the right partite set), the objective is to assign a fractional matching (assign each edge a value from [0,1]) between the two partite sets while minimizing the incurred cost. Here, the cost incurred from each edge is the weight of the edge times the fractional amount of the edge we take, where the weight of the edge is determined by the cost matrix C. In Figure <ref>, we show an example cost matrix for the optimal transport problem and its graph interpretation. §.§ LP Relaxations and Matrix Formulations Besides densest subgraph and OT, all the optimization problems above are integer programs since the variables can only take binary values. Integer programming problems are NP-hard, and certainly MWU is unequipped to solve them. Instead, we focus on solving the LP relaxations, where the constraint x_i ∈{0,1} is replaced by x_i ≥ 0. This permits us to express the optimization problems in matrix notation. Let A be the adjacency matrix for the graph G. Since we assumed the graph to be unweighted and undirected, A is symmetric and all its nonzeros are 1. Let M be the vertex-edge incidence matrix, which is a |V| × |E| matrix where m_v,e = 1 : v ∈ e; 0 : otherwise. Then we can rewrite the maximum matching problem as max ⟨1, x ⟩ s.t. Mx≤1, x ≥0, dominating set as (where diag() zeros all but the diagonal elements) min ⟨1, x ⟩ s.t. (I+A - diag(A)) x ≥1, x ≥0, and vertex cover as min ⟨1, x ⟩ s.t. M^𝖳 x≥1, x ≥0. The structure of densest subgraph is different, so we require additional definitions. Let I be an identity matrix of size |E|. Let the function interweave take two equally-sized matrices and put the first and second matrices' first columns as the first two columns of the combined matrix, then their second columns as the next pair of columns, and so on. The function oddshift takes in a matrix with two nonzeros (e.g. the vertex-edge incidence matrix) in each column.
The function takes the bottom nonzero in each column and moves it into a new column directly to the right of the current column. Then we can write the dual formulation of densest subgraph as min D s.t. interweave(I, I)z ≥1 oddshift(M) z ≤ D ·1 z ≥0, z ∈ℝ^2|E|. To solve this problem, we will search for the smallest D where the above problem is feasible by conducting exponential search to find upper and lower bounds on the minimum D followed by binary search. § HEURISTICS TO IMPROVE CONVERGENCE IN PRACTICE MWU WITH LINE SEARCH We now introduce algorithmic modifications to Algorithm <ref> that improve its performance in practice, which we review in <ref>. We write it in Algorithm <ref> and mark the modifications in red. Similar to Algorithm <ref>, we can adapt this algorithm to solve pure packing (e.g., maximum matching) and pure covering (e.g., dominating set, vertex cover) LPs more efficiently, as described in Section <ref>. We store the packing and covering constraints y = Px and z = Cx as well as d^(y) = Pd and d^(z) = Cd (Line <ref> and <ref>) to minimize the number of sparse matrix-vector products, or SpMVs. The step direction d is unchanged (Line <ref>). While the algorithm drops satisfied constraints (Line <ref>), in practice we keep satisfied constraints since this simplifies the implementation and we did not find it impacts the convergence on the problems we tested. The sub-routine (Line <ref>) takes the step vector d and constraint vectors, and returns a step size α > 0. We call this modification step size search and design algorithms for it in the next subsection. When α < 1, we report that finding a solution is infeasible, because otherwise a step size of α = 1 is always possible due to the theoretical analysis of Mahoney et. al. <cit.>. Therefore, we call a step size α=1 found without step size search the standard step size. Assuming α≥ 1, we scale the step direction d by α and add it to x (Line <ref>). Afterwards, we update y = Px and z= Cx without SpMVs (Line <ref>). Note that α may be large enough so that Px = y + α·d^(z)≥ 1, in which case we terminate MWU. §.§ Line Search as a Constrained Optimization Problem In this section, we consider algorithms for finding a step size for . When selecting α, we want it to be sufficiently large to accelerate MWU convergence while ensuring that we recover a feasible solution to (<ref>). First, we consider the case of the mixed packing covering LP. One of the qualities of Mahoney et. al's algorithm is that the algorithm reaches a (1+ϵ)-approximation when the difference in a potential function f(x) = 1/η ( smax_η(Px) - smin_η(Cx) ) becomes sufficiently small <cit.>. Each step that their algorithm takes is non-increasing on f(x). We can show that in order for the potential function to be non-increasing, f(x^(t+1)) - f(x^(t)) = Ψ(α) - Φ(α) ≤ 0 where, Φ(α) = smin_η(C( x + α· d)) - smin_η(Cx) Ψ(α) = smax_η(P( x + α· d)) - smax_η(Px). This gives us an equivalent invariant f(α) = Φ(α) / Ψ(α) ≥ 1. Therefore, if we find a step size α for which this invariant holds, then for this α we can still say f(x) is non-increasing, and furthermore, if we can reach a point where Px≤ (1+ϵ)1 and Cx≥1 then we have converged to a feasible solution. 
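To make this invariant concrete, the following self-contained C++ sketch evaluates it from the cached products y = Px, z = Cx, d^(y) = Pd, d^(z) = Cd (so no SpMV is needed inside the search) and then looks for the largest admissible step size by doubling followed by bisection, as formalized in the next subsection. The routine names and stopping rule are illustrative simplifications, not the exact code used in our implementation.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
#include <cstddef>

// Numerically stable smoothed max/min:
// smax_eta(v) = (1/eta) log sum_i exp(eta v_i), smin_eta(v) = -(1/eta) log sum_i exp(-eta v_i).
static double smax(const std::vector<double>& v, double eta) {
    double m = *std::max_element(v.begin(), v.end()), s = 0.0;
    for (double vi : v) s += std::exp(eta * (vi - m));
    return m + std::log(s) / eta;
}
static double smin(const std::vector<double>& v, double eta) {
    double m = *std::min_element(v.begin(), v.end()), s = 0.0;
    for (double vi : v) s += std::exp(-eta * (vi - m));
    return m - std::log(s) / eta;
}

// Cached matrix-vector products: y = P x, dy = P d, z = C x, dz = C d.
struct LineSearchData {
    std::vector<double> y, dy, z, dz;
    double eta;
};

// Test f(alpha) = Phi(alpha)/Psi(alpha) >= 1 without any SpMV.
static bool invariant_ok(const LineSearchData& L, double alpha) {
    std::vector<double> yp(L.y.size()), zp(L.z.size());
    for (std::size_t i = 0; i < yp.size(); ++i) yp[i] = L.y[i] + alpha * L.dy[i];
    for (std::size_t i = 0; i < zp.size(); ++i) zp[i] = L.z[i] + alpha * L.dz[i];
    double Psi = smax(yp, L.eta) - smax(L.y, L.eta);  // increase of the packing potential
    double Phi = smin(zp, L.eta) - smin(L.z, L.eta);  // increase of the covering potential
    return Phi >= Psi;  // avoids dividing by Psi, which may be zero
}

// Largest admissible step size up to a relative tolerance eps.
// (In the full algorithm the search also stops early once all covering
// constraints would be satisfied.)
double find_step_size(const LineSearchData& L, double eps) {
    if (!invariant_ok(L, 1.0)) return 0.0;  // alpha = 1 always passes if the LP is feasible
    double lo = 1.0, hi = 2.0;
    while (invariant_ok(L, hi)) { lo = hi; hi *= 2.0; }  // exponential search
    while ((hi - lo) / lo > eps) {                        // bisection on [lo, hi)
        double mid = 0.5 * (lo + hi);
        if (invariant_ok(L, mid)) lo = mid; else hi = mid;
    }
    return lo;
}
```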
Our method, therefore, is to find the largest step size α>0 such that the "bang-for-buck" value is at least one, i.e., f(α) = Φ(α) / Ψ(α) ≥ 1. For pure packing and pure covering problems we instead have these invariants, respectively: ⟨1, α d ⟩ / Ψ(α) ≥ 1 and ⟨1, α d ⟩ / Φ(α) ≤ 1. We now show that MWU with line search maintains the same theoretical properties as MWU with the standard step size <cit.>. Recall x ∈𝐑^n and m is the number of rows in the matrices P and C. MWU with line search (Algorithm <ref>) either returns a (1+ϵ)-relative approximate solution, i.e., an x ≥0 such that Px≤ (1+ϵ) 1 and Cx≥1, or correctly reports the LP is infeasible. The number of iterations is at most Õ(ϵ^-3), where Õ hides polylogarithmic dependence on n, m, and ϵ. The proof is similar to the one shown in <cit.>, which implicitly sets the step size to α = 1. There will be two main differences in the convergence proof, which we highlight here. First, the proof of correctness in <cit.> shows the potential function, defined as smax_η(Px) - smin_η(Cx), is monotonically decreasing by taking a first-order approximation of the smooth max and min. On the other hand, our bang-for-buck invariant (<ref>) explicitly ensures this monotonicity property. Second, the argument in <cit.> upper bounds the number of MWU iterations by lower bounding the values in the step direction vector d. Since line search only increases d (because α·d≥d when α≥ 1), line search can only decrease the number of MWU iterations. While line search can decrease the number of iterations (in fact, quite significantly in our experiments), finding a step size increases the work per iteration. In the following lemma, we leverage the monotonicity of f(α) to design efficient line search algorithms. f is monotonically decreasing for α∈𝐑_+. We show that as α increases, Ψ(α)/α is increasing while Φ(α)/α is decreasing, hence f(α) = (Φ(α)/α)/(Ψ(α)/α) is decreasing. Note that Ψ is convex since smax_η is convex, and Φ is concave since smin_η is concave. Since Ψ is convex, Ψ(α) ≤Ψ(0) + αΨ'(α) = αΨ'(α). Hence, we can show that Ψ(α)/α is increasing, since (Ψ(α)/α)' = 1/α(Ψ'(α) - Ψ(α)/α) ≥ 0. Analogously, since Φ is concave, the inequalities above are reversed, and so Φ(α)/α must be decreasing in α. §.§ Implementing Line Search To approximate the maximum step size α^* satisfying (<ref>), we first perform exponential search to find an integer p where f(2^p) ≥ 1 and f(2^(p+1)) < 1. By Proposition <ref>, α^* ∈ [2^p, 2^(p+1)). Next, we run binary search starting with a lower and upper bound of l=2^p and u=2^(p+1), and update the lower and upper bounds so that l (resp. u) is the largest (smallest) value such that f(l) ≥ 1 (f(u) < 1). Once we find an ϵ-relative step size, i.e., when (u-l)/l ≤ ϵ, we return l. We formalize the aforementioned procedure in Algorithm <ref>. When f(α) ≥ 1 and max(C( x + α·d)) ≥ 1 (Line <ref>), this means the step size can lead MWU to completion, so we immediately return α. Moreover, binary search makes use of y = Px, z = Cx, d^(y) = Pd, and d^(z) = Cd, where x is our current solution and d is the computed MWU update direction, to avoid additional SpMVs. We can derive another line search method by using Newton's method, which has the update α_k+1 = α_k - g(α_k)/g'(α_k), where g(α) = f(α)-1. Because Newton's method converges only when its iterates are in a neighborhood of the solution, we require estimates of α^* to ensure convergence.
We do so via a warm start for Newton's search, where we set our initial α_0 to the previous optimal step size, if available, or use exponential search. The reason for the former strategy is that we observed in our tests that the optimal step sizes of two consecutive MWU iterations are relatively close. Finally, we note that once Newton's method converges to some solution, it may not strictly satisfy (<ref>). Thus, we multiplicatively decrease the solution by a factor of (1-ϵ)^p for some integer p (p is typically small) until (<ref>) is satisfied. § SOFTWARE OPTIMIZATIONS AND PARALLELIZATION We now describe the details of our implementation and parallelization of linear algebra operations within MWU. To efficiently perform sparse matrix-vector products with the matrices introduced in Section <ref>, such as the vertex-edge incidence matrix, we leverage implicit representations derived from a standard sparse vertex-vertex adjacency matrix data structure. We accelerate vector operations with loop fusion and vectorization. §.§ Shared-Memory Optimizations We design implicit SpMVs for matrices that arise in graph-based positive LPs, such as vertex-edge incidence matrices. We adapt previous fusion techniques for the needs of the MWU framework <cit.>. Forming vertex-edge incidence and oddshifted-incidence matrices Section <ref> gives details of the specialized constraint matrices required for the LPs of graph problems. Among these matrices, vertex-edge incidence and oddshifted-incidence matrices can be formulated using the adjacency matrix A of a given graph. Equation (<ref>) shows the formulation of a vertex-edge incidence matrix (M) from an adjacency matrix A. In this case, we can utilize the COO representation used by the CSB format to form this matrix implicitly. To do so, we use the position of the edge (nonzero) in A in CSB format as the column id of M. Therefore, if we have a nonzero (v,u) at position e in A, M will have nonzeros (v,e) and (u,e). We can follow a similar formulation for oddshifted-incidence matrices (O), shown in (<ref>). In this case, for each nonzero (r,c) at position e in the adjacency matrix, the oddshifted-incidence matrix will have two nonzeros (r, 2e) and (c, 2e+1). Figure <ref> shows the vertex-edge incidence matrix M and oddshifted-incidence matrix O of an adjacency matrix A with 4 rows and columns and 5 nonzeros. Each nonzero of matrix A is shown with a different color, along with the corresponding nonzeros created in the M and O matrices. §.§.§ Choice of Matrix Format To efficiently traverse the non-zeros in the adjacency matrix during SpMVs, we use the Compressed Sparse Blocks (CSB) format <cit.>, which can achieve good cache locality for SpMVs with both the matrix and its transpose. CSB divides the matrix into two-dimensional r× k tiles. Each tile is represented as a list of tuples, where each tuple stores the non-zeros in column-major order in coordinate (COO) format. The group of tiles that belongs to r consecutive rows is called a row-block, while the group of tiles that belongs to k consecutive columns is called a column-block. In our implementation, we store the tiles in row-major order. Similar to <cit.>, we parallelize y=Ax and y=A^T x over the row-blocks and column-blocks, respectively.
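For illustration, a simplified C++ sketch of this row-block parallelization of y = Ax is shown below (tiles held as COO lists with global indices); it omits the transpose case and the cache-blocking details of the actual CSB code, and the data-structure names are ours for exposition.

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// One tile: the nonzeros of an r x k block of A in COO form (global indices
// are kept for simplicity; the real CSB code stores tile-local indices).
struct Tile {
    std::vector<uint32_t> row, col;
    std::vector<double>   val;
};
// A row-block is the group of tiles covering r consecutive rows.
struct RowBlock { std::vector<Tile> tiles; };

// y = A x parallelized over row-blocks: each thread writes only the y entries
// of its own row-blocks, so no synchronization on y is required.
// y is assumed to be zero-initialized.
void spmv_csb(const std::vector<RowBlock>& rowblocks,
              const std::vector<double>& x, std::vector<double>& y) {
    #pragma omp parallel for schedule(dynamic)
    for (long b = 0; b < (long)rowblocks.size(); ++b)
        for (const Tile& t : rowblocks[b].tiles)
            for (std::size_t i = 0; i < t.val.size(); ++i)
                y[t.row[i]] += t.val[i] * x[t.col[i]];
}
```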
Provided the tile size (r× k) is selected carefully, the block of the input vector x processed by a tile and the block of the output vector y corresponding to a row-block (updated by a single thread) are contained in private L1 or L2 caches. Figure <ref> illustrates the parallelization of SpMV using the CSB format. §.§.§ Implicit Representations In each iteration of MWU, we perform one SpMV with the constraint matrix and one with its transpose. As seen in Section <ref>, the constraint matrices of many graph-based LPs are the vertex-edge incidence matrix of the graph. We notice that a nonzero (u,e) of an incidence matrix can be described by a vertex and a vertex pair (u,(v,w)), where the pair (v,w) represents an edge in the adjacency matrix. In addition, in the incidence matrix, the value is 1 if u=v or u=w, and 0 otherwise. Therefore, we can store the nonzero data of an incidence matrix implicitly as an adjacency matrix in memory. This reduces the memory cost of the constraint matrix by about half and therefore also reduces the number of accesses to the memory subsystem, particularly the cache. Having an implicit representation means that linear algebra operations on these matrices can also be expressed implicitly by formulating the computations on the adjacency matrix A. We emulate SpMVs for M and O by storing the edge, row, and column index for each non-zero in A, which we denote by e, r, and c, respectively. For example, we can compute y=M x by accumulating element x_e to y_r and y_c for each (r,c,e) in A, and likewise for y=O x, then evaluating y = y_r + y_c. To parallelize this SpMV while avoiding race conditions, we first traverse in row major order while reading the row indices of A, then in column major order while reading the column indices. We compute y = M^T x by accumulating elements x_r and x_c to y_e for all (r,c,e) in A, and likewise for y = O^T x. Parallelizing these SpMVs is straightforward, since we accumulate to any given element of y exactly twice and once, respectively. SpMV with vertex-edge incidence matrices (M). Equation (<ref>) shows the formulation of a vertex-edge incidence matrix M from an adjacency matrix A. In this case, we can utilize the COO representation used by the CSB format to form this matrix implicitly. To do so, we use the position of the edge (nonzero) of A in CSB format as the column id of M. Therefore, if we have a nonzero (r,c) at position e in A, M will have nonzeros (r,e) and (c,e). The product M× x = y can be performed in two steps, y = y_r + y_c, such that y_r[r] = x[e] and y_c[c] = x[e] ∀ (r,c) ∈ A. The transposed SpMV can also be formulated as follows: M^T × x = y where y[e] = x[r] + x[c], ∀ (r,c) ∈ A. SpMV over the vertex-edge incidence matrix and its transpose for undirected graphs can therefore be computed from the adjacency matrix. Figures <ref> and <ref> show the code sequences emulating the behavior of M×x and M^T×x for incidence matrices using an adjacency matrix of an undirected graph in COO format. Note that the indices of the elements of the input vector of the SpMV (or of the output vector in M^T×x) correspond to the index of the edge in the nonzero list, i.e., the column id in the incidence matrix is the index of the corresponding nonzero in the adjacency matrix. For M× x (Figure <ref>), we iterate over all the nonzeros of the adjacency matrix. Since the column ids of the incidence matrix are the same as the indices of the nonzeros (e) in the adjacency matrix (in CSB/COO format), we can easily locate the elements of x.
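The following C++ fragment sketches these two emulated products. It is a simplified illustration under stated assumptions: the Edge structure and function names are placeholders, the forward product is written as a single serial loop rather than the two-phase row-major/column-major traversal used to avoid synchronization, and y is assumed to be pre-sized (and zero-initialized for the accumulating product).

#include <cstdint>
#include <vector>

// One nonzero of the adjacency matrix A: endpoints (r, c) and its position e
// in the nonzero list, which serves as the column id of the incidence matrix.
struct Edge { uint32_t r, c, e; };

// y = M x: for every edge e = (r, c), x[e] contributes to y[r] and y[c].
void incidence_spmv(const std::vector<Edge>& A,
                    const std::vector<double>& x, std::vector<double>& y) {
    for (const Edge& nz : A) {
        y[nz.r] += x[nz.e];
        y[nz.c] += x[nz.e];
    }
}

// y = M^T x: entry e of the output is x[r] + x[c] for edge e = (r, c).
// Each y[e] is written exactly once, so the loop parallelizes trivially.
void incidence_spmv_t(const std::vector<Edge>& A,
                      const std::vector<double>& x, std::vector<double>& y) {
    #pragma omp parallel for
    for (long i = 0; i < (long)A.size(); ++i)
        y[A[i].e] = x[A[i].r] + x[A[i].c];
}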
The incidence matrix has two nonzeros for every nonzero in the adjacency matrix: (r, e) and (c, e). Therefore, both y[r] and y[c] are updated. For the transposed SpMV, the r^th and c^th elements are read from the input vector x, and y[e] is updated. In a parallel execution of M× x, we need to pay attention to synchronization (lines 3 and 4 in Figure <ref>), because these updates can cause race conditions. We solve this problem by iterating over the row and column ids of the matrix in separate phases, i.e., traversing tiles of the CSB representation first in row major order while reading the row ids of the adjacency matrix, then in column major order while reading the column ids. Such an execution model gives us good locality for accesses to the input vector thanks to the cache-sized tiles, while avoiding synchronization. For M×x, updates to the output vector y are contained in private caches because the tile size r× k is selected accordingly. On the other hand, the column indices of the vertex-edge incidence matrix are the indices of the nonzeros of the adjacency matrix, enabling spatial locality in the input vector. Parallelizing M^T× x is straightforward, since the loop in Figure <ref> has independent iterations. This approach is also amenable to vectorization. Moreover, it provides locality for accesses to the input vector x, because the tiles of the adjacency matrix are stored in row-major order, and the nonzeros in a single tile are sorted by their column ids. SpMV with vertex-edge pair matrices (O). Similar to vertex-edge incidence matrices, the vertex-edge pair matrices (O) shown in (<ref>) can be implicitly represented from an adjacency matrix. For each nonzero (r,c) at position e in the adjacency matrix, the vertex-edge pair matrix will have two nonzeros (r, 2e) and (c, 2e+1). Similar to incidence matrices, O× x = y can be performed in two steps, y = y_r + y_c, such that y_r[r] = x[2e] and y_c[c] = x[2e+1] ∀ (r,c) ∈ A. The transposed SpMV can also be formulated as follows: O^T × x = y where y[2e] = x[r] and y[2e+1] = x[c], ∀ (r,c) ∈ A. Figures <ref> and <ref> show the code sequence for emulating SpMV of a vertex-edge pair matrix using an adjacency matrix. For O× x (Figure <ref>), we again iterate over all the nonzeros of the adjacency matrix. For each nonzero of the adjacency matrix, we implicitly process two nonzeros (r,2e) and (c,2e+1), reading elements x[2e] and x[2e+1] from the input vector and updating elements y[r] and y[c] of the output vector. For its transpose, we read elements x[r] and x[c] from x and update y[2e] and y[2e+1] of the output vector. For parallelism, we follow the same approach as for vertex-edge incidence matrices. SpMV with interweaved identity (W). An interweaved identity matrix W is simply an identity matrix where each column is replicated. These matrices do not need to be formed explicitly, as can be seen in Equation (<ref>), nor do they require an adjacency matrix: W× x = y is simply y[e] = x[2e] + x[2e+1] ∀ e, and W^T× x = y is y[2e] = y[2e+1] = x[e] ∀ e. Figures <ref> and <ref> show the SpMV implementations for an interweaved identity matrix derived from an E× E identity matrix and its transpose. For W×x, each element of the output vector y[e] is simply the sum of two consecutive elements of the input vector, x[2e] and x[2e+1]. In contrast, for W^T×x, elements 2e and 2e+1 of the output vector y are calculated using x[e].
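A corresponding sketch for the vertex-edge pair matrix O and the interweaved identity W follows. As above, the Edge structure is an illustrative assumption, the loops are written serially for clarity, and the output vectors are assumed to be pre-sized (and zero-initialized where accumulation is used).

#include <cstddef>
#include <cstdint>
#include <vector>

struct Edge { uint32_t r, c, e; };  // adjacency nonzero (r, c) at position e

// y = O x: edge e contributes x[2e] to y[r] and x[2e+1] to y[c].
void pair_spmv(const std::vector<Edge>& A,
               const std::vector<double>& x, std::vector<double>& y) {
    for (const Edge& nz : A) {
        y[nz.r] += x[2 * nz.e];
        y[nz.c] += x[2 * nz.e + 1];
    }
}

// y = O^T x: y[2e] = x[r] and y[2e+1] = x[c] for edge e = (r, c).
void pair_spmv_t(const std::vector<Edge>& A,
                 const std::vector<double>& x, std::vector<double>& y) {
    for (const Edge& nz : A) {
        y[2 * nz.e]     = x[nz.r];
        y[2 * nz.e + 1] = x[nz.c];
    }
}

// y = W x (x has 2E entries, y has E) and y = W^T x (x has E entries, y has 2E);
// W itself is never materialized.
void interweaved_spmv(const std::vector<double>& x, std::vector<double>& y) {
    for (std::size_t e = 0; e < y.size(); ++e) y[e] = x[2 * e] + x[2 * e + 1];
}
void interweaved_spmv_t(const std::vector<double>& x, std::vector<double>& y) {
    for (std::size_t e = 0; e < x.size(); ++e) { y[2 * e] = x[e]; y[2 * e + 1] = x[e]; }
}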
§.§.§ Loop Fusion and Vectorization Opportunities In each iteration of the MWU algorithm, we do several vector operations and our augmentation to step size search adds many vector operations, too. For some problems, such as vertex-cover or densest subgraph, the vectors in these operations have size |E|, which means they can be as costly as a single SpMV. Since these vector operations loop to apply simple arithmetic to each element in the vector, combining multiple vector operations in one pass via loop fusions can accelerate these methods. We identify two operations for fusion: (1) the gradient calculations using smax and smin (Lines <ref> and <ref>) and (2) the calculation of the new step direction  (<ref>). In both cases, loop fusion can reduce memory accesses and facilitate automatic vectorization. An example fusion optimization is shown in Figure <ref> for the smax function. Figure <ref> shows the implementation using BLAS vector operations while Figure <ref> shows the fusion opportunity. In this example, the scale, exponential, and sum operations can be fused to form a single loop, which reduces reads and writes significantly while eliminating temporary vectors (x_tmp). We also observed that the compiler is able to vectorize these fused loops successfully and replace calls for expensive exponential functions with efficient vectorized implementations. In addition to the gradient and new step direction calculations, we identify loop fusion opportunities for several other operations, such as fusing axpy and norm calculations, and exploit them. In addition to loop fusion, we tune all vector operations to exploit compiler auto-vectorization. §.§ Distributed Parallelization Distributed-memory parallelization of vector operations and explicit sparse matrix vector products in MWU can be done with standard techniques. Therefore, in this section, we will focus on describing and analyzing the benefits of the implicit representation for distributed-memory communication. We use the same implicit representation described in Section <ref>. We leverage a 2D matrix distribution of the adjacency matrix to perform implicit SpMVs with the incidence matrix. A 2D data layout is communication-efficient for matrix vector products since each processor computes on only n/√(p) entries and contributes to n/√(p) outputs (for an n× n matrix on a √(p)×√(p) processor grid). They are commonly employed for parallel processing of adjacency matrices <cit.>. With a 2D layout of the adjacency matrix, we perform vertex-edge incidence products with twice the communication cost of an adjacency matrix product. To do so, we store vector information corresponding to edges in the same processor layout as the adjacency matrix A. This means that for an edge (u,v) in A, the machine owning the edge would store vector information for indices corresponding to u and to v. For simplicity, we assume a square processor grid. With this approach, the product with the vertex-edge incidence matrix, y = M x, requires only a reduction of contributions to y along rows and columns of the processor grid. While for the product y = M^T x, only a broadcast of entries of x along rows and columns of the processor grid is needed. In both cases, each processor sends or receives a subvector of size O(√(n)/p). § EXPERIMENTAL SETUP §.§ System Setup We use Intel Knights Landing (KNL) nodes on the Stampede2 supercomputer as our testbed. Each KNL node has 68 1.4 GHz cores. Each core has a 32 KB private L1 cache, and 2 neighboring cores share a 1 MB L2 cache. 
KNL processors also support AVX2 and AVX-512 vector instructions. Each Stampede2 node has 112 GB of memory capacity with 96 GB DRAM and 16 GB MCDRAM used in cache mode. §.§ Implementations MWU Implementations. We implement two different versions of the MWU framework presented in Algorithm <ref>: (1) , and (2) . relies on an efficient parallel BLAS library PETSc <cit.> while is our hand-optimized implementation using optimizations discussed in Section <ref>. One modification in our implementation of Algorithm <ref> is we do not drop satisfied constraints (i.e., we skip Line <ref>). This simplifies the implementation, and we did not find this affects convergence to the optimal solution. PETSc <cit.> is an efficient distributed sparse BLAS library with a Python interface (). On Stampede2, we use PETSc with MKL version 19.1.1. We use C++ for our optimized implementation and OpenMP for parallelism. We compile our code with Intel compiler version 19.1.1 and enable -O3, and -mAVX512 compiler flags. We run the implementation by binding threads to physical cores using numactl –physcpubind. For , we find that creating N processes each with 1 thread gives the best performance. MWU has two parameters: ϵ and . ϵ controls the accuracy of the solution, while controls the maximum number of MWU iterations, where an LP requiring more than MWU iterations is deemed infeasible. We set ϵ=0.1 and =5000. The choice of is ad-hoc and based on observations that MWU finds (1 + ϵ)-relative solutions in at most 1000 iterations across most LP problems and inputs. To verify correctness, we compare the solution from MWU with an exact solution, if available, and report instances where the relative error is greater than ϵ. We implement all applications discussed in Section <ref> with MWU. General LP Solvers. We compare our optimized MWU implementation to general LP solvers. We use IBM  <cit.> and  <cit.>, both in multi-threaded settings. If the problem is an ILP, we do not round the fractional solution. For , we set the run mode to opportunistic to achieve the fastest (but non-deterministic) runtime. For , we use the concurrent optimization setting, which concurrently runs primal simplex, dual simplex, and the barrier method. We report the fastest runtime out of these three methods. When the barrier method finishes first, we report its runtime before crossover (unless otherwise noted) for a more fair comparison to MWU, which outputs fractional solutions. We implement all applications discussed in Section <ref> with both and Gurobi. Finally, we limit the solve time for both and to 4 hours. All other parameters were set to defaults. Specialized Algorithms. We consider optimized custom implementations as baselines for the two implicitly integral LP problems: bipartite matching and densest subgraph. For the former, we use  <cit.>, which employs the serial Karp-Singer greedy initialization step followed by a specialized breadth-first searches to find augmenting paths. For the latter problem, we used the Graph Based Benchmark Suite's <cit.> () approximate densest subgraph algorithm, which implements Charikar's greedy 2-approximation algorithm <cit.>. Both and are implemented in C++ with OpenMP. We compile using OpenMP and the Intel compiler (version 19.1.1) with the -O2 flag. We compiled with the g++ compiler version 9.1.0. §.§ Input Graphs We select a variety of real-world and synthetic undirected graphs from the SuiteSparse Matrix Collection <cit.> and list them in Table <ref>. 
Our real-world graphs come from diverse domains, such as a road, social, and user-product network. We also use two sets of synthetic graphs, the first set being random geometric graphs (rgg) which have a planar-like structure, and the second set being Kronecker graphs (kron) from Graph500 which show a strong community structure. Note that none of the graphs we selected are bipartite, which is required in . To obtain bipartite graphs, we read the input adjacency matrix as a biadjacency matrix, meaning that the rows and columns of the matrix correspond to the left and right sets of vertices, respectively, where edges can only go between between vertices in different sets. § EXPERIMENTAL RESULTS In this section, first, we compare our implementation to state-of-the-art software including general LP solvers, and , to specialized parallel implementations for particular graph problems, and to the parallel implementation of another multiplicative weights update algorithm from Makari et. al  <cit.>. Then, we evaluate the effectiveness of our algorithmic improvements and software optimizations. We start by finding how the incorporation of a step size search reduces the number of MWU iterations. We then test the performance improvements from our software optimizations and its scalability by comparing performance between and . §.§ Comparison of MWU to Other Algorithms We now compare the implementation of MWU with Newton's method and all the software implementation optimizations to other state-of-the-art optimization libraries and custom implementations (for and for ). All experiments are run with 64 threads on a single KNL node. Table <ref> shows the execution times to find (1+ϵ)-relative solutions for four positive LPs on various graphs where ϵ=0.1. A “-” in a cell means that the input graph was either too large to be processed by the library, or the runtime exceeded 4 hours. Comparison with Exact LP Solvers. For all LP solvers, we do not do rounding as post-processing. Therefore, for the exact solvers, we have integral solutions to graphs problems that have implicitly integral LPs, and exact fractional solutions for the relaxed LPs with integrality gaps. For approximate solvers, we are not guaranteed an integral solution on integral LPs. Note that and return exact solutions for the target LPs, while MWU finds an (1+ϵ)-relative solution with a target value of ϵ=0.1. We find that our implementation is able to find an ϵ=0.1 solution in all cases except problem with rgg-20. However, even for this case, our error rate is 0.104. Our results show consistently outperforms and libraries. For , , , and graph LPs, is up to 2548x (rgg-21), 1482x (rgg-19), 43x (kron-19), and 2860x (kron-17) faster than , respectively. Although is generally faster than , outperforms by up to 1070x (rgg-21), 55x (rgg-19), 5x (rgg-21), and 816x (rgg-20) for , , , and graph LPs, respectively, before crossover occurs. When comparing the time when after crossover or one of the simplex methods from terminates (whichever comes first), the relative speedups are 1462x (rgg-21), 3510x (rgg-19), 10x (usroads), and 878x (rgg-20) for , , , and graph LPs, respectively. In addition to significant speedups, we also observe that MWU is capable of running much larger problems. For instance, both and can only solve kron-21 for but not for , the latter which contains three times more nonzeros than the former. Moreover, both LP solvers fail to solve any of the problems with the largest graphs, hollywood and orkut. 
Comparison with Custom Implementations. We compare performance to for and to for . The MS-BFS algorithm returns an exact solution. returns an approximate solution with ϵ=0.1. The MS-BFS algorithm  <cit.> initializes with a serial Karp-Singer greedy step and finds augmenting paths in parallel using specialized BFS. The performance of heavily depends on the graph structure. In general, we observed the can outperform for graphs with planar structures by 1.8-22.4x for the rgg graphs and usroads. On the other hand, for graphs which contain a strong community-structure or vertices with high degrees, outperforms MWU. For example, amongst the kron graphs, can be up to 450x faster than . For these problems, general spends much less time on the grafting process than with planar-structured graphs, which we often find to be the dominating cost of . For , implements Charikar's greedy 2-approximation algorithm for densest subgraph <cit.>, but the relative error is usually much better in practice (but worse than ϵ=0.1). Again, we run with ϵ=0.1. We observe that always outperforms MWU, achieving a maximum speedup of 63.2x and minimum speedup of 4x. Comparison with Previous Work. We compare MWU (Algorithm <ref>) and an implementation of a gradient descent algorithm with adaptive error <cit.>, which uses a less theoretically efficient multiplicative weights update algorithm but is the only other distributed study of multiplicative weights update methods on graph problems. Their paper compared their algorithm, called MPCSolver, against their implementation of Young's parallel algorithm for feasibility generalized matching, which is a mixed packing and covering LP, for an (1+ϵ)-relative solution (ϵ=0.05) and found that the implementation of MPCSolver outperformed Young's algorithm. We provide a detailed description of the problem, datasets, and gradient descent algorithm in Appendix <ref>. We run the same experiment as them with , using the same datasets as well: the Netflix and KDD datasets <cit.>. Because both algorithms solve the same LP with a multiplicative weight update approach, there are only minor differences in vector operations between the two algorithms. Consequently, for this section, we compare iteration counts rather than time between the two algorithms. The two proposed algorithms are compared in Figure <ref>. For MWU, we consider both the standard step size and the Newton's method for step size search. The gradient descent data is manually extracted from <cit.> using WebPlotDigiter <cit.>. The plot shows both MWU with Newton's method and gradient descent with adaptive error find a (1+ϵ)-relative solution in less than 2000 iterations, whereas MWU with standard step size converges much more slowly. Moreover, MWU with Newton's method incurs 10 × and 41 × fewer iterations than gradient descent for Netflix and KDD, respectively. These results highlight the effectiveness of using a step size strategy, such as Newton's method, over the standard step size. Furthermore, MWU with Newton's method converges more rapidly than gradient descent with an adaptive error. However, since both methods use heuristics to accelerate the method, testing additional positive LPs and datasets would be needed for a comprehensive understanding of the trade-offs between MWU and gradient descent. The heuristic that MPCSolver uses prematurely stops the algorithm once it detects that the per-iteration decrease in constraint violation falls below a threshold. 
§.§ Parallel Scalability of MWU We now analyze the strong scaling behavior of and . Figure <ref> displays the speedup with respect to single-threaded execution of the implementation. When executing on a single node, all LP problem types are run along a range of thread counts from single threaded to 68 threads, which is the maximum number of hardware threads on one machine. Here, is able to achieve speedup over 16x with 68 threads in 90% of experiments and over 32x in 50% of experiments. Overall, the implementation achieves speedups of 13-55x on 68 threads compared to its single threaded runtimes. The largest differences between and are observed on graph applications where we use specialized matrices such as , , and problems. High parallel efficiency in is achieved due to load balancing and high locality in matrix-vector multiplications of transposed specialized matrices and vector operations. On the other hand, we see that the can only achieve 2-3x speedup compared to for dominating set LP () application. Note that, for , can only make use of format selection and memory access minimization optimizations for SpMV operations. For multi-node results, we execute and with 64 MPI processes per node and 1 thread per process (this was the most performant configuration for on a single machine). We only run experiments where the total number of processes is square, as that is a requirement for our implicit representation. We run all algorithms with Newton's optimization for step size search and limit Newton's methods to 5000 iterations. On distributed memory, except for vertex cover on rgg-24, runs faster than at scale. For the distributed problem, we also observe almost linear scaling for all graphs except rgg-24 in . A matrix-vector product on the incidence matrix of banded matrices, like rgg-24, corresponds to a 1D problem, so a 1D parallelization (e.g., row-wise distribution of the matrix) is more efficient than a 2D distribution, which is the layout we use. For , we observe good scaling on , which we expect as the LP for this problem does not use implicit representation and is a 1D problem. We observe poor scaling in all other problems on all graphs except rgg-24. For rgg-24, achieves good performance due to its internal representation <cit.>, which, we believe communicates only the vector entries needed by each processor based on the sparsity pattern of rows assigned to it. However, performs extremely poorly on when we use multiple processors and does not complete in under 2 hours. In conclusion, for general graphs, the implicit 2D representation scales well compared to a explicit 1D representation. §.§ Improvements from Step Size Strategy We first evaluate the effectiveness of step size search (Section <ref>). We run using 64 MPI proccesses, each with 1 thread and list the results for rgg-18 in Table <ref>. We choose this graph since we have exact solutions for all five graph problems, and the runtime with standard step size is not too large. The speedups for other graphs are within an order of magnitude of the ones listed here. The results verify that a step size search strategy significantly reduces the number of MWU iterations compared to the standard step size prescribed in theoretical algorithms. Since an MWU iteration tends to be more expensive than a search step iteration (due to the SpMV), these results suggest that finding accurate step sizes, at the expense of a higher search cost, reduces the overall run time. 
The performance difference between binary search or Newton's method is relatively small. While Newton's method on average requires fewer step size search iterations than binary search, it has more MWU iterations than binary search for the two pure covering problems, and . The additional MWU iterations observed when using Newton's method may be attributed to the (1-ϵ) multiplicative decrease (where ϵ=0.1) applied to step sizes violating the bang-for-buck inequality (<ref>). §.§ Effect of Software Optimizations We now analyze the acceleration of an MWU iteration with our software optimizations. To do so, we will compare the execution times of and implementations with 68 threads. Later, we will also compare the execution times of and implementations for problems with implicit matrix vector multiplication. §.§.§ Performance Breakdown First, we consider where the cycles are spent in our implementation. Figure <ref> shows the fraction of time spent in matrix-vector products (matvec), step size search (search), and other vector operations (vec). The gradients (Line <ref>, <ref>) and new direction (Line <ref>) are included in the vec category while all other vector operations done during step size search are included in the search category. We observe that both applications and input graphs affect the distribution of execution cycles among these three components. For example, while , and problems spend most of their time during matvec, and problems spend more than 50% of their execution time for vec and search operations. matvec takes on average 75%, 78%, and 82% of the execution time for , , and problems, respectively. In contrast, for and , matvec takes only 45% and 38% of the execution time on average. Due to this variable behavior, it is crucial to optimize both matrix-vector multiplications and vector operations for MWU. §.§.§ Shared-Memory Performance Optimizations Figure <ref> shows the speedup obtained by our optimized implementation relative to the PETSc-based implementation when executing on a single node. In the rest of this section, we report geometric mean speedups when referring to average speedup across graphs. For , the speedup is obtained from using a favorable format (CSB) and minimizing memory accesses. Our optimizations accelerate matvec operations by 1.8x on average. Although vec and search operations also get speedups, their contribution to overall performance is smaller. On the other hand, for , , and problems, we can observe the benefit of specialized vertex-incidence matrix vector multiplications. In these cases, matvec operations are 3.28x, 5.06x, and 3.49x faster on average, respectively. For problem, we also see benefits of vertex-edge pair matrix and interweaved-identity matrix specializations. matvec operations are 4.64x faster on average. Moreover, and problems spend a large amount of time for vec and search operations. We see that, in these cases, implementation can obtain significant speedups for both vec (6.85x, and 4.08x on average, respectively) and search (8.92x and 5.79x on average, respectively) thanks to fusing and SIMD optimizations. §.§.§ Distributed-Memory Optimizations We record runtime improvements in the context of multi-node exeuction over in Table <ref>. In parenthesis is the ratio of matvec product time to matvec communication time for . All experiments are run with 64 OMP threads per MPI process. 
We observe that for 4 nodes, the use of implicit matrix-vector products accelerates matvec operations in by 1.4-3x compared to for all graphs except rgg-24, for which it is slower by 9x. As previously discussed in Section <ref>, uses a 1D communication layout, which is more efficient on banded matrices like rgg-24 than our 2D communication layout. § CONCLUSION Our work demonstrates that approximate positive LP solvers are an efficient and scalable approach for solving a wide range of graph problems. We show that with carefully chosen modifications and implementation of the MWU algorithm from Mahoney et. al. <cit.> – namely, a step size search strategy and specialized linear algebra operations that leverage shared and distributed-memory resources – the algorithm exceeds the performance of general purpose LP solvers for finding a (1+ϵ)-relative solution. Our implementation also matches the performance of hand-tuned parallel graph libraries for some graphs. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Department of Energy Computational Science Graduate Fellowship under Award Number DE-SC0022158. ACM-Reference-Format § SUPPLEMENTARY MATERIAL §.§ Further Details on Generalized Matching Experiments Let G=(V,E) be an undirected, unweighted graph. For generalized matching, a vertex v can be matched b(v) times, where lb(v) ≤ b(v) ≤ub(v) are lower and upper bounds on the number of unique vertices matching with v. More precisely, the IP formulation is ∃ x s.t. lb(v) ≤∑_e ∈inc(v) x_e ≤ub(v), ∀ v ∈ V, x_e ∈{0,1}, ∀ e ∈ E. Maximum matching is equivalent to generalized matching with lb(v)=0 and ub(v)=1, ∀ v ∈ V, as well as a (maximization) objective function of ∑_e x_e. The LP relaxation is the feasibility mixed packing and covering LP, ∃ x ∈𝐑^m s.t. Mx≥l, Mx≤u, x ≥0, where M is the vertex-edge incidence matrix, and l, u ∈𝐑^n are the vector of lower and upper bounds for each vertex. §.§ Dataset Preprocessing Now, let us describe how to pre-process the Netflix <cit.> and KDD <cit.> datasets, as detailed in <cit.>. Both datasets contain users and items (e.g., movies in Netflix, music tracks in KDD) as vertices, and edges correspond to a user rating an item. This dataset is represented as a bipartite graph, where users and items form the two partitions, and edges go only between vertices in separate partitions. For the number of matchings with each user, we enforce a lower bound of three and upper bound of five. For items, no lower bound is set, but an upper bound of 200 and 2000 is chosen for Netflix and KDD, respectively. Finally, to ensure there is a feasible matching satisfying these bounds, we exclude users with less than ten ratings from the Netflix dataset. After this pre-processing step, the two datasets have 473k and 1.6m vertices, as well as 100m and 252m edges, respectively. §.§ Gradient Descent with Adaptive Error Finally, we review the gradient descent algorithm with an adaptive error of <cit.>. In short, the algorithm minimizes the convex function via gradient descent, Γ( x) = ∑_i=1^m_P y_i( x) + ∑_i=1^m_C z_i( x), where for some μ > 0, y_i( x) = exp [μ· (P_ix-1) ] z_i( x) = exp [μ· (1-C_ix) ] . The algorithm contains two error values. There is the error bound ϵ, where the algorithm seeks to find an x that is a (1+ϵ)-relative solution. Then there is the internal error bound ϵ', which is used to specify μ as well as which coordinates of x_i to update, and by how much. 
The authors of <cit.> found that they can set ϵ' > ϵ. For example, when ϵ=0.05, they can choose ϵ'=1. They then run the algorithm until it stagnates, and if x is not a (1+ϵ)-relative solution, they decrement ϵ' and warm-start the algorithm by setting the initial x_0 to the solution of the previous, stagnated run. This strategy is called adaptive error, since it adaptively updates the internal error bound.
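To make the potential Γ concrete, the following C++ sketch evaluates Γ(x) from row-wise copies of P and C. It is only an illustration: the dense data layout and the function name gamma_potential are simplifying assumptions (the real solver operates on sparse matrices), and the coordinate-update rule of the adaptive-error method is not reproduced here.

#include <cmath>
#include <cstddef>
#include <vector>

// Evaluate Gamma(x) = sum_i exp(mu * (P_i x - 1)) + sum_i exp(mu * (1 - C_i x)).
double gamma_potential(const std::vector<std::vector<double>>& P,
                       const std::vector<std::vector<double>>& C,
                       const std::vector<double>& x, double mu) {
    auto dot = [&](const std::vector<double>& row) {
        double s = 0.0;
        for (std::size_t j = 0; j < row.size(); ++j) s += row[j] * x[j];
        return s;
    };
    double g = 0.0;
    for (const auto& p : P) g += std::exp(mu * (dot(p) - 1.0));  // packing rows
    for (const auto& c : C) g += std::exp(mu * (1.0 - dot(c)));  // covering rows
    return g;
}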
http://arxiv.org/abs/2307.01733v1
20230704140434
Discovery of H$_2$CCCH$^+$ in TMC-1
[ "W. G. D. P. Silva", "J. Cernicharo", "S. Schlemmer", "N. Marcelino", "J. -C. Loison", "M. Agúndez", "D. Gupta", "V. Wakelam", "S. Thorwirth", "C. Cabezas", "B. Tercero", "J. L. Doménech", "R. Fuentetaja", "W. -J. Kim", "P. de Vicente", "O. Asvany" ]
astro-ph.GA
[ "astro-ph.GA", "physics.chem-ph" ]
I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Köln, Germany [email protected] Dept. de Astrofísica Molecular, Instituto de Física Fundamental (IFF-CSIC), Serrano 121, 28006 Madrid, Spain Centro de Desarrollos Tecnológicos, Observatorio de Yebes (IGN), 19141 Yebes, Guadalajara, Spain Observatorio Astronómico Nacional (OAN, IGN), Madrid, Spain Institut des Sciences Moléculaires (ISM), CNRS, Univ. Bordeaux, 351 cours de la Libération, 33400, Talence, France Laboratoire d'Astrophysique de Bordeaux, Univ. Bordeaux, CNRS, B18N, allée Geoffroy Saint-Hilaire, 33615 Pessac, France Instituto de Estructura de la Materia, IEM-CSIC, Serrano 123, 28006 Madrid, Spain Based on a novel laboratory method, 14 mm-wave lines of the molecular ion have been measured in high resolution, and the spectroscopic constants of this asymmetric rotor determined with high accuracy. Using the Yebes 40 m and IRAM 30 m radio telescopes, we detect four lines of towards the cold dense core TMC-1. With a dipole moment of about 0.55 Debye obtained from high-level ab initio calculations, we derive a column density of 5.4±1×10^11 cm^-2 and 1.6±0.5×10^11 cm^-2 for the ortho and para species, respectively, and an abundance ratio N(H_2CCC)/N()= 2.8±0.7. The chemistry of is modelled using the most recent chemical network for the reactions involving the formation of . We find a reasonable agreement between model predictions and observations, and new insights into the chemistry of C_3 bearing species in TMC-1 are obtained. Discovery of in TMC-1 Based on observations carried out with the Yebes 40m telescope (projects 19A003, 20A014, 20D15, and 21A011) and the Institut de Radioastronomie Millimétrique (IRAM) 30m telescope. The 40m radiotelescope at Yebes Observatory is operated by the Spanish Geographic Institute (IGN, Ministerio de Transportes, Movilidad y Agenda Urbana). IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). W. G. D. P. Silva 1 J. Cernicharo 2 S. Schlemmer 1 N. Marcelino 3,4 J.-C. Loison 5 M. Agúndez 2 D. Gupta 1 V. Wakelam 6 S. Thorwirth 1 C. Cabezas 2 B. Tercero 3,4 J. L. Doménech 7 R. Fuentetaja 2 W.-J. Kim 1 P. de Vicente 3 O. Asvany 1 Received X, 2023; accepted X, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Molecular ions are important intermediates in the chemistry of the interstellar medium (ISM). These charged species can rapidly react with neutral partners or recombine with electrons to form other ionic and neutral molecules under astrophysical conditions <cit.>. Despite their fundamental role in astrochemistry, many ions remain elusive mainly due to their highly reactive character and lack of accurate laboratory data to support astronomical detections <cit.>. Consequently, many ions in astrochemical models and theories await confirmation through spectroscopic detection in the ISM. One example is the formation of the hydrocarbons and C_3H, whose both cyclic (c-and c-C_3H) and linear (l-and l-C_3H, i.e. H_2CCC and HCCC, respectively) isomers were detected in the ISM <cit.>, and even deuterated versions were observed <cit.>. 
The synthesis of the cyclic and linear forms of and C_3H is thought to occur via the dissociative recombination of the respective isomers of C_3H_3^+ with electrons, c-C_3H_3^+ and <cit.>. In turn, both isomers of C_3H_3^+ would be produced through the radiative association of C_3H^+ and H_2 <cit.>. The proof of this chemical pathway for the cyclic variants is difficult because c-C_3H_3^+ is a symmetric molecule and can only be detected based on its rovibrational fingerprints in the infrared <cit.>, which might be feasible with the James Webb Space Telescope (JWST) in the near future. While the singly-deuterated version c-C_3H_2D^+ could be probed by radio astronomy, it has a predicted low dipole moment and low column densities <cit.>. This leaves only as a good candidate for radio astronomical searches. In this Letter, based on a novel experimental method, we report first laboratory mm-wave data of and its radio-astronomical detection towards the cold dark core TMC-1. We derive its column density towards TMC-1 and discuss these results in the context of state-of-the-art chemical models. § LABORATORY WORK is a closed-shell, planar and near-prolate asymmetric top molecular ion (see sketch in Fig. 1). ions were generated in the Cologne laboratory in a storage ion source via electron impact ionization (E_e ≈30 eV) of the precursor gas allene (C_3H_4). By applying a novel trap-based technique called leak-out-spectroscopy (LOS, ) in the cryogenic ion trap machine COLTRAP <cit.>, the vibrational bands ν_1 and ν_3+ν_5 were measured in the range 3180 - 3240 cm^-1 in high resolution. The vibrational measurements, whose details will be described in a forthcoming publication, enabled the ground state spectroscopic parameters of to be determined. Subsequently, pure rotational lines were detected using a vibrational-rotational double resonance (DR) method. Such methods have been reviewed by <cit.>, and the particular scheme involving LOS has only recently been demonstrated by <cit.>. An example measurement for is shown in Fig. <ref>. DR spectra were recorded in multiple individual measurements in which the mm-wave frequency (blue arrow in Fig. <ref>) was stepped in an up-and-down manner several times. Selected rovibrational lines from the ν_1 or the ν_3+ν_5 combination band were used for the IR excitation (red arrow in Fig. <ref>). The frequency steps of the mm-wave radiation were kept constant in individual experiments, and varied between 3 and 50 kHz (the larger steps typically used for searching new lines). The spectroscopic data were normalized employing a frequency switching procedure, i.e., by dividing the counts monitored while scanning the spectral range of interest by the counts at an off-resonant mm-wave reference frequency. Therefore, the baseline in Fig. <ref> is close to unity. The on-resonance signal enhancement is on the order of 15 %. Transition frequencies were determined by adjusting the parameters of an appropriate line shape function (typically a Gaussian) to the experimental spectrum in a least-squares procedure. In total, 14 rotational lines were detected in the laboratory and are summarised in Table <ref>. The frequencies and their uncertainties in the table result from the weighted average of several (up to eleven) independent line-center determinations for each transition. The fit of the assigned lines (lab + astro) was carried out using Watson's S-reduced Hamiltonian in the I^r representation as implemented in Western's PGOPHER program <cit.>. 
The resulting spectroscopic parameters are given in Table <ref>. As is a near-prolate asymmetric top (κ=-0.9976) with an a-type spectrum, the A_0 rotational constant is not well constrained from experiment. Overall, the experimental rotational and centrifugal distortion constants compare favorably with those calculated by <cit.> and very well with the best estimate values obtained in this study (given in the last three columns of Table <ref>). The obtained obs-calc values for the rotational lines are given in Table <ref>. In total, the weighted rms of the fit is on the order of 1.4, indicating somewhat optimistic uncertainties of our measurements. § QUANTUM CHEMICAL CALCULATIONS The molecular ion has been a subject of several quantum-chemical investigations in the past <cit.>. In the present study, complementary high-level calculations were performed at the CCSD(T) level of theory <cit.> together with correlation consistent (augmented) polarized weighted core-valence basis sets <cit.> as well as atomic natural orbital basis sets <cit.>. All calculations were performed using the CFOUR program suite <cit.>. Equilibrium rotational constants were calculated at the all-electron level of theory that is known to yield molecular equilibrium structural parameters of very high quality for molecules comprising first- and second-row elements <cit.>. Zero-point vibrational contributions 1/2∑_iα_i^A,B,C,calc (=Δ A_0, Δ B_0, Δ C_0) to the equilibrium rotational constants and centrifugal distortion parameters were calculated at the frozen core (fc-)CCSD(T)/ANO1 level. Best estimate (BE, Tables <ref> and <ref>) rotational and centrifugal distortion constants were finally obtained through empirical scaling of the calculated rotational parameters using factors (i.e., the ratios X_exp/X_calc of a given parameter) derived from isoelectronic propadienylidene, H_2CCC, the pure rotational spectrum of which is known well from previous study <cit.>. The technique of empirical scaling using structurally closely related (isoelectronic) species known from experiment may provide rotational parameters at a predictive power greatly exceeding that of high-level calculations alone <cit.>, and has been used recently to identify species not studied in the laboratory using radio astronomy (see, e.g., ). The dipole moment of is not very large. At the level of theory the (center-of-mass frame) equilibrium value μ_e amounts to 0.53 D in good agreement with earlier estimates. Zero-point vibrational effects (fc-CCSD(T)/ANO1) have an almost negligible influence resulting in μ_0=0.55 D. § OBSERVATIONS New receivers, built within the Nanocosmos[ERC grant ERC-2013-Syg-610256-NANOCOSMOS. https://nanocosmos.iff.csic.es/] project and installed at the Yebes 40 m radio telescope, were used for the observations of TMC-1 (α_J2000=4^ h 41^ m 41.9^ s and δ_J2000= +25^∘ 41' 27.0”). The observations of TMC-1 belong to the on-going QUIJOTE[Q-band Ultrasensitive Inspection Journey to the Obscure TMC-1 Environment] line survey <cit.>. A detailed description of the telescope, receivers, and backends is given by <cit.>. Briefly, the receiver consists of two cold high electron mobility transistor amplifiers covering the 31.0-50.3 GHz band with horizontal and vertical polarizations. The backends are 2×8×2.5 GHz fast Fourier transform spectrometers with a spectral resolution of 38.15 kHz providing the coverage of the whole Q-band in both polarisations. 
The observations, carried out during different observing runs, are performed using the frequency-switching mode with a frequency throw of 10 MHz in the very first observing runs, during November 2019 and February 2020, 8 MHz during the observations of January-November 2021, and alternating these frequency throws in the last observing runs between October 2021 and February 2023. The total on-source telescope time is 850 hours in each polarization (385 and 465 hours for the 8 MHz and 10 MHz frequency throws, respectively). The sensitivity of the QUIJOTE line survey varies between 0.17 and 0.25 mK in the 31-50.3 GHz domain. The intensity scale used in this work, antenna temperature (T_A^*), was calibrated using two absorbers at different temperatures and the atmospheric transmission model ATM <cit.>. Calibration uncertainties have been adopted to be 10 %. The beam efficiency of the Yebes 40 m telescope in the Q-band is given as a function of frequency by B_ eff= 0.797 exp[-(ν(GHz)/71.1)^2]. The forward telescope efficiency is 0.97. The telescope beam size varies from 56.7” at 31 GHz to 35.6” at 49.5 GHz. The data of TMC-1 taken with the IRAM 30 m telescope consist of a 3 mm line survey obtained with the old ABCD receivers connected to an autocorrelator that provided a spectral resolution of 40 kHz <cit.>. Some additional high sensitivity frequency windows observed in 2021 used the new 3 mm EMIR dual polarization receiver connected to four fast Fourier transform spectrometers providing a spectral resolution of 49 kHz <cit.>. All the observations were performed using the frequency switching method. The final 3 mm line survey has a sensitivity of 2-10 mK. However, at some selected frequencies the sensitivity is as low as 0.6 mK. § DETECTION OF H_2CCCH^+ IN TMC-1 We searched for one para (2_02-1_01) and two ortho (2_12-1_11, 2_11-1_10) lines of H_2CCCH^+ within the QUIJOTE line survey. The three lines are clearly detected and are shown in Fig. <ref>. In the data at 3 mm we covered the frequencies of three para and four ortho lines, with upper levels J= 4, 5, 6 and energies below 30 K. Only one line, the J=5_14-4_13, falls in one of the high sensitivity windows (σ= 0.6 mK) of our line survey, and it is also clearly detected (see Fig. <ref>). Two other lines are within frequency ranges with σ below 2 mK, and are marginally detected. The derived line parameters for all searched transitions of H_2CCCH^+ are given in Table <ref>. We checked that the detected lines cannot be assigned to lines of other species or isotopologues by exploring the spectral catalogues MADEX <cit.>, CDMS <cit.> and JPL <cit.>. To estimate the column density of H_2CCCH^+, we considered the ortho and para levels as belonging to two different species. For the dipole moment we use the value of 0.55 D calculated in this work. No collisional rates are available for this molecule. However, <cit.> have computed the collisional rates between He and H_2CCC. This cumulenic species is isoelectronic with our molecule and has a very similar structure. We therefore adopted these rates, correcting for the abundance of He with respect to H_2, to estimate the excitation temperatures of the observed transitions of H_2CCCH^+. Assuming a volume density of (1-3)×10^4 cm^-3 <cit.>, we derive excitation temperatures close to 10 K for the J= 2-1 lines, and ∼8-10 K for the lines in the 3 mm domain, the largest value corresponding to n(H_2)= 3×10^4 cm^-3. 
These excitation temperatures are considerably larger than those obtained for H_2CCC, which are between 4 and 5 K, due to the larger dipole moment of this species (4.1 versus 0.55 D). From the adopted rotational temperatures of 9 K, and assuming a source of uniform brightness temperature with a radius of 40” <cit.>, we derive a column density for ortho-H_2CCCH^+ of (5.4±1)×10^11 cm^-2. For the para species the estimated column density is (1.6±0.5)×10^11 cm^-2, a value that is consistent with the expected ortho to para ratio of 3/1. The computed synthetic spectra show an excellent agreement with the observed line intensities (see Fig. <ref>). Hence, the total column density of in TMC-1 is (7.0±1.5)×10^11 cm^-2. It is interesting to compare the abundance of H_2CCCH^+ to that of H_2CCC. For the latter species we detected, with an excellent signal to noise ratio, all its ortho and para lines in the frequency range of our line surveys. The derived line parameters are summarized in Table <ref> and the lines are shown in Fig. <ref>. The decline of the line intensity between the J_u= 2 and J_u= 5 lines is obvious in Fig. <ref>, which indicates that the lines are not thermalized to the kinetic temperature of the cloud for this species. Using the collisional rates of <cit.>, and adopting the same assumptions on the source size than for H_2CCCH^+, we derive a column density for the ortho and para species of (1.5±0.1)×10^12 and (0.45±0.05)×10^12 cm^-2, respectively. The best fit is obtained for a density of n(H_2)=8×10^3 cm^-3. The total column density of H_2CCC is (1.95±0.15)×10^12 cm^-2 and the ortho to para ratio for this species is 3.3±0.6. The H_2CCC/H_2CCCH^+ abundance ratio is 2.8±0.7 which is on the order of that found for C_3O/HC_3O^+ <cit.>, but much smaller than the abundance ratio found in TMC-1 for other neutral species and their protonated forms <cit.>. § DISCUSSION To describe the chemistry of , we used the Nautilus code <cit.>, a 3-phase (gas, dust grain ice surface, and dust grain ice mantle) time-dependent chemical model with a chemical network for C_3H_x^+ species very similar to the one presented in <cit.>. To describe the physical conditions in TMC-1, we use an homogeneous cloud with a density equal to 2.5×10^4 cm^-3, a temperature equal to 10 K for both the gas and the dust, a visual extinction of 30 mag and a cosmic-ray ionization rate of 1.3×10^-17 s^-1. All elements are assumed to be initially in atomic form, except for hydrogen, which is entirely molecular <cit.>. The calculated abundances relative to H_2 for H_2CCC and H_2CCCH^+, and also for C_3 and C_3H^+, which are strongly linked to H_2CCCH^+, are shown in Fig. <ref>. As can be seen in Fig. <ref>, the H_2CCC and H_2CCCH^+ abundances observed are relatively well reproduced by the model for a relatively early molecular cloud age around 2×10^5 years with, however, a smaller H_2CCC/H_2CCCH^+ ratio than the one observed. Looking in more detail at the chemistry of H_2CCCH^+ and H_2CCC (see Fig. 3 of ), it appears that H_2CCC is a product of H_2CCCH^+ (and also of c-C_3H_3^+) but that the flow of protonation of H_2CCC toward H_2CCCH^+ is a very minor pathway for the formation of H_2CCCH^+ which is almost essentially produced by the reaction C_3H^+ + H_2. This inverted link between H_2CCC and H_2CCCH^+ explains the unusually high MH^+/M ratio, compared to those cases in which the protonated form comes from the protonation of the neutral form <cit.>. 
Considering the link between C_3H^+, H_2CCCH^+ and C_3, it is interesting to see if the observations of H_2CCCH^+ (this work) combined to the observation of C_3H^+ <cit.> allow us to estimate the abundance of C_3, which in our dense cloud model is the second carbon reservoir, accounting for up to 15% of carbon. If C_3 does not react with atomic oxygen, as calculated by <cit.>, the protonation reactions of C_3 producing C_3H^+ are by far the main reactions of destruction of C_3 and of production of C_3H^+. As these protonation reactions control the destruction of C_3, the uncertainties on the rates affect the abundance of C_3, but does not change the flux of these reactions. The underestimation of C_3H^+ in the model is therefore not related to these protonation rates, but to the rate of the reaction C_3H^+ + H_2, which controls the destruction of C_3H^+ <cit.>. A decrease in the rate of the reaction C_3H^+ + H_2 at 10 K by a factor of 3 allows us to reproduce the abundance of C_3H^+ observed by <cit.>, as shown in Fig. <ref>. This change in the rate coefficient does not affect the flux of the C_3H^+ + H_2 reaction and therefore does not affect the abundance of H_2CCCH^+, as long as the branching ratios to H_2CCCH^+ and c-C_3H_3^+ are not varied. Indeed, since is mainly destroyed by the reaction with electrons, its abundance is controlled by the rate of this reaction, which is known only over the temperature range 172-489 K <cit.> with a temperature dependency inconsistent with theory. An increase in this rate at 10 K by a factor of 5, which is not impossible given the uncertainties, allows us to reproduce the observation for H_2CCCH^+ with a ratio between H_2CCC/H_2CCCH^+ very close to the observed one. Considering the uncertainties of the different chemical reactions linking C_3 to C_3H^+ and H_2CCCH^+, the observations of C_3H^+ <cit.> and H_2CCCH^+ (this work) validate the chemical scheme controlling the formation of cyclic and linear C_3H and C_3H_2 and the large gas-phase abundance of C_3 in TMC-1, around 10^-6 relative to H_2. It would also be very interesting to know the abundance of the more stable cyclic isomer c-C_3H_3^+, which in the chemical model is predicted to be slightly more abundant (a factor of three) than . This species has no dipole moment, but its deuterated version, c-C_3H_2D^+, has a low dipole moment of 0.225 D <cit.> and its rotational spectrum has been recently measured in the 90-230 GHz frequency range in Cologne <cit.>. We searched for c-C_3H_2D^+ in the QUIJOTE line survey but at the current level of sensitivity, this species is not detected and we derive an upper limit to its column density of 4×10^12 cm^-3. Assuming that the c-C_3H_3^+/c-C_3H_2D^+ ratio is 10, as found for the analog case of CH_3CCH with three equivalent H nuclei <cit.>, the column density of c-C_3H_3^+ is <4×10^13 cm^-2, which is not very meaningful. The work has been supported by an ERC advanced grant (MissIons: 101020583), the Deutsche Forschungsgemeinschaft (DFG) via SFB 956 (project ID 184018867), sub-project B2, and the Gerätezentrum ”Cologne Center for Terahertz Spectroscopy” (DFG SCHL 341/15-1). W.G.D.P.S. thanks the Alexander von Humboldt foundation for funding through a postdoctoral fellowship. 
We also acknowledge the support from the MICINN projects PID2020-113084GB-I00 and PID2019-106110GB-I00, the CSIC project ILINK+ LINKA20353, the ERC grant ERC-2013-Syg610256-NANOCOSMOS, and the Regional Computing Center of the Universität zu Köln (RRZK) for providing computing time on the DFG-funded high performance computing system CHEOPS. § STRUCTURAL CALCULATIONS, INTERNAL COORDINATES Bond lengths are given in Å, angles in degrees. §.§ H2CCCH+ §.§ H2CCC § LINE PARAMETERS OF H_2CCCH^+ AND H_2CCC The line parameters of H_2CCCH^+ and H_2CCC have been obtained by fitting a Gaussian line profile to the observed data. The results are given in Tables <ref> and <ref>, respectively. The observed lines of H_2CCCH^+ are shown in Fig. <ref> and those of H_2CCC in Fig. <ref>.
http://arxiv.org/abs/2307.05508v2
20230703172323
Human in the AI loop via xAI and Active Learning for Visual Inspection
[ "Jože M. Rožanec", "Elias Montini", "Vincenzo Cutrona", "Dimitrios Papamartzivanos", "Timotej Klemenčič", "Blaž Fortuna", "Dunja Mladenić", "Entso Veliou", "Thanassis Giannetsos", "Christos Emmanouilidis" ]
cs.HC
[ "cs.HC", "cs.AI", "cs.CV" ]
Human in the AI loop via xAI and Active Learning for Visual Inspection]Human in the AI loop via xAI and Active Learning for Visual Inspection [1,5]Jože M. Rož[email protected] 2]Elias Montini 2]Vincenzo Cutrona 3]Dimitrios Papamartzivanos 4]Timotej Klemenčič 5]Blaž Fortuna 1]Dunja Mladenić 6]Entso Veliou 3]Thanassis Giannetsos 7]Christos Emmanouilidis *[1]Jožef Stefan Institute, Slovenia [2]University of Applied Sciences and Arts of Southern Switzerland, Switzerland [3]Ubitech Ltd., Greece [4]University of Ljubljana, Slovenia [5]Qlector d.o.o., Slovenia [6]University of West Attica, Greece [7]University of Groningen, The Neatherlands Industrial revolutions have historically disrupted manufacturing by introducing automation into production. Increasing automation reshapes the role of the human worker. Advances in robotics and artificial intelligence open new frontiers of human-machine collaboration. Such collaboration can be realized considering two sub-fields of artificial intelligence: active learning and explainable artificial intelligence. Active learning aims to devise strategies that help obtain data that allows machine learning algorithms to learn better. On the other hand, explainable artificial intelligence aims to make the machine learning models intelligible to the human person. The present work first describes Industry 5.0, human-machine collaboration, and state-of-the-art regarding quality inspection, emphasizing visual inspection. Then it outlines how human-machine collaboration could be realized and enhanced in visual inspection. Finally, some of the results obtained in the EU H2020 STAR project regarding visual inspection are shared, considering artificial intelligence, human digital twins, and cybersecurity. [ [ ===== § INTRODUCTION Industrial revolutions have historically disrupted manufacturing by introducing automation into the production process. Increasing automation changed worker responsibilities and roles. While past manufacturing revolutions were driven from the optimization point of view, the Industry 5.0 concepts capitalize on the technological foundations of Industry 4.0 to steer manufacturing towards human-centricity <cit.>, adding resilience and sustainability among its key targets <cit.>. This change is part of a holistic understanding of the industry's societal role. In particular, the European Commission expects the industry to collaborate on achieving societal goals that transcend jobs and company growth. Human-centric manufacturing within the Industry 5.0 aims to ensure that human well-being, needs, and values are placed at the center of the manufacturing process. Furthermore, it seeks to enable collaborative intelligence between humans and machines to enable co-innovation, co-design, and co-creation of products and services <cit.>, thus allowing leveraging on their strengths to maximize individual and joint outcomes and their joint added value <cit.>. It is expected that synergies enabled within Industry 5.0 will still allow for high-speed and mass-personalized manufacturing but will shift repetitive and monotonous tasks to be more assigned to machines to capitalize more on the human propensity for critical thinking and give them to more cognitively demanding tasks <cit.>. The emerging shift in human roles goes beyond allowing them to move away from repetitive tasks to undertake other physical activities. 
As non-human actors, including artificial intelligence (AI) - enabled ones, undertake tasks that can be automated, humans are not necessarily excluded but may well play a higher added value and steering role, bringing their cognitive capabilities into the AI loop <cit.>. This includes active synergies between AI-enabled non-human entities and humans, resulting in novel work configurations <cit.>. Such configurations empower human actors in new roles rather than diminishing them <cit.>. As a consequence, it is increasingly recognized that involving instead of replacing the human from the AI loop not only elevates the role of humans in such work environments but significantly enhances the machine learning process, and therefore the emergent capabilities of the AI-enabled actors <cit.>. As a result, such synergies involve humans and non-human entities who jointly contribute to shaping an emergent meta-human learning system, which in turn is more capable and powerful than human and non-human entities acting alone <cit.>. A possible realization of such human-machine collaboration emerges from two sub-fields of artificial intelligence: active learning and explainable artificial intelligence (XAI). Active learning is concerned with finding pieces of data that allow machine learning algorithms to learn better toward a specific goal. Human intervention is frequently required, e.g., to label selected pieces of data and enable such learning. On the other hand, XAI aims to make the machine learning models intelligible to the human person so that humans can understand the rationale behind machine learning model predictions. While active learning requires human expertise to teach machines to learn better, XAI aims to help humans learn better about how machines learn and think. This way, both paradigms play on the strengths of humans and machines to realize synergistic relationships between them. Among the contributions of the present work are (i) a brief introduction to the state-of-the-art research on human-machine collaboration, key aspects of trustworthiness and accountability in the context of Industry 5.0, and research related to automated visual inspection; (ii) the development of a vision on how an AI-first human-centric visual inspection solution could be realized; and (iii) a description of experiments and results obtained in the field of automated visual inspection at the EU H2020 STAR project. The rest of the work is structured as follows: Section <ref> describes related work, providing an overview of human-machine collaboration, the industry 5.0 paradigm and human-centric manufacturing, state-of-the-art on automated quality inspection, and a vision of how human-machine collaboration can be realized in the visual inspection domain. In Section <ref>, relevant research contributions from the EU H2020 STAR project are outlined, offering concrete examples of humans and AI working in synergy. Finally, Section <ref> provides conclusions and insight into future work. § BACKGROUND §.§ Overview on Human-Machine Collaboration The advent of increasingly intelligent machines has enabled a new kind of relationship: the relationship between humans and machines. Cooperative relationships between humans and machines were envisioned back in 1960 <cit.>. This work defines machines in a broad sense, considering intelligent systems that can make decisions autonomously and independently (e.g., automated, autonomous, or AI agents, robots, vehicles, and instruments) <cit.>. 
Relationships between humans and machines have been characterized through different theories, such as the Socio-Technical Systems theory, the Actor-Network Theory, the Cyber-Physical Social Systems theory, the theory of social machines, and the Human-Machine Networks theory. The first three theories conceptualize humans and machines as a single unit, while the last two consider social structures mediated in human-machine networks. In particular, the Socio-Technical Systems theory considers that humans and technology shape each other while pursuing a common goal within an organization. The Cyber-Physical Social Systems theory extends this vision, emphasizing social dimensions where computational algorithms are used to monitor and control devices. Moreover, the Actor-Network Theory conceptualizes the social system as an association of heterogeneous elements and advocates that machines should be given the same analytical weight as humans. The theory of social machines is interested in systems that combine social participation with machine-based computation. In contrast, the Human-Machine Networks theory considers humans and machines to form interdependent networks characterized by synergistic interactions. A thorough analysis of the abovementioned concepts can be found in <cit.>. Regardless of the particular theory, the goal remains the same: foster and understand mutualistic and synergistic relationships between humans and machines, where the strengths of both are optimized towards a common goal to achieve what was previously unattainable to each of them. To that end, individual roles must either be clearly defined or allow for a clear sliding of roles when a role can be shared among different types of actors. This will ensure a dynamic division of tasks, optimal use of resources, and reduced processing time. Machines are aimed at supporting, improving, and extending human capabilities. The joint outcomes of human-machine collaboration can result in systems capable of creativity and intuitive action that transcend mere automation. Communication is a critical aspect of every social system. Therefore, emphasis must be placed on the interaction interfaces between such actors. To make such interfaces effective, the concept of shared context or situation awareness between collaborating agents becomes essential and can be seen as a form of mutual understanding <cit.>. This shared context is enabled through communication across different modalities, including direct verbal (speech, text) and non-verbal (gestures, action and intention recognition, emotion recognition) channels. On the other hand, means must be designed so that humans can understand the machine's goals and its rationale for acting to reach such goals in a human-like form. In this regard, human-machine interfaces that support multi-modal interaction play a crucial role. These aspects were also identified by Jwo et al.
<cit.>, who described the 3I (Intellect, Interaction, and Interface) aspects that must be considered for achieving human-in-the-loop smart manufacturing. Beyond shared context, human-machine cooperation requires adequate communication and shared or sliding control <cit.>. To realize an effective bidirectional information exchange, theory and methods must address how data and machine reasoning can be presented intuitively to humans. Frameworks and models abstracting human cognitive capabilities <cit.> are key to achieving this. Aligning the design of interactive interfaces and support tools for human-machine interactions with such concepts can be critically important for making effective human–machine interfaces. Enhancements in the interactivity, multisensitivity, and autonomy of feedback functions implemented on such interfaces allow for deeper integration between humans and machines. Shared control can be articulated at operational, tactical, and strategic levels, affecting information-gathering, information-analysis, decision-making, and action implementation. Human-machine interactions can be viewed from multiple perspectives, necessitating a thorough consideration of several factors influencing such collaborations. These factors encompass emotional and social responses, task design and assignment, trust, acceptance, decision-making, and accountability <cit.>. Notably, research indicates that machines in collaborative settings impact human behavior, resulting in a diminished emotional response toward them. Consequently, this reduced emotional response can foster more rational interactions. Moreover, studies reveal that humans perceive a team more favorably when machines acknowledge and admit their errors. Additionally, the absence of social pressure from humans can detrimentally affect overall human productivity. Furthermore, concerning accountability for decision-making, humans tend to shift responsibility onto machines. Trust, a critical aspect to consider, has been explored extensively. Studies demonstrate that trust in machines is closely linked to perceived aptness <cit.>. Instances of machine errors often lead to a loss of trust, particularly when machines act autonomously. However, if machines operate in an advisory capacity, trust can be amended over time. Additionally, research reveals that while humans value machine advice, they hesitate to relinquish decision-making authority entirely. Nevertheless, relying excessively on machines can result in sub-optimal outcomes, as humans may fail to identify specific scenarios that necessitate their attention and judgment. For further details about the abovementioned experiments and additional insights the reader may be interested on the works by Chugunova et al. <cit.>. §.§ Industry 5.0 and Human-Centric Manufacturing §.§.§ New Technological Opportunities to Reshape the Human Workforce Digital transformation in production environments demands new digital skills and radically reshapes the roles of plant and machine operators <cit.>. While Industry 4.0 emphasizes the use of technologies to interconnect different stages of the value chain and the use of data analytics to increase productivity, Industry 5.0 emphasizes the role of humans in the manufacturing context <cit.>. Furthermore, it aims to develop means that enable humans to work alongside advanced technologies to enhance industry-related processes <cit.>. An extensive review of this paradigm and its components was written by Leng et al. <cit.>. 
Nevertheless, two components are relevant to this work: Collaborative Intelligence and Multi-objective Interweaving. Collaborative Intelligence is the fusion of human and artificial intelligence <cit.>. In the context of Industry 5.0, the fusion of both types of intelligence entails the cognitive coordination between humans and AI in machines, enabling them to collaborate in the innovation, design, and creation of tailored products and services. Complementarity between humans and AI (see Table <ref>) leads to more effective execution of such tasks than would be possible if relegated to humans or machines only <cit.>. When analyzing complementarities, humans have the knowledge and skills to develop and train machines by framing the problems to be solved and providing feedback regarding their actions or outputs <cit.>. Furthermore, humans can enrich machine outcomes by interpreting results and insights and deciding how to act upon them <cit.>. Machines amplify workers' cognitive abilities: they can track many data sources and decide what information is potentially relevant to humans. Furthermore, machines can excel at repetitive tasks and free humans from such a burden. Such complementarity is considered within the multi-objective interweaving nature of Industry 5.0, which enables optimizing multiple goals beyond process performance, such as social and environmental sustainability <cit.>. Moreover, research suggests that leading companies are beginning to recognize the benefit of using machines and automation systems to supplement human labor rather than replacing the human workforce entirely <cit.>. While AI was already able to tackle certain tasks with super-human capability <cit.>, it has recently shown progress in areas such as creativity (e.g., through generative models such as DALL·E 2 <cit.>) or problem-solving <cit.>, opening new frontiers of human-machine collaboration, such as co-creativity <cit.>. In addition to the direct human involvement described above, digital twins <cit.> are another way to incorporate human insights into AI processes. By creating virtual models of human behaviour and mental processes, deeper insights can be gained into how humans interact with the world, and this information can be used to improve AI systems. Digital twins can also support explainability and transparency in AI systems, making it easier to explain how they arrive at their decisions <cit.>. Moreover, digital representations can be used to consider users' preferences in the AI system behaviours, e.g., the type of support provided <cit.>. §.§.§ Trustworthiness and Implications for AI-driven Industrial Systems Trustworthiness for systems and their associated services and characteristics is defined according to the International Organization for Standardization (ISO) as “the ability to meet stakeholders' expectations in a verifiable way” <cit.>. It follows that trustworthiness can refer to products, services, technology, and data and, ultimately, to organizations. Therefore, the concept of trustworthiness is directly applicable to AI-driven systems, particularly to human-centric AI-enabled solutions. However, it should be understood that trustworthiness is a multifaceted concept, incorporating distinct characteristics such as accountability, accuracy, authenticity, availability, controllability, integrity, privacy, quality, reliability, resilience, robustness, safety, security, transparency, and usability <cit.>.
Some of these characteristics should be seen as emerging characteristics of AI-enabled systems, which are not solely determined by the AI's contribution to an overall solution. Focusing specifically on the AI components of such solutions, ethics guidelines published by the European Commission (EC) identifies seven key requirements for trustworthiness characteristics that must be addressed <cit.>. These include (i) human agency and oversight, (ii) technical robustness and safety, (iii) privacy and data governance, (iv) transparency, (v) diversity, non-discrimination, and fairness, (vi) societal and environmental well-being, and (vii) accountability. Regarding some of these characteristics, there is a direct correspondence between broader trustworthiness as documented according to ISO and the EC guidelines. Technical robustness, safety, privacy, transparency, and accountability are identified in both sources. Human agency and oversight are directly linked to controllability, and so is governance, which is also the prime focus of ISO recommendations <cit.>. Given the societal impacts that AI-induced outcomes can have, the EC has also highlighted diversity, non-discrimination, fairness, and societal and environmental well-being as key characteristics of trusted AI solutions. However, these aspects are also partly addressed as part of the broader concept of "freedom from risk", which can be defined as the extent to which a system avoids or mitigates risks to economic status, human life, health, and well-being and or the environment <cit.>. The trustworthiness of an AI system can be affected by multiple factors. Some of them relate to cybersecurity. In particular, machine learning algorithms are vulnerable to poison and evasion attacks. During poisoning attacks, the adversary aims to tamper with the training data used to create the machine learning models and distort the AI model on its foundation <cit.>. Evasion attacks are performed during inference, where the attacker crafts adversarial inputs that may seem normal to humans but drive the models to classify the inputs wrongly <cit.>. Such an adversarial landscape poses significant challenges and requires a collaborative approach between humans and machines to build defenses that can lead to more robust and trustworthy AI solutions. While human intelligence can be used for the human-in-the-loop adversarial generation, where humans are guided to break models <cit.>, AI solutions can be trained to detect adversarial inputs and uncover potentially malicious instances that try to evade the AI models <cit.>. Furthermore, human-machine collaboration can be fostered to detect such attacks promptly. Accountability refers to the state of being accountable and relates to allocated responsibility <cit.>. At the system level, accountability is a property that ensures that the actions of an entity can be traced uniquely to the entity <cit.>. However, when considering governance, accountability is the obligation of an individual or organization to account for its activities, accept responsibility for them, and disclose the results in a transparent manner <cit.>. Therefore accountability is closely linked to transparency for AI-enabled systems, which is served via XAI and interpretable AI. XAI and interpretable AI ensure that AI systems can be trusted when analyzing model outcomes that impact costs and investments or whenever their outputs provide information to guide human decision-making. 
Accuracy generally refers to the closeness of results and estimates to true values but, in the context of AI, further attains the meaning appropriate for specific machine learning tasks. Any entity that is what it claims to be is said to be characterized by authenticity, with relevant connotations for what AI-enabled systems claim to deliver. Such systems may furthermore be characterized by enhanced availability to the extent that they are usable on demand. Other characteristics such as integrity, privacy, and security attain additional meaning and importance in AI-driven systems and are further discussed in the next section. They can contribute to and affect the overall quality, reliability, resilience, robustness, and safety, whether the unit of interest is a component, a product, a production asset, or a service, with implications for individual workers all the way to the organization as a whole. When considering accountability for AI systems from the legal perspective, the EU AI Act <cit.> in its current form considers developers and manufacturers responsible for AI failures or unexpected outcomes. Nevertheless, the concept of accountability will evolve based on the issues found in practice and the corresponding jurisprudence that will shape the learning on how different risks, contexts, and outcomes must be considered in the industry context <cit.>. §.§ Automated Quality Inspection §.§.§ The Role of Robotics The increasing prevalence of human-robot collaboration in diverse industries showcases the efforts to enhance workplace productivity, efficiency, and safety through the symbiotic interaction of robots and humans <cit.>. In manufacturing, robots are employed for repetitive and physically demanding tasks, enabling human workers to allocate their skills toward more intricate and creative endeavors. This collaborative partnership allows for the fusion of human and robot capabilities, maximizing the overall outcomes. The successful implementation of human-robot interaction owes credit to collaborative robots, commonly called cobots <cit.>. These advanced robots have sophisticated sensors and programming that facilitate safe and intuitive human interaction. This collaboration improves productivity and fosters a work environment where humans and robots can coexist harmoniously. This approach harmoniously merges robots' precision and accuracy with human workers' adaptability and dexterity. Robotic integration in product quality control has become widespread across diverse industries and production sectors. Robots offer exceptional advantages within quality inspection processes, including precise repeatability and accurate movements <cit.>. They possess the capability to analyze various product aspects such as dimensions, surface defects, color, texture, and alignment, ensuring adherence to predefined standards. Robots' superior accuracy and efficiency make them an ideal choice for quality control applications. To facilitate quality testing, robots are equipped with a range of sensors. These sensors enable precise measurement, detection, and sorting operations. Robots with cameras utilize advanced machine vision techniques to analyze image and video streams and identify anomalies like cracks, scratches, and other imperfections <cit.>. Subsequently, defective items are segregated from conforming ones, elevating overall production quality. 
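As a rough illustration of the kind of rule-based machine vision check a camera-equipped inspection station might run, the following Python sketch flags dark blobs on a bright, uniform product surface as candidate defects. It is a simplified, hypothetical example using OpenCV; the file path, thresholds, and the assumption of a uniform bright background are illustrative choices and do not describe any specific industrial system.

import cv2

def flag_surface_defect(image_path: str, min_defect_area: float = 50.0) -> bool:
    # Assumes a bright, uniform product surface; dark regions are treated as candidate defects.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu thresholding separates dark anomalies (scratches, cracks) from the background.
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # The part is flagged as defective if any anomalous region exceeds the area threshold.
    return any(cv2.contourArea(c) > min_defect_area for c in contours)

In practice, such hand-crafted rules are brittle, which is one reason the learning-based approaches discussed in the following subsection have become dominant.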
The industry is witnessing an increasing adoption of 3D vision systems, particularly in applications requiring object grasping and precise information about object position and orientation. Specially designed robots, such as coordinate measuring machines, are employed for dimensional and precision measurements. These robots feature high-precision axis encoders and accurate touch probes, enabling them to detect part measurements and consistently evaluate adherence to quality standards <cit.>. The active learning paradigm can be applied to enable efficient and flexible learning in robots. This can be particularly useful in resource-constrained industrial environments, where data scarcity and limited human knowledge prevail, acquiring essential data through unsupervised discovery becomes imperative <cit.>. Active learning demonstrates extensive applicability in robotics, encompassing prioritized decision-making, inspection, object recognition, and classification. Within quality control, active learning algorithms optimize machine learning models' defect detection and quality assessment training process. By actively selecting informative samples for labeling, active learning minimizes labeling efforts, augments model training efficiency, and ultimately enhances the accuracy and performance of quality control systems. An intriguing domain of investigation pertains to the advancement of intuitive and natural interfaces that foster seamless communication and interaction between humans and robots. This entails the exploration of innovative interaction modalities, encompassing speech, gestures, and facial expressions, or even using augmented reality to customize the robots' appearance and foster better interaction with humans <cit.>. Other key research areas involve developing adaptive and flexible robotic systems that dynamically adapt their behavior and actions to the prevailing context and the human collaborator's preferences, achieving low processing times <cit.>. These could be critical to enable real-time human intent recognition, situational awareness, and decision-making, all aimed at augmenting the adaptability and responsiveness of robots during collaborative tasks. §.§.§ Artificial Intelligence - Enabled Visual Inspection Visual inspection is frequently used to assess whether the manufactured product complies with quality standards and allows for the detection of functional and cosmetic defects <cit.>. It has historically involved human inspectors in determining whether the manufactured pieces are defective. Nevertheless, the human visual system excels in a world of variety and change, while the visual inspection process requires repeatedly observing the same product type. Furthermore, human visual inspection suffers from poor scalability and the fact that it is subjective, creating an inherent inspector-to-inspector inconsistency. The quality of visual inspection can be affected by many factors. See <cit.> classified them into five categories, whether they are related to the (i) task, (ii) individual, (iii) environment, (iv) organization, or (v) social aspects. To solve the issues described above, much effort has been invested in automated visual inspection by creating software capable of inspecting manufactured products and determining whether they are defective. Cameras are used to provide visual input. Different approaches have been developed to determine whether a defect exists or not. 
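As a concrete, minimal example of one such approach, a small supervised convolutional classifier can be trained on labelled product images to separate good from defective parts. The sketch below (Python with PyTorch) is purely illustrative: the folder layout, image size, architecture, and hyperparameters are assumptions and do not correspond to the systems cited in this section.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical folder layout: data/train/good and data/train/defective.
transform = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Small CNN: two convolutional blocks followed by a linear head with two outputs (good / defective).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()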
Automated optical quality control may target visual features as simple as colors, but more complex ones are involved in crack detection, the orientation of threads, and defects in bolts <cit.> and metallic nuts <cit.>. Through automated optical inspection, it is also possible to detect defects on product surfaces of wide-ranging sizes <cit.>. Furthermore, it is also possible to target the actual manufacturing process, for example, welding <cit.>, injection molding <cit.>, or the assembly of manufactured components <cit.>. Additionally, automated visual inspection applies to remanufacturing products at the end of their useful life <cit.>. State-of-the-art (SOTA) automated visual inspection techniques are dominated by deep learning approaches, achieving high performance levels <cit.>. Among the many types of learning from data for visual inspection, unsupervised, weakly supervised, and supervised methods can be named. Unsupervised methods aim to discriminate defective manufactured pieces without labeled data. The weakly-supervised approach assumes that data has an inherent cluster structure (instances of the same class are close to each other) and that the data lies in a manifold (nearby data instances have similar predictions). Therefore, it leverages a small amount of annotated data together with unlabeled data to learn and issue predictions. Finally, supervised methods require annotated data and usually perform best among the three approaches. Often, labeled data are unavailable in sufficient range and numbers to enable fully supervised learning, and additional exemplar images can be produced through data augmentation <cit.>. In addition, multiple strategies have been developed to reduce the labeled data required to train and enhance a given classifier. Among them are active learning, generative AI, and few-shot learning. In the context of visual inspection, active learning studies how to select data instances that can be presented to a human annotator to maximize the models' learning. Generative AI aims to learn how to create data instances that resemble a particular class. Finally, few-shot learning aims to develop means by which the learner can acquire experience to solve a specific task with only a few labeled examples. To compensate for the lack of labeled data, it can either augment the dataset with samples from other datasets or use unlabeled data, or acquire knowledge from another dataset or algorithm (e.g., by adapting hyperparameters based on prior meta-learned knowledge) <cit.>. Regardless of the progress made in automated visual inspection, many challenges remain. First, there is no universal solution for automated visual inspection: solutions and approaches have been developed to target a specific product. Flexibility to address the inspection of multiple manufactured products with a single visual inspection system is a complex challenge and remains an open issue <cit.>. Second, unsupervised machine learning models do not require annotating data and may provide a certain level of defect detection when associating data clusters to categories (e.g., types of defects or no defects). Furthermore, given that no prior annotation of expected defects is required, they are suitable when various defects exist. Nevertheless, their detection rates are lower than those obtained by supervised machine learning models. Therefore, it should be examined use case by use case whether unsupervised machine learning models are a suitable solution. Third, data collection and annotation are expensive.
While data collection affects unsupervised machine learning models, data collection and annotations directly impact supervised machine learning approaches. While multiple strategies have been envisioned to overcome this issue (e.g., generative models, active learning, and few-shot learning), data collection and annotation remain an open challenge. Finally, better explainability techniques and intuitive ways to convey information to humans must be developed to understand whether the models learn and predict properly. §.§ Realizing Human-Machine Collaboration in Visual Inspection While much progress has been made in automated visual inspection, authors recognize that most solutions are custom and developed for a particular product type. Developing systems that could adapt to a broad set of products and requirements remains a challenge. In human-centered manufacturing, it is critical to rethink and redesign the role of humans in the visual inspection process. The role of humans in automated visual inspection is shifting away from repetitive and manual tasks to roles with more cognitive involvement, which can still not be replicated by machines and AI. In the simplest case, this involves humans labeling acquired image samples to guide the machine learning process <cit.>. However, the role of humans extends beyond data labeling and may involve interaction loops between humans and AI as part of the machine learning process <cit.>. In this regard, two machine-learning paradigms are particularly important: active learning and XAI. On one side, active learning is an AI paradigm that seeks the intervention of an oracle (usually a human person) to help the machine learning model learn better toward an objective. XAI, on the other side, aims to explain the rationale behind a machine learning model action or prediction. Doing so enables a fruitful dialogue between humans and machines by providing insights into the machines' rationale and decision-making process. Active learning for classification is based on the premises that unlabeled data (either collected or generated) is abundant, the data labeling is expensive, and the models' generalization error can be minimized by carefully selecting new input instances with which the model is trained <cit.>. Active learning for classification has traditionally focused on the data (selecting or generating the data without further consideration for the model at hand) and the model's learning (e.g., considering the uncertainty at the predicted scores). Nevertheless, approaches have been developed to consider both dimensions and provide a holistic solution. One of them is the Robust Zero-Sum Game (RZSG) framework <cit.>, which attempts to optimize both objectives at once, framing the data selection as a robust optimization problem to find the best weights for unlabeled data to minimize the actual risk, reduce the average loss (to achieve greater robustness to outliers) and minimize the maximal loss (increasing the robustness to imbalanced data distributions). Another perspective has been considered by Zajec et al. <cit.> and Križnar et al. <cit.>, who aim to select data based on insights provided by XAI methods and therefore benefit from direct insights into the model's learning dynamics. Regardless of the approach, Wu et al. 
<cit.> propose that three aspects must be considered when searching for the most valuable samples: informativeness (contains rich information that would benefit the objective function), representativeness (how many other samples are similar to it), and diversity (the samples do not concentrate in a particular region, but rather are scattered across the whole space). Strategies will be conditioned by particular requirements (e.g., whether the data instances are drawn from a pool of samples or a data stream). For a detailed review of active learning, the reader may be interested in some high-quality surveys of this domain. In particular, the works by Settles <cit.> and Rožanec et al. <cit.> can serve as an introduction to this topic. Furthermore, the surveys by Fu et al. <cit.> and Kumar et al. <cit.> provide an overview of querying strategies in a batch setting; the survey by Lughofer <cit.> give an overview of active learning in online settings, and the study by Ren et al. <cit.> describes active learning approaches related to deep learning models. While AI models have the potential to automate many tasks and achieve super-human performance levels, in most cases, such models are opaque to humans: their predictions are mostly accurate, but no intuition regarding their reasoning process is conveyed to humans. Understanding the rationale behind a model's prediction is of utmost importance, given it provides a means to assess whether the predictions are based on accurate facts and intuitions. Furthermore, it is crucial to develop means to understand the model's reasoning process given the impact such techniques have on the real world, either in fully automated settings or when decision-making is delegated to humans. Such insights enable responsible decision-making and accountability. The subfield of AI research developing techniques and mechanisms to elucidate the models' rationale and how to present them to humans is known as XAI. While the field can be traced back to the 1970s <cit.>, it has recently flourished with the advent of modern deep learning <cit.>. When dealing with XAI, it is important to understand what makes a good explanation. A good explanation must consider at least three elements <cit.>: (a) reasons for a given model output (e.g., features and their values, how strongly do features influence a forecast, whether the features at which the model looks at make sense w.r.t. the forecast, how did training data influence the model's learning), (b) context (e.g., the data on which the machine learning model was trained, the context on which inference is performed), and (c) how is the abovementioned information conveyed to the users (e.g., target audience, the terminology used by such an audience, what information can be disclosed to it). XAI can be valuable in enhancing human understanding with new (machine-based) perspectives. It can also help to understand whether the model is optimizing for one or few of all required goals and therefore identify an appropriate compromise between the different goals that must be satisfied for the problem at hand <cit.>. To assess the goodness of an explanation, aspects such as user satisfaction, the explanation persuasiveness, the improvement of human judgment, the improvement of human-AI system performance, the automation capability, and the novelty of explanation must be considered <cit.>. For a detailed review of XAI, the reader may consider the works of Arrieta et al. <cit.>, Doshi-Velez et al. <cit.>, and Schwalbe et al. <cit.>. 
The work of Bodria et al. <cit.> provides a comprehensive introduction to XAI black box methods, and the works of Doshi-Velez et al. <cit.>, Hoffman et al. <cit.> and Das et al. <cit.> focus on insights about how to measure the quality of explanations. Active learning and XAI can complement each other. Understanding the rationale behind a model prediction provides valuable insight to humans and can also be leveraged in an active learning setting. In the particular case of defect inspection, insights obtained by XAI techniques are usually presented in anomaly maps. Such anomaly maps highlight regions of the image the machine learning models consider to issue a prediction. The more perfect the learning of a machine learning model, the better those anomaly maps should annotate a given image indicating defective regions. Therefore, the insights obtained from those anomaly maps can be used in at least two ways. First, the anomaly maps can be handed to the oracle (human inspector), who, aided by the anomaly map and the image of the product, may realize better where the manufacturing errors are, if any. Second, anomaly maps can be used to develop novel models and active learning policies that allow for data selection, considering what was learned by the model and how the model perceives unlabeled data. This approach is detailed in Fig. <ref>, which depicts how an initial dataset is used to train machine learning models for defect classification or data generation. In the model training process, XAI can be used to debug and iterate the model until getting satisfactory results. The classification model is then deployed to perform inference on incoming product images from the manufacturing line. If the classification scores for certain classes are high enough, the product can be classified as good or defective. When the uncertainty around the predicted scores is not low enough, the case can be sent for manual revision. Insights obtained through XAI and unsupervised classification models can be used to hint to the human inspector where the defects may be located. Alternative data sources for the manual revision or data labeling process can be generative models (e.g., generative adversarial networks), which can be used to generate labeled synthetic data and validate the level of attention of a human inspector. When collecting data, active learning techniques can be used to select the most promising data instances from either generative models or incoming images from the manufacturing line, reducing the labeling effort. Finally, a separate model can monitor human inspectors to predict fatigue and performance. Such models can be a valuable tool to ensure workplace well-being and enhance work quality. Some of the results obtained within the STAR project are presented in Section <ref>. In recent years, researchers have made significant progress in understanding and quantifying fatigue and recognizing its impact on human performance and overall well-being. Through AI techniques, new approaches have emerged to accurately estimate the fatigue levels of individuals during different tasks and in different contexts <cit.>. One notable area of inquiry concerns the assessment of fatigue in the workplace. Understanding and managing worker fatigue has become essential given the increasing demands and pressures of modern work environments. AI models can consider various factors and features to assess employee fatigue levels accurately. 
These models can provide valuable insights for organizations looking to implement strategies and interventions to optimize productivity and ensure employee well-being or to support workflows, including quality controls, such as identifying when operators need a break. Although laboratory experiments have been conducted in this area <cit.>, industrial applications remain relatively restricted compared to other fields, such as driving <cit.>. § INDUSTRIAL APPLICATIONS This section briefly describes how some ideas presented in the previous sections have been realized within the EU H2020 STAR project. Three domains are considered: artificial intelligence for visual inspection, digital twins, and cybersecurity. §.§ Machine Learning and Visual Inspection In the domain of visual inspection, multiple use cases were considered. The datasets were provided by two industrial partners: Philips Consumer Lifestyle BV (Drachten, The Netherlands) and Iber-Oleff - Componentes Tecnicos Em Plástico, S.A. (Portugal). The Philips Consumer Lifestyle BV manufacturing plant is considered one of Europe's most important Philips development centers and is devoted to producing household appliances. They provided us with three datasets corresponding to different products. The first one corresponded to logo prints on shavers. The visual inspection task required understanding whether the logo was correctly printed or had some printing defect (e.g., double printing or interrupted printing). The second one corresponded to decorative caps covering the shaving head's center, and it required identifying whether the caps were correctly manufactured or if some flow lines or marks existed. Finally, the third dataset was about toothbrush shafts transferring motion from the handle to the brush. It required identifying whether the handles were manufactured without defects or if big dents, small dents, or some stripes could be appreciated. Iber-Oleff - Componentes Tecnicos Em Plástico, S.A. provided us with another dataset about automobile air vents they manufacture. The air vents have three components of interest: housing, lamellas (used to direct the air), and plastic links (which keep the lamellas tied together). The visual inspection task required us to determine whether (a) the fork is leaning against the support and correctly positioned, (b) the plastic link is present, (c) the lamella 1 is present, and the link is correctly assembled, and (d) the lamella 3 is present, and the link is correctly assembled. Through the research, the researchers aimed to develop a comprehensive AI-first and human-centric approach to automated visual inspection. In particular, they (i) developed machine learning models to detect defects, (ii) used active learning to enhance the models' learning process while alleviating the need to label data, (iii) used XAI to enhance the labeling process, (iv) analyzed how data augmentation techniques at embeddings and image level, along with anomaly maps can enhance the machine learning discriminative capabilities, (v) how human fatigue can be detected and predicted in humans, and (vi) how to calibrate and measure models' calibration quality to provide probabilistic predictive scores. Research at the EU H2020 STAR project confirmed that active learning could alleviate the need for data labeling and help machine learning models learn better based on fewer data instances <cit.>. Nevertheless, the effort saved depends on the pool of unlabeled images, the use case, and the active learning strategy. 
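The exact querying strategies evaluated in the project are described in the cited works. Purely as an illustration of the general mechanism, the following Python sketch ranks a pool of unlabeled samples by the margin between their two highest predicted class probabilities and forwards the most ambiguous ones to a human annotator; the model interface, pool, and budget are hypothetical.

import numpy as np

def margin_uncertainty(probs: np.ndarray) -> np.ndarray:
    # A smaller margin between the top-2 class probabilities means a more ambiguous sample.
    sorted_probs = np.sort(probs, axis=1)
    return sorted_probs[:, -1] - sorted_probs[:, -2]

def select_queries(model, unlabeled_pool, budget: int = 50) -> np.ndarray:
    # `model.predict_proba` and `unlabeled_pool` are assumed interfaces, not project APIs.
    probs = model.predict_proba(unlabeled_pool)   # shape: (n_samples, n_classes)
    margins = margin_uncertainty(probs)
    return np.argsort(margins)[:budget]           # indices of samples to hand to the human oracle

# A typical loop labels the selected samples, adds them to the training set,
# retrains the model, and repeats until the labeling budget is exhausted.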
Data augmentation techniques at an image or embedding level have increased the models' discriminative performance <cit.>. Furthermore, complementing images with anomaly maps as input to supervised classification models has substantially improved discriminative capabilities <cit.>. The data labeling experiments showed decreased labeling accuracy by humans over time <cit.>, which was attributed to human fatigue. While the future labeling quality can be predicted, it requires ground truth data. This can be acquired by showing synthetically generated images. Nevertheless, more research is required to devise new models that would consider other cues and predict human fatigue in data labeling without the requirement of annotated data. Finally, predictive scores alone provide little information to the decision-maker: predictive score distributions differ across different models. Therefore, performing probability calibration is paramount to ensure probability scores have the same semantics across the models. The research compared some of the existing probability calibration techniques and developed metrics to measure and assess calibration quality regardless of ground truth availability <cit.>. §.§ Human Digital Twins in Quality Control In the context of STAR, significant advancements have been made in developing human-digital twins (HDTs). In particular, the project has developed an infrastructure (Clawdite Platform <cit.>) that allows the effortless creation of replicas of human workers through instantiating their digital counterparts. These HDTs have diverse features, encompassing static characteristics, dynamic data, and behavioral and functional models <cit.>. To ensure a comprehensive representation of the human worker, STAR's HDT incorporates two crucial data types. Firstly, it assimilates physiological data collected from wearable devices. Secondly, it utilizes quasi-static data, which encapsulates characteristic attributes of the human, offering a holistic perspective on their traits. Central to STAR's HDT is an AI model designed to detect mental stress and physical fatigue. By leveraging physiological and quasi-static data, this AI model effectively gauges the stress and fatigue levels experienced by the human worker. This breakthrough in automated quality control holds remarkable significance, manifesting in two distinct ways: * During user manual inspection, the HDT continuously monitors the quality control process, actively identifying instances where the worker may be under significant mental or physical stress. In such cases, the system promptly suggests the worker take a break, ensuring their well-being and preventing any potential decline in performance. * During the training of automatic quality assessment models, as the worker evaluates and labels pictures during the data set creation, the system periodically assigns a confidence score to each label provided by the user. This confidence score is computed based on evaluating the worker's mental and physical stress levels estimated through the HDT's AI model. By considering these stress levels as an integral part of the quality evaluation process, the HDT provides valuable insights into the worker's state of mind and physical condition, allowing one to consider these features during the training of AI models for quality assessment and control. The integration of the HDTs, supported by the Clawdite Platform, in STAR's operations signifies a significant step forward in human-AI collaboration. 
This innovative approach prioritizes human workers' well-being and empowers automated quality control systems, ensuring optimal productivity and efficiency in various industrial settings. §.§ Making AI Visual Inspection Robust Against Adversarial Attacks In the context of the STAR project, an AI architecture was created for evaluating adversarial tactics and defense algorithms intended to safeguard, secure, and make the environments of manufacturing AI systems more reliable. More specifically, it was focused on AI-based visual inspection and tackled multiple use cases provided by two industrial partners: Philips Consumer Lifestyle BV (Drachten, The Netherlands) and Iber-Oleff - Componentes Tecnicos Em Plástico, S.A. (Portugal). Current production lines are often tailored for the mass production of one product or product series in the most efficient way. Given its many advantages, AI is being increasingly adopted for quality inspection. Such models are usually trained considering some convolutional neural network (CNN), which then classifies whether a product is defective through inference upon receiving images captured by the inspection cameras. Nevertheless, such models can be attacked through adversarial data, leading AI models to wrongly classify the products (e.g., not detecting defects). For instance, the adversary may exploit a vulnerability in the visual inspection camera and compromise the integrity of the captured data by manipulating the operational behavior of this business resource. Among the various experimental testbeds built in the context of the STAR project, the ones created with soother cherries provided by Philips Consumer Lifestyle BV (see Fig. <ref>) were the most challenging. The cherry is the upper part of the soother. The high quality of the cherry must be guaranteed to avoid any harm to the babies. Therefore, detecting any adversarial attack is of primary importance, given the consequences of the attack can directly impact children's health. The goal of the testbed was to quantify the impact of adversarial attacks on classification models performing a visual inspection and evaluate how effective the defenses against such attacks were. To build the testbed, the Adversarial Robustness Toolbox <cit.> was used. In the experiments, the following adversarial methods were used: Fast Gradient Sign Attack (FGSM) <cit.>, DeepFool <cit.>, NewtonFool <cit.>, and Projected Gradient Descent (PGD) <cit.>. The aim was to utilize these well-documented adversarial methods to derive crafted instances that can be used to attack the baseline classification model. An example of a perturbed image using the Deepfool method is given in Fig. <ref>B. Experiments were performed with defence strategies, namely FeatureSqueezing <cit.>, JpegCompression <cit.>, SpatialSmoothing <cit.>, TotalVarMin <cit.>, and Adversarial Training <cit.>. To gather insights regarding adversarial tactics and defenses, they were evaluated pairwisely. This enabled us to identify adversarial training as the best defense strategy to enhance the robustness of the CNN models. The basic idea behind adversarial training is to create examples that will be used later in the training process, creating a model aware of adversarial vectors launched against the quality control system. The results of the pairwise evaluation of the attacks and defenses are summarized in Fig. <ref>. The results are grouped into four sets based on the attack strategy. 
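To make the experimental setup concrete, the sketch below indicates how one such attack-defence pair could be set up in Python with the Adversarial Robustness Toolbox. It is an approximate illustration based on the library's public interface; the model, data arrays, input shape, and parameter values are assumptions rather than the exact configuration used in the testbed.

import numpy as np
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer

# `model`, `criterion`, and `optimizer` are a trained PyTorch inspection model and its training
# objects; x_train/y_train and x_test/y_test are (hypothetical) image arrays scaled to [0, 1].
classifier = PyTorchClassifier(
    model=model, loss=criterion, optimizer=optimizer,
    input_shape=(3, 128, 128), nb_classes=2, clip_values=(0.0, 1.0),
)

# Craft adversarial test images with FGSM and measure the accuracy drop.
attack = FastGradientMethod(estimator=classifier, eps=0.05)
x_adv = attack.generate(x=x_test)
acc_clean = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
acc_adv = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)

# Adversarial training: mix crafted examples into the training data and refit the model.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=10)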
A baseline classifier was initially trained for each of the four experiments (see tag "Training") to get the perception of the accuracy level that the quality inspection algorithm can achieve. The baseline model achieved an accuracy between 93% and 98%. The "Attack" bar indicates the accuracy of the classifier when posed against the adversarial attack. The DeepFool, FGSM, and PGD attacks strongly affected the classifier, causing the model's accuracy to drop below 30%. This was not the case for the NewtonFool attack, where the classifier's accuracy dropped to 84%. When considering defense strategies, Feature Squeezing, JPEG Compression, and Spatial Smoothing can defend against the DeepFool attack: for the given dataset, they led to an accuracy of 98%. However, TotalVarMin failed to defend the model. All the defenses failed against the FGSM and the PGD attacks. Based on the acquired results of the pairwise evaluations, it became clear that no clear mapping exists between types of attacks and defenses. Therefore it can be challenging for defenders to plan a strategy to cope against any attack successfully. This outcome advocates the criticality and the challenge of defending against adversarial AI attacks. While the off-the-shelf and state-of-the-art defenses cannot perform in a stable manner under different adversarial methods, the Adversarial Training approach seems robust. The results agree with the literature, advocating that Adversarial Training can be a robust solution that can cope with adversaries despite its simplicity. A more detailed description of the abovementioned work can be found in <cit.>. § CONCLUSION This work has briefly introduced state-of-the-art research on human-machine collaboration, perspectives on human-centric manufacturing, and the key aspects of trustworthiness and accountability in the context of Industry 5.0. It described research on automated quality inspection, considering the role of robotics, AI approaches, and solutions to visual inspection and how a fruitful human-machine collaboration can be developed in the visual inspection domain. Finally, it described the experience and results obtained through research performed in the EU H2020 STAR project. The converging view from the literature analysis is that human-machine cooperation requires adequate communication and control realized through effective bidirectional information exchange. Studies have been performed to understand peoples' emotional and social responses in human-machine interactions, understand task design, and how humans' trust, acceptance, decision-making, and accountability are developed or impacted in the presence of machines. In the field of visual inspection, much research was invested in automating the task of visual inspection by developing machine learning models to detect product defects. Furthermore, many research efforts targeted the development of techniques for XAI related to machine vision. Visual aids and hints derived by XAI are conveyed to humans through heat maps. Similarly, insights obtained from unsupervised machine learning models are conveyed to humans as anomaly maps. While such approaches solve particular problems, little research describes how a human-in-the-loop approach could be developed for visual inspection in manufacturing settings. This research aims to bridge the gap by implementing existing and researching novel active learning techniques for data selection to enhance the learning of machine learning algorithms. 
It also explores how labeling requirements could be reduced by employing few-shot learning and active learning techniques. Furthermore, research was conducted to understand how XAI and unsupervised classification methods can be used to generate heatmaps and anomaly maps to facilitate data labeling in the context of manual revision or data annotation tasks. Moreover, predictive models were developed to predict how heat maps and anomaly maps should be adapted over time to bridge the gap between the information conveyed by machine learning algorithms and explainability techniques and human perception. In addition, experiments were performed to gain insights related to human-fatigue monitoring in the context of visual inspection. The present work described a complete and modular infrastructure developed to instantiate HDT, and different AI models for perceived fatigue exertion and mental stress have been trained to derive relevant features for human-centered production systems. Finally, it describes some research on adversarial attacks and defenses to enhance the understanding of protecting visual inspection setups in manufacturing environments. While the research presented above advances the understanding of developing a human-in-the-loop approach for visual inspection in manufacturing, many open issues remain to be solved. Further research is required to understand how adaptive humans perceive hinting and how the many solutions described above contribute to building trust between humans and machines. Furthermore, effort must be invested to quantify the benefits such solutions bring to a manufacturing plant when implemented. Future research will encompass the integration of these solutions, aiming to achieve a comprehensive and synergistic implementation. The research will aim to develop new approaches that interleave active learning and XAI. Furthermore, novel few-shot learning solutions will be considered to allow for greater flexibility of the visual inspection while reducing data labeling requirements to a minimum. Finally, integrating AI visual inspection models and the HDT is expected to significantly augment the efficacy of quality inspection processes during user manual assessment and AI model training. Acknowledgments This work was supported by the Slovenian Research Agency and the European Union's Horizon 2020 program project STAR under grant agreement number H2020-956573.
http://arxiv.org/abs/2307.01813v1
20230704163952
Structural Balance and Random Walks on Complex Networks with Complex Weights
[ "Yu Tian", "Renaud Lambiotte" ]
cs.SI
[ "cs.SI", "cs.LG", "math.DS", "math.SP", "physics.soc-ph", "05C22, 05C50, 05C81, 37E25, 39A06, 91D30, 94C15" ]
Structural Balance and Random Walks on Complex Networks with Complex Weights =================================================== Complex numbers define the relationship between entities in many situations. A canonical example would be the off-diagonal terms in a Hamiltonian matrix in quantum physics. Recent years have seen an increasing interest in extending the tools of network science to the case where edge weights are complex numbers. Here, we focus on the case when the weight matrix is Hermitian, a reasonable assumption in many applications, and investigate both structural and dynamical properties of the complex-weighted networks. Building on concepts from signed graphs, we introduce a classification of complex-weighted networks based on the notion of structural balance, and illustrate the shared spectral properties within each type. We then apply the results to characterise the dynamics of random walks on complex-weighted networks, where local consensus can be achieved asymptotically when the graph is structurally balanced, while global consensus will be obtained when it is strictly unbalanced. Finally, we explore potential applications of our findings by generalising the notion of cut, and propose an associated spectral clustering algorithm. We also provide further characteristics of the magnetic Laplacian, associating directed networks to complex-weighted ones. The performance of the algorithm is verified on both synthetic and real networks. § INTRODUCTION Networks have been popular models for representing complex systems over the past few decades, and can provide valuable insights into various fields, such as physics, biology, economics and social sciences <cit.>. At its core, network science questions the relations between the structure of a system and the dynamics taking place on it, investigating how certain types of patterns may either slow down or accelerate diffusive dynamics, and using dynamical processes to extract important information from the underlying structure <cit.>. A majority of the research focuses on unweighted networks, where each pair of nodes may be connected or not, hence being encoded as a binary variable, but several methods and models have later been generalised to situations when edges are equipped with a real-valued weight, often considered to be positive (e.g., an intensity or a frequency) but also more recently with an arbitrary sign (e.g., to encode a conflict). This paper focuses on networks where the weight of an edge is not a real number, but a complex one. Networks with complex weights find applications in a variety of scientific areas, including quantum information, computational social science, and machine learning <cit.>. Complex numbers are widely used in applied mathematics and physics, for example to represent periodically varying signals in signal processing and to describe potential flow in two dimensions in fluid mechanics. The complex field is also intrinsic to quantum mechanics, in terms of complex-valued equations, wave functions, operators and Hilbert spaces <cit.>. Further, the fundamental role of complex numbers in quantum theory has been verified in various experiments <cit.>. Meanwhile, in social network analysis, the use of complex adjacency matrices can store more information in the context of asymmetric weighted communication <cit.>. Artificial neural networks have also been extended to consider complex weights, which outperform their real-valued counterparts <cit.>.
In many applications, the weight matrices are assumed to be Hermitian <cit.> – which naturally generalises symmetric real matrices to the complex domain. Specifically, for an isolated quantum system, we can think of the Hermitian matrix associated with the Hamiltonian as an adjacency matrix, whose complex-valued off-diagonal terms encode the amplitudes for transitions from one state to another. This can also be extended to quantum networks based on entangled states and physical connectivity <cit.>. As an example, originating from quantum mechanics, the magnetic Laplacian (investigated further in the last part of this paper), being Hermitian, has been considered as an efficient representation for directed networks in various downstream applications, including community detection <cit.>. In machine learning, it has been shown that the representation of a dataset can be dramatically improved by attaching to each edge not only a weight but also the transformation from one endpoint to the other <cit.>. If rotational transformations are considered, then from their connection with complex numbers, the weight matrix of the graph is necessarily Hermitian.

The theoretical and mathematical understanding of networks with complex weights is relatively preliminary. Among the few works in the literature, Böttcher and Porter have generalised several network measures to the complex domain, including random walk centralities <cit.>. Lange et al. have proved extensions of the Cheeger inequality for several versions of the magnetic Laplacian, and as byproducts, they have considered the notion of balance on graphs with complex weights (or signature values) <cit.>. Note that the concept of balance was initially motivated by problems in social psychology <cit.>, and has mainly been considered in the study of signed networks, where the edge weights can be positive or negative but still in the real domain <cit.>. Within the literature of signed networks, works on signed networks that are not balanced <cit.> are relatively limited, especially in relation to dynamics. In that direction, let us point to <cit.>, where the researchers extended random walks to signed networks and investigated their behaviour in different types of signed networks.

In this paper, we develop further the idea of balance in complex-weighted networks, to consider the whole range of situations in which networks with complex weights may be balanced, antibalanced, or strictly unbalanced. Our first contribution is to characterise each type of complex-weighted network in terms of the spectral properties of the complex weight matrix. In particular, we show that the spectral radius of this matrix is smaller than that of its real counterpart, in which each element is replaced by its magnitude, if and only if the underlying network is strictly unbalanced. Then we extend an important class of dynamics, random walks, to the complex domain, and our second contribution is to characterise the variety of its behaviour on complex-weighted networks. Finally, we have applied our analysis to two applications: spectral clustering and the magnetic Laplacian. We generalise the notion of cut for complex weights, and our third contribution is to propose a spectral clustering algorithm on complex-weighted networks. It is the first algorithm of this type, to the best of our knowledge.
Our fourth contribution is to provide further characterisation of the eigenvalues and eigenvectors of the magnetic Laplacian, thereby helping us to build a spectral clustering algorithm to detect communities in directed networks. Throughout the paper, the results are numerically verified on both synthetic and real networks.

This paper is organised as follows. In section <ref>, we first introduce the main notations in section <ref>, then propose the classification of complex-weighted networks based on the notion of balance in section <ref>, and further characterise the spectral properties of each type in section <ref>. Then in section <ref>, after illustrating the random walk dynamics on networks with complex weights in section <ref>, we further characterise the distinct behaviour of the dynamics on each type of complex-weighted network in section <ref>. An example is given in section <ref> where the potential phases uniformly divide the unit circle. Based on the results we have developed, we consider the clustering problem on complex-weighted networks, and propose a spectral clustering algorithm in section <ref>. We also provide further characteristics of the magnetic Laplacian in section <ref>. The results in both sections are verified on synthetic and real networks. Finally, we conclude with potential future directions in section <ref>.

§ NETWORKS WITH COMPLEX WEIGHTS

§.§ Notations

We consider networks in the form of weighted graphs G=(V,E,𝐖), being connected, directed and complex-weighted, where V={v_1, v_2, …, v_n} is the node set, E is the edge set, and 𝐖 = (W_ij) is the complex weight matrix, with W_ij∈ℂ characterising the edges between nodes. We assume that 𝐖 is Hermitian, i.e., 𝐖^* = 𝐖. Specifically, we further decompose each element of the complex weight matrix as W_ij = r_ije^iφ_ij, where r_ij≥ 0 indicates the magnitude of the value while φ_ij∈ [0, 2π) is the phase, and then we have r_ij = r_ji and φ_ij = (2π - φ_ji) mod 2π. For example, if each node represents a figure and the edge weights characterise the similarity between them, r_ij can store the maximum similarity value between the two figures subject to rotations, while the extra freedom introduced by φ_ij can be used to record the rotation value associated with the maximum. Clearly, classic networks can be retrieved if all phases are 0, while signed networks can be obtained if all phases can only be 0 or π. Furthermore, corresponding to the sign of paths (cycles) in signed networks, we also define the phase of a path (cycle)[Note that the term "directed paths (cycles)" is generally used in directed graphs, but since we assume that 𝐖 is Hermitian, the existence of a path (cycle) without direction is equivalent to the existence of a path (cycle) with either direction in G. Hence, we directly use "paths (cycles)" in this paper, and specify the direction when necessary.] as the sum of phases of the composing edges, and we still restrict the phase to lie in [0,2π).

The degree of a node v_i sums over the magnitudes of the edges incident on this node, d_i = ∑_j|W_ij| = ∑_j r_ij, and we define the complex Laplacian matrix as 𝐋 = 𝐃 - 𝐖, where the complex degree matrix 𝐃 is the diagonal matrix with 𝐝 = (d_i) on its diagonal. As with the graph Laplacian of classic networks, we will show later in section <ref> that the complex Laplacian can also be used to perform clustering on complex-weighted networks. Meanwhile, we define the complex random walk Laplacian as 𝐋_rw = 𝐈 - 𝐃^-1𝐖, where 𝐈 is the identity matrix.
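To make the notation concrete, the following Python/NumPy sketch (the toy weight matrix and all variable names are our own illustrative choices, not taken from the text) constructs a small Hermitian weight matrix 𝐖 together with 𝐃, 𝐋 and 𝐋_rw.

import numpy as np

# Toy Hermitian weight matrix on three nodes: W_ij = r_ij * exp(i * phi_ij),
# with W[j, i] = conj(W[i, j]) so that W is Hermitian (illustrative values).
W = np.array([
    [0.0,                           1.0 * np.exp(1j * np.pi / 3),  0.5 * np.exp(1j * np.pi / 2)],
    [1.0 * np.exp(-1j * np.pi / 3), 0.0,                           2.0 * np.exp(1j * np.pi / 6)],
    [0.5 * np.exp(-1j * np.pi / 2), 2.0 * np.exp(-1j * np.pi / 6), 0.0],
])
assert np.allclose(W, W.conj().T)                 # Hermitian check

d = np.abs(W).sum(axis=1)                         # degrees d_i = sum_j r_ij
D = np.diag(d)
L = D - W                                         # complex Laplacian
L_rw = np.eye(len(d)) - np.diag(1.0 / d) @ W      # random walk Laplacian I - D^{-1} W

# L is Hermitian, so its eigenvalues are real (and non-negative, as shown later).
print(np.round(np.linalg.eigvalsh(L), 4))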
Later in section <ref>, we will show that it relates to a type of random walks on complex-weighted networks. In the case of r_ij = 1, ∀ (v_i,v_j)∈ E, i.e., only uniform magnitude 1, we define the network as being “unweighted". In this case, 𝐖 = 𝐀, where 𝐀 is the adjacency matrix and only records the phase information, i.e., A_ij = e^φ_ij if r_ij > 0 and 0 otherwise, ∀ v_i,v_j∈ V. §.§ Structural balance In this section, we propose a classification of graphs with complex weights. The classification is based on extending the important notions of balance and antibalance to complex-weighted graphs, motivated by the special case of signed networks <cit.>. Let G = (V, E, 𝐖) be a complex-weighted graph. * Structural Balance. G is structurally balanced if the phase of every cycle is 0. * Structural Antibalance. G is structurally antibalanced if the phase of every cycle, after adding π to the phase of each composing edge, is 0. * Strict Unbalance. G is strictly unbalanced if it is neither balanced nor antibalanced. Since 𝐖 is Hermitian, if one cycle has phase θ, the one with the reversed direction will have phase 2π-θ. The correctness of the definition relies on the fact that the only two cases where the cycle and its reversed one have the same phase is when θ = 0 or π. A complex-weighted graph G which is both balanced and antibalanced has to be bipartite. From G being balanced, the phase of every cycle is 0. Then if we add π to each composing edge in a cycle, we add π n_c to the phase of the cycle, where n_c is the length of the cycle. From G being antibalanced, π n_c = 2kπ for some k∈ℤ^+, and this is true for every cycle in G. Hence, there is no odd cycle in G, which indicates that G is bipartite. We now consider the structural theorems for balanced and antibalance in complex-weighted graphs, in order to provide the graph decomposition based on the phase of edges. See section <ref> in the Appendix for more details of the proofs. A complex-weighted graph G is balanced if and only if there is a partition {V_i}_i=1^l_p s.t. (i) any edges within each node subset have phase 0, (ii) any edges between the same pair of node subsets have the same phase, and (iii) if we consider each node subset as a super node, then the phase of any cycle is 0. If such partition exists, G is balanced by definition. If G is balanced, we can show that such partition exists by construction: we start with an arbitrary node, group its neighbours by the phase of edges between them, and repeat the process for other nodes until all nodes have been grouped. A complex-weighted graph G is antibalanced if and only if there is a partition {V_i}_i=1^l_p s.t. (i) any edges within each node subset have phase π, (ii) any edges between the same pair of node subsets have the same phase, and (iii) if we consider each node subset as a super node and add phase π to each super edge, then the phase of any cycle is 0. The result follows from the relationship between balance and antibalance, and Theorem <ref>. §.§ Spectral characterisation Now, we further characterise the spectral properties of complex weight matrix. Specifically, we aim to explore the different effects from adding the phase information to the classic weighted graphs only with the magnitude information. Throughout this section, we consider the complex weight matrix 𝐖, and its relations with the one ignoring the phase information 𝐖̅ = (W̅_ij), where W̅_ijW_ij = r_ij, encodes the weights in the classic weighted graph G̅ = (V, E, 𝐖̅). 
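Before moving to the spectral characterisation, we note that the constructive proof of the structure theorem suggests a simple computational test for balance: propagate tentative phases from an arbitrary root and check every edge for consistency. The Python sketch below is our own illustration of this idea for a connected graph, assuming the Hermitian conventions above; the antibalance check simply reuses the fact that G is antibalanced exactly when -𝐖 is balanced.

import numpy as np
from collections import deque

def is_balanced(W, tol=1e-9):
    """Check structural balance of a connected, Hermitian weight matrix W.

    Node 0 is assigned phase 0; phases are propagated along edges
    (phase[j] = phase[i] + phi_ij); any edge closing a cycle must agree
    with the assignment, i.e. every cycle must have phase 0.
    """
    n = W.shape[0]
    phase = np.full(n, np.nan)
    phase[0] = 0.0
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in np.nonzero(np.abs(W[i]) > tol)[0]:
            phi_ij = np.angle(W[i, j])
            if np.isnan(phase[j]):
                phase[j] = phase[i] + phi_ij
                queue.append(j)
            else:
                diff = (phase[j] - phase[i] - phi_ij) % (2 * np.pi)
                if min(diff, 2 * np.pi - diff) > 1e-6:
                    return False, None
    return True, np.mod(phase, 2 * np.pi)

def is_antibalanced(W, tol=1e-9):
    # G is antibalanced iff adding pi to every phase (i.e. -W) gives a balanced graph.
    return is_balanced(-W, tol)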
We first show that when G is either balanced or antibalanced, the full spectrum of the complex weight matrix 𝐖 can be obtained from 𝐖̅ in Theorem <ref>, and further characterise its leading eigenvalues and eigenvectors in Proposition <ref>. Finally if G is strictly unbalanced, we demonstrate a general property that its spectral radius will be smaller than 𝐖̅'s in Theorem <ref>. We defer more details of the proofs to section <ref> in the Appendix. If G is balanced, then ∀ v_i,v_j∈ V, all paths from v_i to v_j have the same phase. We can show by contradiction that if there are two paths of different phases between the same nodes, G is not balanced. Let 𝐖 = 𝐔Λ𝐔^* and 𝐖̅ = 𝐔̅Λ̅𝐔̅^* be the unitary eigendecompositions of 𝐖 and 𝐖̅, respectively, where 𝐔𝐔^* = 𝐈 and 𝐔̅𝐔̅^* = 𝐈. Let {V_i}_i=1^l_p denote the corresponding partition for either balanced or antibalanced graphs, and 𝐈_1 denote the diagonal matrix whose (i,i) element is exp(θ_1σ(i)), where σ(·) returns the node subset that a node is associated with, and θ_hl is the phase of a path from nodes in V_h to nodes in V_l in the balanced case and is the phase of the path after adding π to each composing edge in the antibalanced case. * If G is balanced, Λ = Λ̅, 𝐔 = 𝐈_1^*𝐔̅. * If G is antibalanced, Λ = -Λ̅, 𝐔 = 𝐈_1^*𝐔̅. We first note that such 𝐈_1 matrix exists in the balanced case since, if we consider each node subset as a super node, all paths from V_1 to V_i have the same phase, by Lemma <ref>. Such 𝐈_1 matrix exists in the antibalanced case since the graph becomes balanced after adding phase π to each edge. If G is balanced, 𝐖 = 𝐈_1^*𝐖̅𝐈_1, by Theorem <ref>. Then 𝐖 = 𝐈_1^*𝐖̅𝐈_1 = 𝐈_1^*𝐔̅Λ̅𝐔̅^*𝐈_1 = (𝐈_1^*𝐔̅)Λ̅(𝐈_1^*𝐔̅)^*. It is the unitary decomposition of 𝐖 by the uniqueness. Hence, Λ = Λ̅ and 𝐔 = 𝐈_1^*𝐔̅. While if G is antibalanced, 𝐖 = -𝐈_1^*𝐖̅𝐈_1, by Theorem <ref>. Then 𝐖 = -𝐈_1^*𝐖̅𝐈_1 = -𝐈_1^*𝐔̅Λ̅𝐔̅^*𝐈_1 = (𝐈_1^*𝐔̅)(-Λ̅)(𝐈_1^*𝐔̅)^*. It is the unitary decomposition of 𝐖 by the uniqueness. Hence, Λ = -Λ̅ and 𝐔 = 𝐈_1^*𝐔̅. Suppose G is not bipartite, or is aperiodic. Let λ_1≥λ_2 ≥…≥λ_n denote the eigenvalues of 𝐖 with the associated eigenvectors 𝐮_1, 𝐮_2, …, 𝐮_n, λ̅_1≥λ̅_2 ≥…≥λ̅_n denote the eigenvalues of 𝐖̅ with the associated eigenvectors 𝐮̅_1, 𝐮̅_2, …, 𝐮̅_n, and ρ(·) denotes the spectral radius. Let {V_i}_i=1^l_p denote the corresponding partition for either balanced or antibalanced graphs. * If G is balanced, λ_1 = ρ(𝐖) > 0, and this eigenvalue is simple and the only one of the largest magnitude, where λ_i < λ_1, ∀ i 1. * If G is antibalanced, λ_n = -ρ(𝐖) < 0, and this eigenvalue is simple and the only one of the largest magnitude, where λ_i < -λ_n, ∀ i n. Meanwhile, the associated eigenvector, 𝐮_1 for balanced graphs and 𝐮_n for antibalanced graphs, is the only one of the following pattern: it has phase 2π - θ_1i if the underlying node is in node subset V_i, where θ_1i is as defined in Theorem <ref>. The results follow from Theorem <ref> and Perron-Frobenius theorem. If G is strictly unbalanced, then ∃ v_i,v_j∈ V and z∈ℤ^+ s.t. there are two walks of length z between nodes v_i, v_j of different phases. When G is periodic, thus bipartite, we can clearly see that if all walks between the same nodes of the same length have the same phase, G is balanced. When G is aperiodic, we can still prove by contradiction, where if all walks between the same nodes of the same lengths have the same phase, then all walks between the same nodes of even lengths have the same phase, and then G is either balanced or antibalanced. 
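These statements are straightforward to verify numerically. In the sketch below (our own construction), a balanced matrix is obtained by conjugating a non-negative symmetric matrix 𝐖̅ with a diagonal phase matrix, so that 𝐖 and 𝐖̅ share their spectrum; perturbing a single phase then makes the graph strictly unbalanced (almost surely, for a generic perturbation) and strictly shrinks the spectral radius, as stated in the theorem that follows.

import numpy as np

rng = np.random.default_rng(0)
n = 5

# Non-negative symmetric magnitude matrix W_bar (a dense toy graph).
R = rng.uniform(0.5, 2.0, size=(n, n))
W_bar = np.triu(R, 1) + np.triu(R, 1).T

# Balanced complex matrix: W = I1^* W_bar I1 for a diagonal phase matrix I1,
# so every cycle phase cancels and the spectra coincide.
I1 = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, size=n)))
W = I1.conj().T @ W_bar @ I1
print(np.allclose(np.linalg.eigvalsh(W), np.linalg.eigvalsh(W_bar)))   # True

# Strictly unbalanced perturbation: rotate one edge phase.
W_u = W.copy()
W_u[0, 1] *= np.exp(1j * 1.0)
W_u[1, 0] = np.conj(W_u[0, 1])
rho = lambda M: np.max(np.abs(np.linalg.eigvalsh(M)))
print(rho(W_u) < rho(np.abs(W_u)))                                     # True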
G is strictly unbalanced if and only if ρ(𝐖) < ρ(𝐖̅).

If ρ(𝐖) < ρ(𝐖̅), G is strictly unbalanced by Theorem <ref>. If G is strictly unbalanced, we can find two walks between the same nodes of the same length but with different phases by Lemma <ref>, which strictly reduces the spectral radius by the definition of the matrix 2-norm.

§ RANDOM WALKS

Characterised by distinct structural properties, the classifications we have introduced also provide a way to understand the dynamics happening on complex-weighted networks. Specifically, we consider here random walks as an important example.

§.§ Definition

In this section, we extend random walks to networks with complex weights. A natural way to define this process is to consider a population made of different types of walkers, each having its own phase associated to it. The dynamics is then driven by the jumps of the walkers and by the change of their phases, depending on the complex value of the edge that has been traversed. In general, the system is described by the densities on node v_i, x_i^θ, for each possible θ∈[0, 2π). For simplicity of illustration, we first consider the unweighted case where r_ij = 1, ∀ (v_i,v_j)∈ E. As we will see, to describe the random walk dynamics, it is convenient to rewrite the adjacency matrix as 𝐀 = ∫_0^2πe^iθ𝐀^θ dθ, where the integration is applied elementwise, and 𝐀^θ = (A^θ_ij) with A^θ_ij = A̅_ijδ(θ - φ_ij), where A̅_ij = |A_ij|, encoding the presence of an edge with phase θ from v_i to v_j. Here, δ(·) denotes the Dirac delta function. Note also that 𝐀̅ = ∫_0^2π𝐀^θ dθ, as each edge is endowed with one single phase.

To define the random walk process in the case of complex weights, one key step is to determine how different walkers interact with edges of different phases. Here, we assume that after traversing an edge, walkers add the phase of the edge to the phase they originally have. That is, a walker of phase θ, after going through an edge of phase φ, has phase θ + φ. For illustrative purposes, we also incorporate the periodicity of phases into the density, where x_i^θ + 2kπ = x_i^θ, ∀ k∈ℤ. Hence, for walkers to have phase θ on node v_j,

x^θ_j(t+1) = ∑_i1/d_i∫_0^2πA_ij^φ x_i^θ - φ(t)dφ ≕ 𝒜_j(θ; t+1).

The whole dynamics is thus governed by the above set of operators, {𝒜_j}_v_j∈ V. Now that we know the densities of walkers of different phases, we can think of different quantities to summarise the dynamics. For example, we can ignore the phase of the walkers and only consider the number of walkers on each node v_j, n_j, which can be obtained by directly integrating Eq. (<ref>) over all possible phases θ,

n_j(t+1) = ∫_0^2π x_j^θ(t+1)dθ = ∫_0^2π∑_i1/d_i∫_0^2πA_ij^φ x_i^θ - φ(t)dφ dθ = ∫_0^2π∑_i1/d_iA̅_ij x_i^θ - φ_ij(t)dθ = ∑_iA̅_ij/d_i∫_0^2πx_i^θ - φ_ij(t)dθ = ∑_iA̅_ij/d_i n_i(t).

This retrieves the classic random walk where only the existence of an edge matters and the transition matrix is 𝐏̅ = 𝐃^-1𝐀̅. In contrast, we can take into account the phase information by averaging the phases of the random walkers on each node by their densities, i.e., integrating over all possible phases θ multiplied by their density as in Eq. (<ref>),

x_j(t+1) = ∫_0^2π e^iθ x_j^θ(t+1)dθ = ∑_i1/d_i∫_0^2π∫_0^2πe^iθA_ij^φ x_i^θ - φ(t)dφ dθ = ∑_iA_ij/d_i∫_0^2πe^i(θ - φ_ij)x_i^θ - φ_ij(t)dθ = ∑_iA_ij/d_i x_i(t).

This gives the transition matrix for complex-weighted networks 𝐏 = 𝐃^-1𝐀.
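As a sanity check of these definitions, one can propagate an initial distribution with both transition matrices directly. The Python sketch below uses a small balanced triangle of our own choosing; the quantity n follows 𝐏̅ and recovers the usual stationary distribution, while x follows 𝐏 and converges to a complex state whose phases reflect the balanced structure.

import numpy as np

def transition_matrices(W):
    """Return P_bar = D^{-1} |W| (phase-ignoring walk) and P = D^{-1} W."""
    D_inv = np.diag(1.0 / np.abs(W).sum(axis=1))
    return D_inv @ np.abs(W), D_inv @ W

# Unweighted triangle whose edges all carry phase 2*pi/3: the cycle phase is
# 2*pi = 0, so the graph is balanced (and non-bipartite).
w = np.exp(2j * np.pi / 3)
A = np.array([[0, w, np.conj(w)],
              [np.conj(w), 0, w],
              [w, np.conj(w), 0]])
P_bar, P = transition_matrices(A)

x = np.array([1.0, 0.0, 0.0], dtype=complex)   # all walkers start on node 1 with phase 0
n = x.copy().real
for _ in range(100):
    x = x @ P          # phase-averaged state x(t+1) = x(t) P (row-vector convention)
    n = n @ P_bar      # plain occupation numbers
print(np.round(n, 3))  # -> [1/3, 1/3, 1/3], the stationary distribution d_j / 2m
print(np.round(x, 3))  # nonzero complex limit; the entry phases follow the balanced partition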
We note that the random walks we have discussed here can be extended to weighted networks similarly, and also to the continuous-time setting <cit.>, by assuming that walkers jump at continuous rate or at a rate proportional to the node degree. §.§ Dynamical properties We further characterise the properties of random walks on complex-weighted networks via the complex transition matrix 𝐏. Note that 𝐏 = 𝐃^-1𝐖 is not necessarily Hermitian. However, it is similar to a Hermitian matrix 𝐏_h = 𝐃^-1/2𝐖𝐃^-1/2, where 𝐏 = 𝐃^-1/2𝐏_h𝐃^1/2, and for each eigenpair (λ, 𝐃^1/2𝐱) of 𝐏_h, (λ, 𝐱) is also an eigenpair of 𝐏. Hence, the results in section <ref> can still be applied, but indirectly through 𝐏_h. We denote the eigenvalues by λ_1≥…≥λ_n with the associated (right) eigenvectors of 𝐏 by 𝐮_1, …, 𝐮_n, thus the eigenvectors of 𝐏_h are 𝐃^1/2𝐮_1, …, 𝐃^1/2𝐮_n. For illustrative purposes, in this section, we only assume 𝐃^1/2𝐮_1, …, 𝐃^1/2𝐮_n to be orthonormal. We denote the counterparts ignoring the phase information as 𝐏̅ and 𝐏̅_h, and their eigenvalues as λ̅_1≥…≥λ̅_n. We refer to section <ref> in the Appendix for more features of the random walks. Balanced networks. We start with the case when the complex-weighted network is structurally balanced, and will show that a steady state is achievable. The complex transition matrix 𝐏 has eigenvalue 1 if and only if G is balanced. When G is balanced, by Theorem <ref>, 𝐏_h shares the same spectrum as 𝐏̅_h, thus λ_1 = λ̅_1 = 1, and 𝐏 has eigenvalue 1. We then consider the case when 𝐏 has eigenvalue 1, i.e., λ_1 = 1. Suppose G is strictly unbalanced, then by Theorem <ref>, ρ(𝐏) = ρ(𝐏_h) < ρ(𝐏̅_h) = ρ(𝐏̅) = 1, which leads to contradiction. Suppose G is antibalanced, then by Theorem <ref>, λ̅_n = -λ_1 = -1, thus G̅ is bipartite and so is G. From Proposition <ref>, G is also balanced. Hence, G is balanced. We also note that 𝐋_rw = 𝐈 - 𝐏, hence 1 being an eigenvalue of 𝐏 is equivalent to 0 being an eigenvalue of 𝐋_rw, and further of 𝐋. The latter has been shown as a sufficient and necessary condition for the complex-weighted graph to be balanced <cit.>. If G is balanced and is not bipartite, then the steady state is 𝐱^* = (x_j^*) where x_j^* = exp(θ_1σ(j))(𝐱(0)^*1̃_1)d_j/(2m) where 𝐱(0) = (x_i(0)) is the initial state vector with ∑_ix_i(0) = 1, 2m = ∑_jd_j, 𝐈_1, θ_hl and σ(·) are as defined in Theorem <ref>, and 1̃_1 is the diagonal vector of 𝐈_1^*. By Proposition <ref>, λ_1 = 1. By Proposition <ref>, λ_i < 1, ∀ i 1. Hence, lim_t→∞𝐏_h^t = lim_t→∞∑_i=1^nλ_i^t(𝐃^1/2𝐮_i)(𝐃^1/2𝐮_i)^* = (𝐃^1/2𝐮_1)(𝐃^1/2𝐮_1)^*, where the eigenvectors 𝐃^1/2𝐮_i are orthonormal to each other, and 𝐮_i is the eigenvector of 𝐏 associated with the same eigenvalue. By 𝐮̅_1 being all-one vector, the relationships between 𝐏 and 𝐏_h, and Theorem <ref>, we have 𝐮_1 = c1̃_1 for some nonzero constant c∈ℝ. WOLG, we assume c > 0. Then since 𝐃^1/2𝐮_i has 2-norm 1, c = 1/√(2m). Hence, 𝐱^* = lim_t→∞𝐱(0)^*𝐏^t = lim_t→∞𝐱(0)^*𝐃^-1/2𝐏_h^t𝐃^1/2 = 𝐱(0)^*𝐃^-1/2(𝐃^1/2𝐮_1)(𝐃^1/2𝐮_1)^*𝐃^1/2 = 𝐱(0)^*(c1̃_1)(c1̃_1)^*𝐃 = 𝐱(0)^*1̃_1/(2m)1̃_1^*𝐃. Hence, x^*_j = exp(θ_1σ(j))(𝐱(0)^*1̃_1)d_j/(2m). Hence, from Proposition <ref>, consensus can be obtained asymptotically within each part corresponding to the balanced structure. The steady state now depends on the initial condition, which deviates from the random walks defined on classic networks with positive-valued weights. The dependence can be partially removed if the initialisation agrees with the balanced structure s.t. 
𝐱(0)^T1̃_1 = 1, while the phase of the steady state still depends on 𝐱(0). Antibalanced networks. We then continue to the case when the complex-weighted network is antibalanced, and will show that a steady state cannot be achieved generally, but one for odd times while another for even times. The complex transition matrix 𝐏 has eigenvalue -1 if and only if G is antibalanced. A graph G = (V, E, 𝐖) is antibalanced if and only if the graph constructed by adding phase π to each edge G_n = (V, E, -𝐖) is balanced. By Proposition <ref>, G_n is balanced if and only if its complex transition matrix 𝐏_n = - 𝐏 has eigenvalue 1, which is equivalent to that 𝐏 has eigenvalue -1. If G is antibalanced and is not bipartite, the random walks have different steady states for odd or even times, denoted by 𝐱^*o = (x_j^*o) and 𝐱^*e = (x_j^*e), respectively, with x_j^*o = -exp(θ_1σ(j))(𝐱(0)^*1̃_1)d_j/(2m), x_j^*e = exp(θ_1σ(j))(𝐱(0)^*1̃_1)d_j/(2m), where 𝐱(0) is the initial state, 2m = ∑_jd_j, 𝐈_1, θ_hl and σ(·) are as defined in Theorem <ref>, and 1̃_1 is the diagonal vector of 𝐈_1^*. By Proposition <ref>, λ_n = -1. By Proposition <ref>, λ_i < 1, ∀ i n. Hence, for odd times lim_t→∞𝐏_h^2t-1 = lim_t→∞∑_i=1^nλ_i^2t-1(𝐃^1/2𝐮_i)(𝐃^1/2𝐮_i)^* = -(𝐃^1/2𝐮_n)(𝐃^1/2𝐮_n)^*, and similarly for even times, lim_t→∞𝐏_h^2t = (𝐃^1/2𝐮_n)(𝐃^1/2𝐮_n)^*, where the eigenvectors 𝐃^1/2𝐮_i are orthonormal to each other, and 𝐮_i is the eigenvector of 𝐏 associated with the same eigenvalue. By 𝐮̅_1 being all-one vector, the relationships between 𝐏 and 𝐏_h, and Theorem <ref>, 𝐮_n has the specific structure where 𝐮_n = c1̃_1 for some nonzero constant c∈ℝ. WOLG, we assume c > 0. Then since 𝐃^1/2𝐮_i has 2-norm 1, c = 1/√(2m). Hence, for odd times 𝐱^*o = lim_t→∞𝐱(0)^*𝐏^2t-1 = lim_t→∞𝐱(0)^*𝐃^-1/2𝐏_h^2t-1𝐃^1/2 = -𝐱(0)^*𝐃^-1/2(𝐃^1/2𝐮_n)(𝐃^1/2𝐮_n)^*𝐃^1/2 = -𝐱(0)^*(c1̃_1)(c1̃_1)^*𝐃 = - 𝐱(0)^*1̃_1/(2m)1̃_1^*𝐃. Similarly, for even times, 𝐱^*e = 𝐱(0)^*1̃_1/(2m)1̃_1^*𝐃. Hence, from Proposition <ref>, consensus can still be obtained asymptotically within each part of the antibalanced partition, if we consider odd times and even times separately. Here, the “steady state" depends on not only the initialisation but also odd and even times. Strictly unbalanced networks. Finally, we consider all the remaining complex-weighted networks, the strictly unbalanced ones. Interestingly, a steady state is actually achievable in this case, which relates to global consensus. If G is strictly unbalanced, then the steady state is 0, where 0 is the vector of zeros. When G is strictly unbalanced, by Theorem <ref>, ρ(𝐏_h) < ρ(𝐏̅_h) = 1. Hence, lim_t→∞𝐏_h^t = lim_t→∞∑_t=1^nλ_i^t(𝐃^1/2𝐮_i)(𝐃^1/2𝐮_i)^* = 𝐎, where 𝐎 is the matrix of zeros. Hence, 𝐱^* = lim_t→∞𝐱(0)^*𝐏^t = lim_t→∞𝐱(0)^*𝐃^-1/2𝐏_h^t𝐃^1/2 = 𝐱(0)^*𝐃^-1/2𝐎𝐃^1/2 = 0. We first note that if the potential phases can only be 0 or π, i.e., the complex-weighed network is reduced to be a signed network, the results derived in this section are consistent with the ones in <cit.>. Then inspired by the Cheeger inequalities extended for complex weights <cit.>, we propose the following measures to further characterise strictly unbalanced networks: (i) d_b(G) = 1 - λ_1 for the dissimilarity to balance, and (ii) d_b(G) = 1 + λ_n for the dissimilarity to antibalance. We leave the detailed exploration of these measures to future work. 
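The three regimes are easy to reproduce numerically. The sketch below (our own toy graphs) iterates 𝐏 on three versions of an unweighted triangle whose edges all carry the same phase: with phase 2π/3 the cycle phase is 0 (balanced), with phase π/3 the cycle phase becomes 0 only after adding π to every edge (antibalanced), and with phase π/2 neither holds (strictly unbalanced).

import numpy as np

def triangle(phi):
    """Unweighted triangle whose edges 1->2, 2->3, 3->1 all carry phase phi."""
    z = np.exp(1j * phi)
    return np.array([[0, z, np.conj(z)],
                     [np.conj(z), 0, z],
                     [z, np.conj(z), 0]])

def iterate(W, steps):
    x = np.zeros(W.shape[0], dtype=complex)
    x[0] = 1.0                                  # initial condition x(0) = e_1
    P = np.diag(1.0 / np.abs(W).sum(axis=1)) @ W
    for _ in range(steps):
        x = x @ P
    return np.round(x, 3)

print(iterate(triangle(2 * np.pi / 3), 200))    # balanced: nonzero limit (local consensus)
print(iterate(triangle(np.pi / 3), 200),
      iterate(triangle(np.pi / 3), 201))        # antibalanced: the two parities differ by a sign
print(iterate(triangle(np.pi / 2), 200))        # strictly unbalanced: decays to 0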
§.§ Example: Special choice of phases When considering all possible phases in [0, 2π), we build a map between the edges and the unitary group of dimension 1, U(1), or the circle group, consisting of all complex numbers of modulus 1. Here, we consider a special choice of its subgroups, cyclic groups S_k^1 {ξ^j|j = 0, 1, …, k-1} where ξ = e^2π/k is the primitive k-th root of unity. This is a reasonable choice in various applications, such as in the magnetic Laplacian <cit.>. In this case, we effectively restrict the choice of phases to be in the set {zφ_0}_z=0^k-1, where φ_0 = 2π/k. We can then write the adjacency matrix as 𝐀 = ∑_z=0^k-1e^zφ_0𝐀^zφ_0, where A_ij^zφ_0 = 1 if zφ_0 = φ_ij and 0 otherwise. Hence, Eq. (<ref>) can be effectively simplified as x_j^θ(t+1) = ∑_i1/d_i∑_z=0^k-1A_ij^zφ_0x_i^θ-zφ_0, and the whole dynamics is governed by the following nk× nk matrix that can be interpreted as the adjacency matrix of a larger graph where each node appears k times, 𝐀^(k) = [ 𝐀^0 𝐀^φ_0 ⋯ 𝐀^(k-1)φ_0; 𝐀^(k-1)φ_0 𝐀^0 ⋯ 𝐀^(k-2)φ_0; ⋮ ⋮ ⋱ ⋮; 𝐀^φ_0 𝐀^2φ_0 ⋯ 𝐀^0 ]. We note that ∑_jA^(k)_ij = d_i, thus the system characterised by Eq. (<ref>) has the coupling matrix 𝐏^(k) = 𝐃^(k)-1𝐀^(k), where 𝐃^(k) is the diagonal matrix with 𝐝 = (d_i) on the diagonal but appearing k times. When k=2, we recover the matrix 𝐀^(2) in signed networks <cit.>. The partitions corresponding to the balanced structure are relatively predictable in this case: {V_i}_i=1^k where any edge within each node subset has phase 0, any edge from V_i to V_i+1 has phase 2π/k (i=1,…, k, with the convention that V_k+1 = V_1), and other edges should be consistent with the requirement that all cycles have phase 0; see Figs. <ref> and <ref> for full description with all potential edges[We notice that if we restrict to phases being multiples of 2π/k, the antibalanced structure when k is odd is not valid in the current S_k^1 but in S_2k^1]. The case for the antibalanced structures is similar. The steady states can be expressed in a more explicit format in this special choice of phases; see section <ref> in the Appendix for more details. § APPLICATION: SPECTRAL CLUSTERING When considering spectral clustering on graphs, we refer to the notion of cut on graphs. For classic graphs where the weights are positive, cuts can be defined via the sum of weights of edges between different parts of a partition. However, in complex-weighted networks, there are two dimensions of information encoded in each edge, both the magnitude and the phase, and we can think of, e.g., the absolute difference between two figures after optimally aligning them by rotations and the optimal rotation angle. Hence, it is necessary to extend graph cuts to incorporate both dimensions in an appropriate manner. §.§ General cut As in the figure clustering setting, one may first aim for communities of figures that share high similarity values, and further extract communities of figures that are aligned similarly within each community. Accordingly, we consider two levels of communities in the context of complex weights, thus two types of cuts as follows. In the first level, we only consider the magnitude, and define the following absolute cut, cut(X, X^c) = ∑_v_i∈ X, v_j∈ X^cW_ij = ∑_v_i∈ X, v_j∈ X^cr_ij, where X⊂ V and X^c = V\ X. Note that inside each community at the current level, edges can have very different phases, hence in the next level, we incorporate the phase information to further group the nodes inside. 
Specifically, inspired by dynamical equivalence, the target community structure should have the following features: (i) edges inside each subcommunity have phase 0, (ii) edges between each pair of subcommunities have the same phases (as others between the same pair), and (iii) if the subcommunities form any cycles by considering them as supernodes, the phase of such cycles is 0. Features (i) and (ii) are also expected from structural equivalence, while feature (iii) is specific to dynamical equivalence or structural balance, which guarantees the paths to some nodes from the same nodes have the same phase. Hence, we can assign each subcommunity X_a an phase θ_X_a∈[0,2π) where ((θ_X_a - θ_X_b) 2π) is the expected phase of the edges from subcommunity X_a to X_b, and define the following complex cut for each pair of subcommunities X_a, X_b, ccut(X_a, X_b) = ∑_v_i∈ X_a, v_j∈ X_b (1 - cos(φ_ij - (θ_X_a - θ_X_b)))r_ij, where the choice of cosine function is motivated by the absolute difference between exp(θ_X_a) and exp( (φ_ij+θ_X_b)). Overall, the general cut is defined as the sum of the two parts, gcut(X^(h), X^(h)c) = cut(X^(h), X^(h)c) + ∑_a=1^l_h∑_b=1^l_h ccut(X^(h)_a, X^(h)_b), where X^(h) = ⋃_a=1^l_hX^(h)_a⊆ V. As special examples, if all phases are 0, i.e., it is a classic graph, the complex cut is 0 and the general cut will be reduced to the classic cut; while if the potential phases can only be 0 or π, i.e., it is a signed graph, the complex cut will be reduced to the signed cut <cit.>. Specifically in the latter, edges between subcommunities can only have phase π, and then from feature (iii) of the target community structure discussed above, there can be at most two subcommunities in each community. Hence, gcut(X^(h), X^(h)c) = cut(X^(h), X^(h)c) + ccut(X^(h)_1, X^(h)_2) = cut(X^(h), X^(h)c) + cut^-(X^(h)_1, X^(h)_1) + cut^-(X^(h)_2,X^(h)_2) + 2cut^+(X^(h)_1,X^(h)_2), where cut^-(X, Y) = ∑_v_i∈ X, v_j∈ Y: W_ij < 0r_ij, and cut^+(X, Y) = ∑_v_i∈ X, v_j∈ Y: W_ij > 0r_ij. This recovers the signed bipartiteness ratio as defined in <cit.> (subject to appropriate normalisation) and is closely related to the signed Cheeger constant. Further, if the connectivity of the graph is almost uniformly distributed so that the whole graph is considered as X^(1), then the general cut will also be reduced to the signed cut. Hence, the general ratio cut is defined as, grcut({X^(1)_a}_a=1^l_1, …, {X^(k)_a}_a=1^l_k) = ∑_h=1^kgcut(X^(h), X^(h)c)/X^(h), where X^(h) is the size of the set. Hence, the following minimisation problem can solve the clustering problem in complex-weighted networks, min_{X^(1)_a}_a=1^l_1,…, {X^(k)_a}_a=1^l_kmin_Θ grcut({X^(1)_a}_a=1^l_1, …, {X^(k)_a}_a=1^l_k), where Θ contains all the phases that need to be assigned to different subcommunities. Complex graph Laplacian. Now, we show that the problem can be reformulated in terms of the complex Laplacian. We start from its general characteristics of being positive semi-definite, as in the case of classic graph Laplacian. The complex Laplacian 𝐋 is positive semi-definite. We write the complex Laplacian matrix as a sum over the edges of G 𝐋 = ∑_(i,j), (j,i)∈ E𝐋^{i,j}, where 𝐋^{i,j}∈ℂ^n× n contains the four following nonzero entries (𝐋^{i,j})_ii = (𝐋^{i,j})_jj = W_ij = r_ij (𝐋^{i,j})_ij = (𝐋^{i,j}*)_ji = -W_ij = -r_ijexp(φ_ij). 
For all 𝐱∈ℂ^n, 𝐱^*𝐋^{i,j}𝐱 = x_i^*r_ijx_i + x_j^*r_ijx_j - x_i^*r_ijexp(φ_ij)x_j - x_j^*r_ijexp(-φ_ij)x_i = r_ij(x_i^*x_i + x_j^*x_j - exp(φ_ij)x_i^*x_j - exp(-φ_ij)x_j^*x_i) = r_ij(x_i - exp(φ_ij)x_j)^*(x_i - exp(φ_ij)x_j) = r_ijx_i - exp(φ_ij)x_j^2 ≥ 0. Hence, 𝐱^*𝐋𝐱 = ∑_(i,j), (j,i)∈ Er_ijx_i - exp(φ_ij)x_j^2 ≥ 0. With the bilinear form of the complex Laplacian, we can show that the objective of the minimisation problem (<ref>) can be retrieved with specific choice of vectors corresponding to the partition, as illustrated in Proposition <ref>. For communities {X^(1)_a}_a=1^l_1, …, {X^(k)_a}_a=1^l_k in complex-weighted networks, grcut({X^(1)_a}_a=1^l_1, …, {X^(k)_a}_a=1^l_k) = (𝐗^*𝐋𝐗) where (·) returns the matrix trace, 𝐗∈ℂ^n× k is the matrix containing the complex indicator vectors {𝐱^(h)} as columns with x^(h)_i = exp(θ_X^(h)_a)/√(X^(h)), if v_i∈ X^(h)_a, a∈{1,2,…,l_h}, 0, otherwise. The result follows from the definition of matrix trace, with the specific form of vectors 𝐱^(h). See section <ref> in the Appendix for details. Hence, the minimisation problem (<ref>) can be rewritten as min_{X^(1)_a}_a=1^n_1,…, {X^(k)_a}_a=1^n_kmin_Θ (𝐗^*𝐋𝐗) s.t. 𝐗^*𝐗 = 𝐈, 𝐗 as defined in Eq. (<ref>). Similar to the classic case, we now relax the problem so that 𝐗 can take any complex values, then the problem becomes, min_𝐗∈ℂ^n× k (𝐗^*𝐋𝐗) s.t. 𝐗^*𝐗 = 𝐈, which gives us the standard format of a trace minimisation problem. By the Rayleigh-Ritz theorem (e.g., see section 5.2.2 (5) of <cit.>), the solution is to choose 𝐗 as the matrix whose columns are the first k eigenvectors (corresponding to the k smallest eigenvalues). We note that the analysis can also be applied to the normalised cut <cit.>. Spectral clustering algorithm. We propose the following spectral clustering algorithm on complex-weighted networks: (i) we first obtain the first k eigenvectors of the complex Laplacian to construct an initial 𝐗, and then (ii-a) clustering nodes into level-one communities by the magnitude while (ii-b) further clustering nodes into level-two communities together with the phase; see Algorithm <ref> for details. It follows similarly to the classic version, apart from the fact that after obtaining 𝐗, further clustering methods are applied twice for communities in two levels. Note that the numbers of communities in the two levels, although given as parameters in the algorithm, can be inferred from the data. For example, k can be estimated from the number of eigenvalues that are almost zero, while l_h can be inferred from the distribution of complex values within the h-th level-one community, h=1,…, k. Further, with the bilinear form of the complex Laplacian (<ref>), we can see that in the ideal case where the level-one communities are disconnected and each of them is structurally balanced, the spectral clustering algorithm can recover the target community structure. §.§ Experiments In this section, we examine the performance of the spectral clustering method on synthetic networks, and leave the exploration of real networks to later. Specifically, we consider a type of stochastic block model (SBM) where each edge can have different phases (rather than only 0 as in the classic case). 
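Before turning to the experiments, the following Python sketch (using NumPy and scikit-learn's k-means) illustrates the two-stage procedure: level-one communities from the magnitude-only graph, level-two communities from the complex eigenvector entries viewed as points (Re, Im) in the plane. All names are ours, and restricting the complex Laplacian to each detected community is an implementation choice made here for concreteness; Algorithm <ref> itself is phrased in terms of the eigenvectors of the full complex Laplacian.

import numpy as np
from sklearn.cluster import KMeans

def complex_spectral_clustering(W, k, ls):
    """Two-level clustering of a Hermitian complex weight matrix W.

    k level-one communities are found from the magnitude graph |W|; inside the
    h-th community, ls[h] level-two communities are found by clustering the
    entries of the bottom eigenvector of the (restricted) complex Laplacian.
    """
    n = W.shape[0]
    W_bar = np.abs(W)

    # Level one: classic (unnormalised) spectral clustering on |W|.
    L_bar = np.diag(W_bar.sum(axis=1)) - W_bar
    _, U_bar = np.linalg.eigh(L_bar)
    level1 = KMeans(n_clusters=k, n_init=10).fit_predict(U_bar[:, :k])

    # Level two: cluster (Re, Im) of the bottom complex eigenvector per community.
    level2 = np.zeros(n, dtype=int)
    for h in range(k):
        idx = np.where(level1 == h)[0]
        Wc = W[np.ix_(idx, idx)]
        Lc = np.diag(np.abs(Wc).sum(axis=1)) - Wc
        _, Uc = np.linalg.eigh(Lc)
        pts = np.column_stack([Uc[:, 0].real, Uc[:, 0].imag])
        level2[idx] = KMeans(n_clusters=ls[h], n_init=10).fit_predict(pts)
    return level1, level2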
The CSBM is constructed by the following components: (i) a planted SBM, SBM(p_in, p_out), where the probabilities of an edge to occur inside each community and between the two communities are p_in and p_out, respectively; (ii) within the i-th community, (ii-a) nodes are further grouped into l_i sub-communities, where we consider (almost) equally division in our current setting, and (ii-b) an initial balanced configuration, where edges inside each sub-community have phase 0 while those from sub-community a to b have phase ((b-a)2π/l_i 2π), i=1,…, k; (iii) a mixing probability η∈[0,1] to change the phase of edges within each community to others that occur in the same community uniformly at random. We denote this planted CSBM by CSBM(p_in, p_out, η, {l_i}_i=1^k). In the experiments, we consider networks whose level-one communities are of size n_p = 60, while varying the probability p_in from 0.4 to 0.8, p_out from 0 to 0.3, and also η from 0 to 0.3; see Fig. <ref> for example networks. For each set of parameters, we generate n_s=20 graphs, run the proposed spectral clustering algorithm as in Algorithm <ref>, and report the average (mean) normalised mutual information (NMI) scores and the standard deviation (std) <cit.>. Specifically, to build upon state-of-the-art development of spectral clustering, we run spectral clustering algorithm in on the graphs ignoring the phase information to detect level-one communities <cit.>. For level-two communities, instead of another k-means algorithm, we represent each point as (Re(𝐱_i), Im(𝐱_i)), where Re(·), Im(·) return the real and imaginary part, respectively, of a complex number and are applied element-wise, and apply the k-means to clustering these points. Note that the proposed algorithm is the only one that is designed for detecting communities in networks with complex weights, to the best of our knowledge. Level-two communities. For the first set of experiments, we generate networks with only one level-one community, while vary the number of level-two communities, l_1. Specifically, we consider two cases here: l_1 = 2, where the edge weights only have phases in {0,π} and the graph is reduced to be a signed graph, and l_1 = 3, where the edge weights can take complex values. Our experimental results demonstrate that the proposed spectral clustering method can successfully recover the level-two communities, with an NMI above 0.8, in almost all cases as we vary the edge probability p_in and mixing probability η; see Fig. <ref>. We also observe a drop in performance as we increase η in the case of l_1=2, as one may expected, but this is not evident in the case of l_1=3, which implies that the method is relatively more robust in the latter with complex weights. Two-level communities. For the following, we generate networks with both level-one and level-two communities. Specifically, we consider two level-one communities, and different combinations of level-two communities: (i) l_1 = 2, l_2 = 2, which corresponds to signed graphs, (ii) l_1 = 2, l_2 = 3, and (iii) l_1 = 3, l_2 = 3; see Fig. <ref> for examples. As in the case of level-two communities only, the proposed algorithm can also successfully retrieve the two-level community structure, with an NMI greater than 0.8, as we vary the mixing probability η and the edge probabilities p_in, p_out, with p_in being sufficiently larger than p_out, in networks with all three different types of communities; see Fig. <ref>. 
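For reference, a generator along the lines of the construction above might look as follows (a sketch under our reading of the model; in particular, between-community edges are given phase 0 here, a detail the description above leaves open, and all variable names are ours).

import numpy as np

def sample_csbm(n_p, p_in, p_out, eta, ls, seed=None):
    """Sample a CSBM(p_in, p_out, eta, {l_i}) with k = len(ls) level-one communities
    of n_p nodes each; returns the Hermitian weight matrix and the planted labels."""
    rng = np.random.default_rng(seed)
    k = len(ls)
    n = k * n_p
    comm = np.repeat(np.arange(k), n_p)                            # level-one labels
    sub = np.concatenate([np.arange(n_p) % l for l in ls])         # level-two labels
    W = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(i + 1, n):
            same = comm[i] == comm[j]
            if rng.random() >= (p_in if same else p_out):
                continue
            if same:
                l = ls[comm[i]]
                phase = ((sub[j] - sub[i]) * 2 * np.pi / l) % (2 * np.pi)
                if rng.random() < eta:                             # phase mixing
                    phase = rng.integers(l) * 2 * np.pi / l
            else:
                phase = 0.0      # assumption: inter-community edges carry phase 0
            W[i, j] = np.exp(1j * phase)
            W[j, i] = np.conj(W[i, j])
    return W, comm, sub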
As one may expect from the previous experiments, the performance of the algorithm drops when we increase p_out and η, and this effect is relatively more evident in cases (i) and (ii) where there are level-one communities that are effectively signed. § APPLICATION: MAGNETIC LAPLACIAN The magnetic Laplacian characterises each directed network as a Hermitian matrix where the imaginary part carries the directional information. It is motivated by the dynamics of a free quantum mechanical particle on a graph under the influence of magnetic fluxes passing through the cycles in the network. Specifically, for a given directed network H = (V,E_H), where V = {v_1, …, v_n} is the node set and E_H is the edge set with w_ij>0 encoding the weight of each directed edge (v_i,v_j) (w_ij = 0 if there is no edge), the magnetic Laplacian 𝐋^θ = (L_ij^θ) is L_ij^θ = ∑_hw_s(i,h), if i=j, - w_s(i,j)T_i→ j^θ, if {(v_i,v_j), (v_j,v_i)}∩ E∅, 0, otherwise, where w_s(i,j) = (w_ij + w_ji)/2 is the symmetrised weight, and T_i→ j^θ = expθ a(i,j). θ∈ [0, 2π) is an extra parameter introduced and a(i,j) = 1, if (v_i,v_j)∈ E, (v_j,v_i)∉ E, -1, if (v_i,v_j)∉ E, (v_j,v_i)∈ E, 0, if (v_i,v_j)∈ E, (v_j,v_i)∈ E. Within the context of complex-weighted networks, we can see that the magnetic Laplacian is effectively the graph Laplacian of a complex-weighted network, denoted G^θ hereafter, where each edge can only have phases in {0, θ, 2π-θ}. We will show in this section how the results we have developed can further explain the features of the magnetic Laplacian. §.§ Eigenvalues: lengths of cycles In this section, we focus on the eigenvalues of the (normalised) magnetic Laplacian, and demonstrate that they are closely related to the effective lengths of cycles ignoring the direction (which we refer to as cycles hereafter in this section) in the original graph H; see Propositions <ref> and <ref>. We define the effective length of a cycle by the absolute difference between the number of edges towards one direction and those towards the opposite direction in the cycle; see Fig. <ref> for examples. We also group the case where there is no cycles in H into the one where all cycles have effective length 0. Magnetic Laplacian and normalised magnetic Laplacian have eigenvalue 0 if and only if θ∈Θ_0 where Θ_0 = {2π/cd: cd ∈ C_d}, if C_l∅, [0, 2π), otherwise, C_l contains all nonzero effective lengths of the cycles in H, and C_d contains all common divisors of elements in C_l. The magnetic Laplacian 𝐋^θ has eigenvalue 0 ⇔ the normalised magnetic Laplacian 𝐋^θ_rw = 𝐃^θ -1𝐋^θ, where 𝐃^θ is the complex degree matrix of G^θ, has eigenvalue 0 ⇔ the complex transition matrix 𝐏^θ = 𝐈 - 𝐋^θ_rw has eigenvalue 1 ⇔ G^θ is balanced by Proposition <ref>. The condition can be obtained by Definition <ref> of structural balance. Normalised magnetic Laplacian has eigenvalue 2 if and only if θ∈Θ_2 where Θ_2 = {(2π/cd + π 2π): cd ∈ C_d}, if C_l∅, [0, 2π), otherwise. The normalised magnetic Laplacian 𝐋^θ_rw has eigenvalue 2 ⇔ the complex transition matrix 𝐏^θ has eigenvalue -1 ⇔ G^θ is antibalanced by Proposition <ref>. The condition can be obtained by Definition <ref> of structural antibalance. Hence, only if θ = 2π/r with r being a positive integer, can the smallest eigenvalue be 0 when there are cycles of nonzero effective lengths, and we will consider θ in this specific form hereafter. 
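The construction is straightforward to implement; the Python sketch below (our own code, following the definition above) builds 𝐋^θ for a directed 3-cycle and confirms that its smallest eigenvalue vanishes exactly when r = 2π/θ divides the effective cycle length.

import numpy as np

def magnetic_laplacian(A_dir, theta):
    """Magnetic Laplacian of a non-negative directed weight matrix A_dir."""
    W_s = (A_dir + A_dir.T) / 2.0                              # symmetrised weights w_s(i, j)
    a = (A_dir > 0).astype(int) - (A_dir.T > 0).astype(int)    # a(i, j) in {+1, -1, 0}
    return np.diag(W_s.sum(axis=1)) - W_s * np.exp(1j * theta * a)

# Directed 3-cycle 1 -> 2 -> 3 -> 1, effective length 3.
A = np.zeros((3, 3))
A[0, 1] = A[1, 2] = A[2, 0] = 1.0

for r in (2, 3, 4, 6):
    lam_min = np.min(np.linalg.eigvalsh(magnetic_laplacian(A, 2 * np.pi / r)))
    print(r, np.round(lam_min, 6))    # 0 only for r = 3 (and trivially r = 1)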
Further, the change of the smallest/largest eigenvalues of the normalised magnetic Laplacian while sweeping over all possible values of r can exhibit different behaviour, if the underlying graphs are different w.r.t. the effective length of cycles. Hence, it also provides an approach to characterise a graph, and accordingly, quantify the difference between graphs. In the following, we numerically explore this feature in different types of graphs, where we sweep over integer values of r=2π/θ between 1 and 100. Directed cycles. One straightforward example to examine the feature is when the directed graph is simply a directed cycle of size n, where each divisor of n is also a common divisor of all effective lengths. Our experimental results verify that when r is a divisor of n, the smallest eigenvalue of the normalised magnetic Laplacian is 0, and when (2π/(2π/r-π)) where r>2 is a divisor of n, the largest eigenvalue of the normalised magnetic Laplacian is 2; see Fig. <ref>. We also include more examples in section <ref> in the Appendix. Furthermore, we found that the change points are the same in the curve describing the smallest eigenvalues and the one for the largest eigenvalues. However, for directed cycles of odd lengths, they change in the same direction, while for directed cycles of even lengths, they change in the opposite direction. This is expected, since G^θ is bipartite in the latter, and from Proposition <ref>, if G^θ is balanced, G^θ is also antibalanced. Tree of directed cycles. We now consider the case when there are more than one directed cycle in the graph. There are various ways to combine cycles, and here we consider two particular cases: (i) tree of directed cycles, where the cycles do not share a single edge, and (ii) nested directed cycles, where the cycles share at least one edge; see Figs. <ref> and <ref>, respectively, for examples. Apart from verifying the theoretical results we have developed for the occurrence of 0 (2) as the smallest (largest) eigenvalue of the normalised magnetic Laplacian, our experiments also indicate the following interesting characteristics. Firstly, when the graph is composed of a tree of directed cycles of length n, the change of smallest/largest eigenvalue of the normalised Laplacian over r is the same as the change from a directed cycle of length n; see the first rows in Figs. <ref> and <ref>. Secondly, when two graphs have the same set of common divisors but different sets of effective lengths, even though they have the same minimum (maximum) points over r for the smallest (largest) eigenvalues, the maximum (minimum) points are different; see Fig. <ref>. Thirdly, for asymptotic behaviour of the largest eigenvalue, if the graph only contains directed cycles of odd lengths, it will maintain the same trend as when there is only one directed cycle of odd length, i.e., decreasing as r rises; while if the graph contains directed cycles of both odd and even lengths, it will combine the behaviours of single directed cycles, where it could increase but not to 2. Nested directed cycles. Finally, we construct nested cycles by adding one directed edge within a directed cycle of length 6; see Fig. <ref>. Our experimental results again confirm the theoretical relationship we have developed between the common divisors and when the smallest (largest) eigenvalue of the normalised magnetic Laplacian becomes 0 (2). Hence, the three graphs already exhibit different behaviours from these minimum (maximum) points. 
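A minimal version of this sweep (our own code, reusing the magnetic Laplacian built above, with a symmetric normalisation that has the same spectrum as 𝐋^θ_rw) reads as follows; for an even directed cycle both extreme eigenvalues reach 0 and 2 at the same values of r, namely the divisors of the cycle length.

import numpy as np

def normalised_magnetic_laplacian(A_dir, theta):
    W_s = (A_dir + A_dir.T) / 2.0
    a = (A_dir > 0).astype(int) - (A_dir.T > 0).astype(int)
    d = W_s.sum(axis=1)
    L = np.diag(d) - W_s * np.exp(1j * theta * a)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    return D_isqrt @ L @ D_isqrt     # same spectrum as D^{-1} L^theta

def directed_cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = 1.0
    return A

A = directed_cycle(6)
for r in range(1, 13):
    ev = np.linalg.eigvalsh(normalised_magnetic_laplacian(A, 2 * np.pi / r))
    print(r, np.round(ev[0], 4), np.round(ev[-1], 4))
# Smallest eigenvalue 0 and largest eigenvalue 2 occur exactly when r divides 6.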
Also, we note that the right-most graphs in Figs. <ref> and <ref> both contain cycles of effective length 3 and 6, with 3 and 1 being all common divisors. However, the overall change curves exhibit different behaviours, where the one corresponding to the nested directed cycle has relatively more similar trend to the directed cycle of length 3's. This is expected since the nested directed cycle has two cycles of effective length 3, while the tree version only has one. §.§ Eigenvectors: role structure In the case when the smallest eigenvalue of the (normalised) magnetic Laplacian is 0 or when the largest eigenvalue of the normalised magnetic Laplacian is 2, the corresponding eigenvector can be used to obtain the role structure following the same procedure as detecting the level-two communities as in Algorithm <ref>; see Fig. <ref>. The extracted role structure indicates either (i) the global cyclic structure in the presence of a cycle of nonzero effective length, or (ii) a hierarchical structure, to some extent, otherwise. Similarly, if each node is replaced by a group of nodes where the connections within them are bidirectional, thus the corresponding elements in the magnetic Laplacian have phase 0, the algorithm can also detect the role structure of these groups. Real directed network. We consider real directed networks, and extract the role structure by applying our spectral clustering method to the magnetic Laplacian. The data was obtained from NeuroData's open source data base[<https://neurodata.io/project/connectomes/>, accessed on 20 June 2023. The database hosts animal connectomes produced using data from a multitude of labs, across model species, using different modalities.], and characterises the connectomes of a set of different animal species. A connectome is a comprehensive, cell-to-cell mapping of the interaction between neurons, created from cellular data obtained, for instance, via electron microscopy. Here specifically, we consider two networks constructed from mouse (“Mouse 1" and “Mouse 2") <cit.>. Table <ref> provides summary statistics for the real networks in our study, including the average (in- or out-) degree d, standard deviation of in-degree σ(d_in) and that of out-degree σ(d_out). For comparison, we also apply the Louvain algorithm to the symmetrised graph using w_s(i,j) = (w_ij + w_ji)/2 as edge weights, where we run it for 100 times, construct the co-occurrence matrix of the clustering results, and then run the algorithm one more time on the graph constructed by the co-occurence matrix, in order to obtain consistent communities. Our experimental results indicate that the eigenvector(s) of the magnetic Laplacian contains information about the role structure; see Fig. <ref> where we choose θ=2π/3 in constructing the magnetic Laplacian. From another perspective, they also verify that the performance of our spectral clustering algorithm on real networks. § DISCUSSION In this paper, we have analysed networks with complex weights, with applications in various scientific and engineering fields. Specifically, we have first introduced a classification of complex-weighted networks into balanced, antibalanced and strictly unbalanced ones, and further characterised the spectral properties of the complex weight matrix in each type. There is some work in the literature in analysing the balanced structure in networks with complex weights, but very little for the other types and their dynamical implication. 
We then applied the results to understand the dynamics of random walks on complex-weighted networks, and showed interesting while distinct behaviour of the dynamics in different types of networks. Finally, based on the structural and dynamical characterisation of complex-weighted networks, we further analysed the applications for spectral clustering and the study of the magnetic Laplacian. Corresponding to the information encoded in complex weights, we first defined the general cut problem in the complex-weighted networks, and then proposed a spectral clustering algorithm, whose efficiency has been verified through both synthetic and real networks. For the magnetic Laplacian, we provided further characteristics of its eigenvalues and eigenvectors in terms of cycles in the directed networks, which further extends the spectral clustering algorithm to detect the role structure in directed networks. There are also several venues for future investigation. In the analysis of complex-weighted networks, structural balanced and antibalanced ones only correspond to two switching equivalent classes <cit.>, and we have grouped all remaining networks, potentially more than the previous two types, into one category. Hence, it would be interesting to consider more switching equivalent classes, and provide finer characterisation for strictly unbalanced networks. In analysing the dynamics on complex-weighted networks, we only consider discrete-time random walks for now, which leaves the investigation of more dynamics, potentially with mechanisms inspired by real applications, to future work. Also, the spectral clustering algorithm that has been developed in this paper, although already exhibiting promising performance, is still in the vanilla format, and there are many other techniques that can be incorporated to further improve the performance <cit.>. Last but not the least, we have considered exclusively Hermitian weight matrices for now, however, the weight matrix can be non-Hermitian <cit.>, thus it may be necessary to consider more general settings in certain applications. § ACKNOWLEDGMENTS Y.T. is funded by the Wallenberg Initiative on Networks and Quantum Information (WINQ). R.L. acknowledges support from the EPSRC Grants EP/V013068/1 and EP/V03474X/1. plain § FURTHER DETAILS §.§ Networks with complex weights Suppose that such partition exists. Then there are two possible cases for each cycle: it either completely lies in one of the node subset, thus has phase 0, or contains one or more cycles by considering each node subset as a super node and edges within each node subset, thus the overall phase is still 0. Hence, the graph is balanced by Definition <ref>. Now, suppose the graph is balanced. (i) Starting from a node v_1, we group its neighbours together in the same node subset if the corresponding edges with v_1 have the same phase, and we also group v_1 to the node subset with nodes corresponding to phase 0. (ii) Then for one of v_1's neighbours, say v_2, we repeat the same process to group the nodes that are only neighbours to v_2. For nodes that are neighbours of both v_1 and v_2, say nodes v_3 and v_4, by Definition <ref> of structural balance, we have (φ_12 + φ_23 + φ_31) 2π = 0 (φ_12 + φ_24 + φ_41) 2π = 0. If they are in the same community in the previous step, we have φ_31 = φ_41, and then since φ_23, φ_24∈ [0, 2π), we have φ_23 = φ_24. If further φ_23 = 0, then we have φ_12 = φ_13. 
Hence, combining with nodes that are only neighbours of v_2, this step is effectively repeat the same process as in (i) for node v_2, to group neighbours together in the same node subset if the corresponding edge with v_2 have the same phase, and also group nodes corresponding to phase 0 to the same node subset as v_2. (iii) Hence, we can simply repeat the process in (i) to all v_1's neighbours, and then the neighbours of v_1's neighbours, which can further propagate to the whole network. (iv) Finally, it is straightforward to check that the resulting partition has the feature that any edges within each node subset has phase 0 and if we consider each node subset as a super node, then the phase of any cycle is 0. Since a complex-weighted graph G = (V, E, 𝐖) is antibalanced if and only if the graph after adding phase π to each edge, denoted G_n = (V, E, -𝐖), is balanced. From Theorem <ref>, G_n is balanced if and only if there is a partition of the node set {V_i}_i=1^l_p s.t. any edges (in G_n) within each node subset has phase 0 and if we consider each node subset as a super node, then the phase of any cycle (in G_n) is 0. Hence, G is antibalanced if and only if there is a partition of the node set {V_i}_i=1^l_p s.t. any edges within each node subset has phase π and if we consider each node subset as a super node and add π to each super edge, then the phase of any cycle is 0. Suppose for contradiction that there are two paths from node v_i to v_j that are of different phases, denoted P_1 and P_2 with phases φ_1 and φ_2, respectively. WLOG, we can assume there is no overlapping nodes in P_1 and P_2 apart from v_i and v_j, since otherwise we can consider a pair of overlapping nodes to be the start and end points with this feature. Then if we denote the path reversing each edge in P_2 by P_2', P_2' has phase 2π - φ_2. Further P_1 + P_2' is a cycle and has phase ((φ_1 + 2π - φ_2) 2π) 0, contradicting G being balanced. Since 𝐖̅ is an non-negative matrix, and G̅ is irreducible and aperiodic, then by Perron-Frobenius theorem, (i) ρ(𝐖̅) is real positive and an eigenvalue of 𝐖̅, i.e., λ̅_1 = ρ(𝐖̅), (ii) this eigenvalue is simple s.t. the associated eigenspace is one-dimensional, (iii) the associated eigenvector, i.e., 𝐮̅_1, has all positive entries and is the only one of this pattern, and (iv) 𝐖̅ has only 1 eigenvalue of the magnitude ρ(𝐖̅). Then, if G is balanced, from Theorem <ref>, (i) 𝐖 and 𝐖̅ share the same spectrum, and (ii) 𝐔 = 𝐈^*_1𝐔̅, where 𝐔 = [𝐮_1, 𝐮_2, …, 𝐮_n] and 𝐔̅ = [𝐮̅_1, 𝐮̅_2, …, 𝐮̅_n] containing all the eigenvectors, and 𝐈_1 is the diagonal matrix whose (i,i) element is exp(θ_1i). Hence, λ_1 = λ̅_1 = ρ(𝐖̅) = ρ(𝐖), and this eigenvalue is simple and the only one of the largest magnitude. Meanwhile, 𝐮_1 = 𝐈^*_1𝐮̅_1, thus it has the pattern as described and is the only one of this pattern. The results of antibalanced graphs follow similarly. If G is periodic, and ∀ v_i, v_j∈ V, all walks of the same length from v_i to v_j have the same phase, then ∀ v_i, v_j∈ V, all walks of even lengths from v_i to v_j have the same phase. We prove it by contradiction. Suppose that there exist two even length walks P_1 and P_2 from v_i to v_j with different phases. Since G is aperiodic, by Proposition 4 in Appendix in <cit.>, there exist a walk from v_j to v_i of even length, denoted by P_e. Then C_1 = P_1 + P_e forms a closed walk at node v_i with even length, and C_2 = P_2 + P_e forms another closed walk at node v_i with even length. 
The two closed walks C_1, C_2 carry different phases, denoted by θ_1, θ_2, respectively. Hence, at least one of the phases is nonzero, and WLOG, suppose θ_1 0. Then within the closed walk C_1, we can find two nodes v_h, v_l s.t. the part from v_h to v_l is of the same length as v_l to v_h, denoted by P_3, P_4 respectively. Then we can find two walks of the same length from v_h to v_l, P_3 and the one by reversing each edge in P_4, denoted by P_4', with different phases since θ_1 0. If G is aperiodic, and ∀ v_i, v_j∈ V, all walks of even lengths from v_i to v_j have the same phase, then G is either balanced or antibalanced. From Proposition 4 in Appendix in <cit.>, there exists an even length walk between any two nodes. Hence, we can partition V into {V_i}_i=1^l_p based on the phases of even length walks originated from a node v_i∈ V, where nodes to which all even length walks from v_1 have phase θ_i are grouped into node subset V_i, for all i. We argue that (a) within each node subset, all edges have phase 0 or π, (b) between two node subsets, all edges have the same phase, and (c) if there is any cycle C after considering each node subset as a super node, the phase of the cycle is either 0 when edges within each node subset is 0 or πC when edges within each node subset is π. It follows from Theorems <ref> and <ref> that G is either balanced or antibalanced. For (a), we consider two edges ab and cd in the same node subset, say V_1. Then we can construct two even length walks from v_i to c and v_i to d as follows. P_e(v_i, c) = P_e(v_i, b) + P_e(b, c), P_e(v_i, d) = P_e(v_i, a) + ab + P_e(b, c) + cd, where P_e(x,y) represents the constructed even length walk from node x to node y. We also denote the phase of a walk P by θ(P). Since nodes a,b,c,d are in the same node subset, θ(P_e(v_i, a)) = θ(P_e(v_i, b)) = θ(P_e(v_i, c)) = θ(P_e(v_i, d)). Hence, ∃ k∈𝐍, s.t. θ(ab) + θ(cd) = 2kπ, and this is true for any two edges within the node subset. Hence, θ(ab) = θ(cd) = kπ, for some k∈ℕ. Therefore, all edges have the same phase, either 0 or π. For (b), we consider two edges ab and cd between two different node subsets, say V_1 and V_2. Then we can still consider the two even length walks P_e(v_i, c) and P_e(v_i, d) as before. Since a,c are in the same node subset, while b,d are in the same node subset, θ(P_e(v_i, a)) = θ(P_e(v_i, c)) θ_a, θ(P_e(v_i, b)) = θ(P_e(v_i, d)) θ_b. Hence, ∃ k∈𝐍, s.t. θ(ab) + θ(cd) = 2(θ_b - θ_a) + 2kπ, and this is true for any two edges between the two different node sets. Hence, θ(ab) = θ(cd) = (θ_b - θ_a) + kπ, for some k∈ℕ. Therefore, all edges have the same phase, either (θ_b - θ_a) or (θ_b - θ_a) + π. For (c), we consider a cycle C=V_i_1V_i_2⋯ V_i_CV_i_1 after treating each node subset as a super node. Then for each j, ∃ a_j, a'_j∈ V_i_j s.t. a_j-1a'_j and a_ja'_j+1 are two edges in the graph (with the convention that C + 1 = 1 for the index j). (i) If C is even, then we can construct an even length walk from v_i to a'_1 as follows. P_e(v_i, a_1') = P_e(v_i, a_1) + a_1a'_2 + P_e(a'_2, a_2) + a_2a'_3 + ⋯ + a_Ca'_1. Since a_1, a'_1 ∈ V_i_1, θ(P_e(v_i, a_1')) = θ(P_e(v_i, a_1)). By (a), θ(P_e(a_j, a_j')) = 0, ∀ j. Hence, θ(C) = θ(a_1a'_2 + a_2a'_3 + ⋯ + a_Ca'_1) = 0. (ii) If C is odd, then we can construct an even length walk from v_i to a neighbour of a_1', denoted by b, as follows. P_e(v_i, b) = P_e(v_i, a_1) + a_1a'_2 + P_e(a'_2, a_2) + a_2a'_3 + ⋯ + a_Ca'_1 + a'_1b. Then for the same reasons as before, θ(a_1a'_2 + a_2a'_3 + ⋯ + a_Ca'_1 + a'_1b) = 0. 
Hence, by (a), θ(C) = 0 if all edges within the node subset have phase 0 and π if all edges within the node subset have phase π. (i) If G is periodic, then G is bipartite, because of the presence of length-2 cycle(s). Then all cycles have even length, and for each cycle C, we can find node v_i, v_j∈ C, s.t. the part starting from v_i to v_j has the same length as the remaining part from v_j back to v_i. Since 𝐖 is Hermitian, it means that we can find two walks of the same length from v_i to v_j. Then, suppose the statement is not true, i.e., all walks of the same length between each pair of nodes v_h,v_l∈ V have the same phase, then all cycles have phase 0, noting that the phases of the two edges connecting the same pair of nodes sum to 2π, thus G is balanced, which leads to contradiction. (ii) Otherwise, G is aperiodic, then G is strictly unbalanced by Lemmas <ref> and <ref>. We first note that if ρ(𝐖) < ρ(𝐖̅), then G is strictly unbalanced, since the spectral radius will be the same if G is balanced or antibalanced by Theorem <ref>. For the other direction, if G is strictly unbalanced, by Lemma <ref>, ∃ v_i,v_j∈ V and z_1∈ℤ^+ s.t. there are two walks of length z_1 between nodes v_i, v_j of different phases. Then (𝐖^z_1)_ij < (𝐖̅^z_1)_ij, where (𝐖)_ij indicates the (i,j) element of a matrix 𝐖. Hence, for sufficiently large z_2, the walks between each pair of nodes will be able to go through the two walks of different phases between nodes v_i, v_j, thus ∀ v_h,v_l∈ V, (𝐖^z_2)_hl < (𝐖̅^z_2)_hl. Then for each vector 𝐱 = (x_h)∈ℝ^n and 𝐱_2 = 1, we can find 𝐱̅ = (x_h) s.t. 𝐱̅_2 = 1 and (𝐖^z_2𝐱)_h = ∑_l(𝐖^z_2)_hlx_l≤∑_l(𝐖^z_2)_hlx_l < ∑_l(𝐖̅^z_2)_hlx_l = (𝐖̅^z_2𝐱̅)_h, where (𝐱)_h indicates the h-th element of an vector 𝐱. Therefore, 𝐖^z_2𝐱_2 < 𝐖̅^z_2𝐱̅_2. Hence, by definition, 𝐖^z_2_2 = max_𝐱_2 = 1𝐖^z_2𝐱_2 < max_𝐲_2 = 1𝐖̅^z_2𝐲_2 = 𝐖̅^z_2_2, then ρ(𝐖)^z_2 = ρ(𝐖^z_2) < ρ(𝐖̅^z_2) = ρ(𝐖̅)^z_2, and finally ρ(𝐖) < ρ(𝐖̅). §.§ Random walks If G is balanced, then 𝐏^t is still a complex transition matrix, and has the following phase pattern: (𝐏^t)_ij = exp(θ_σ(i)σ(j))(𝐏̅^t)_ij, where σ(·) and θ_hl are as defined in Theorem <ref>, and 2m = ∑_jd_j. If G is balanced, then 𝐏 = 𝐈_1^*𝐏̅𝐈_1, where 𝐈_1 is the diagonal matrix whose (i,i) element is exp(θ_1σ(i)), by Theorem <ref>. Then 𝐏^t = (𝐈_1^*𝐏̅𝐈_1)^t = 𝐈_1^*𝐏̅^t𝐈_1. Since 𝐏̅^t is still a transition matrix, 𝐏^t is still a complex transition matrix. Meanwhile, (𝐏^t)_ij = (𝐏̅^t)_ij(𝐈_1^*)_ii(𝐈_1)_jj = exp(-θ_1σ(i) + θ_1σ(j))(𝐏̅^t)_ij = exp(θ_σ(i)σ(j))(𝐏̅^t)_ij, where the last equality is by G being balanced. If G is antibalanced, then 𝐏^t is still a complex transition matrix, and has the following phase pattern: (𝐏^t)_ij = (-1)^texp(θ_σ(i)σ(j))(𝐏̅^t)_ij, where σ(·) and θ_hl are as defined in Theorem <ref>, and 2m = ∑_jd_j. If G is antibalanced, then 𝐏 = -𝐈_1^*𝐏̅𝐈_1, where 𝐈_1 is the diagonal matrix whose (i,i) element is exp(θ_1σ(i)), by Theorem <ref>. Then 𝐏^t = (-𝐈_1^*𝐏̅𝐈_1)^t = (-1)^t𝐈_1^*𝐏̅^t𝐈_1. Since 𝐏̅^t is still a transition matrix, 𝐏^t is still a complex transition matrix. Meanwhile, (𝐏^t)_ij = (-1)^t(𝐏̅^t)_ij(𝐈_1^*)_ii(𝐈_1)_jj = (-1)^texp(-θ_1σ(i) + θ_1σ(j))(𝐏̅^t)_ij = (-1)^texp(θ_σ(i)σ(j))(𝐏̅^t)_ij, where the last equality is by G being antibalanced. If G is balanced and is not bipartite, with A_ij∈ S_k^1 if there is an edge, then the steady state is 𝐱^* = (x_j^*) where x_j^* = exp((σ(j)-1)2π/k)(𝐱(0)^*1̃_1)d_j/(2m). The steady state in this case can be obtained by the partition corresponding to the balanced structure, and Proposition <ref>. 
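The phase pattern of 𝐏^t for balanced graphs, and hence the steady state above, can be checked numerically. The following minimal sketch (Python/NumPy, not from the original text) constructs a balanced Hermitian weight matrix by conjugating a real symmetric matrix 𝐖̅ with diagonal group phases (the characterisation 𝐖 = 𝐈_1^*𝐖̅𝐈_1 used in Theorem <ref>) and verifies that (𝐏^t)_ij differs from (𝐏̅^t)_ij only by the phase between the groups of v_i and v_j. The graph size, the number of groups k = 3 and the time t are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Underlying positive symmetric weights, no self-loops
W_bar = rng.random((n, n))
W_bar = (W_bar + W_bar.T) / 2
np.fill_diagonal(W_bar, 0.0)

# Assign each node a group phase (k = 3) and build a balanced Hermitian W
theta = rng.choice([0.0, 2 * np.pi / 3, 4 * np.pi / 3], size=n)
I1 = np.diag(np.exp(1j * theta))
W = I1.conj().T @ W_bar @ I1

d = W_bar.sum(axis=1)                    # degrees are unchanged by the phases
P = np.diag(1.0 / d) @ W                 # complex transition matrix
P_bar = np.diag(1.0 / d) @ W_bar         # transition matrix of the underlying graph

t = 7
Pt = np.linalg.matrix_power(P, t)
Pt_bar = np.linalg.matrix_power(P_bar, t)

# Expected pattern: (P^t)_ij = exp(i(theta_j - theta_i)) * (P_bar^t)_ij
expected = np.exp(1j * (theta[None, :] - theta[:, None])) * Pt_bar
print(np.allclose(Pt, expected))         # True
```

Replacing 𝐖 by -𝐖 gives an antibalanced graph and reproduces the alternating factor (-1)^t appearing in the corresponding proposition.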
If G is antibalanced and is not bipartite, with A_ij∈ S_k^1 if there is an edge, then the random walks have different steady states for odd or even times, denoted by 𝐱^*o = (x_j^*o) and 𝐱^*e = (x_j^*e), respectively, where x_j^*o = -exp((σ(j)-1)2π/k)(𝐱(0)^*1̃_1)d_j/(2m), x_j^*e = exp((σ(j)-1)2π/k)(𝐱(0)^*1̃_1)d_j/(2m). The steady state in this case can be obtained by the partition corresponding to the antibalanced structure, and Proposition <ref>. §.§ Application: Spectral clustering By definition, Tr(𝐗^*𝐋𝐗) = ∑_h𝐱^(h)*𝐋𝐱^(h). From the bilinear form of the complex Laplacian in Eq. (<ref>), 𝐱^(h)*𝐋𝐱^(h) = ∑_(i,j), (j,i)∈ E r_ij|x^(h)_i - exp(φ_ij)x^(h)_j|^2 = ∑_v_i∈ X^(h), v_j∉ X^(h) r_ij/|X^(h)| + 1/2∑_a=1^l_h∑_v_i, v_j∈ X^(h)_a r_ij|exp(θ_X^(h)_a) - exp((φ_ij + θ_X^(h)_a))|^2/|X^(h)| + ∑_a < b∑_v_i∈ X^(h)_a, v_j∈ X^(h)_b r_ij|exp(θ_X^(h)_a) - exp((φ_ij + θ_X^(h)_b))|^2/|X^(h)| = 1/|X^(h)|(cut(X^(h), X^(h)c) + 1/2∑_a=1^n_l∑_v_i, v_j∈ X^(h)_a(2 - 2cos(φ_ij))r_ij + ∑_a < b∑_v_i∈ X^(h)_a, v_j∈ X^(h)_b(2 - 2cos(φ_ij - (θ^(h)_a - θ^(h)_b)))r_ij) = (cut(X^(h), X^(h)c) + ∑_a=1^n_l∑_b=1^n_l ccut(X_a^(h), X_b^(h)))/|X^(h)| = gcut(X^(h), X^(h)c)/|X^(h)|. Then from Eq. (<ref>), Tr(𝐗^*𝐋𝐗) retrieves grcut({X^(1)_a}_a=1^n_1, …, {X^(k)_a}_a=1^n_k). § MAGNETIC LAPLACIAN Here, we include more results from examining the changes in the smallest and largest eigenvalues of the normalised magnetic Laplacian while varying integer values of r=2π/θ between 1 and 100. We observe that the results still hold for directed cycles of larger sizes; see Fig. <ref>.
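For reference, the eigenvalue sweep described in this appendix can be reproduced with a few lines of linear algebra. The sketch below (Python/NumPy) assumes the usual normalised magnetic Laplacian 𝐋 = 𝐈 - 𝐃^-1/2(𝐀_s⊙exp(Θ))𝐃^-1/2, with symmetrised adjacency 𝐀_s = (𝐀 + 𝐀^⊤)/2 and phase matrix Θ_uv = θ(A_uv - A_vu), θ = 2π/r; the exact normalisation used in the main text may differ.

```python
import numpy as np

def magnetic_laplacian_extremes(n_nodes, r):
    # Directed cycle 0 -> 1 -> ... -> n-1 -> 0
    A = np.zeros((n_nodes, n_nodes))
    for u in range(n_nodes):
        A[u, (u + 1) % n_nodes] = 1.0

    theta = 2 * np.pi / r
    A_s = (A + A.T) / 2
    Theta = theta * (A - A.T)
    H = A_s * np.exp(1j * Theta)              # Hermitian magnetic adjacency
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_s.sum(axis=1)))
    L = np.eye(n_nodes) - d_inv_sqrt @ H @ d_inv_sqrt

    eig = np.linalg.eigvalsh(L)               # real spectrum, ascending
    return eig[0], eig[-1]

for r in (1, 2, 3, 10, 100):
    print(r, magnetic_laplacian_extremes(n_nodes=5, r=r))
```

Varying n_nodes in this sketch makes it straightforward to confirm the statement about larger directed cycles.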
http://arxiv.org/abs/2307.02828v1
20230706075242
Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks
[ "Xu Han", "Anmin Liu", "Chenxuan Yao", "Yanbo Fan", "Kun He" ]
cs.CV
[ "cs.CV", "cs.CR", "cs.LG" ]
Han et al.: S-FGRM for Highly Transferable Adversarial Attacks Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks Xu Han^1, Anmin Liu^11, Chenxuan Yao^1, Yanbo Fan^2, Kun He^12 0000-0001-7627-4604, Senior Member, IEEE Xu Han, Anmin Liu, Chenxuan Yao and Kun He are with School of Computer Science and Technology, Huazhong University of Science and Technology; and Hopcroft Center on Computing Science, Huazhong University of Science and Technology, Wuhan 430074, China. The first two authors contribute equally. Corresponding author: Kun He. E-mail:[email protected]. Yanbo Fan is with Tencent AI Lab, Shenzhen, China. Manuscript received June 13th, 2023 This work was supported by National Natural Science Foundation (62076105,U22B2017). ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Deep neural networks are known to be vulnerable to adversarial examples crafted by adding human-imperceptible perturbations to the benign input. After achieving nearly 100% attack success rates in white-box setting, more focus is shifted to black-box attacks, of which the transferability of adversarial examples has gained significant attention. In either case, the common gradient-based methods generally use the sign function to generate perturbations on the gradient update, that offers a roughly correct direction and has gained great success. But little work pays attention to its possible limitation. In this work, we observe that the deviation between the original gradient and the generated noise may lead to inaccurate gradient update estimation and suboptimal solutions for adversarial transferability. To this end, we propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM). Specifically, we use data rescaling to substitute the sign function without extra computational cost. We further propose a Depth First Sampling method to eliminate the fluctuation of rescaling and stabilize the gradient update. Our method could be used in any gradient-based attacks and is extensible to be integrated with various input transformation or ensemble methods to further improve the adversarial transferability. Extensive experiments on the standard ImageNet dataset show that our method could significantly boost the transferability of gradient-based attacks and outperform the state-of-the-art baselines. Adversarial examples, adversarial attack, gradient optimization, fast gradient rescaling, depth first sampling. § INTRODUCTION Along with their incredible success in computer vision tasks, the robustness of deep neural networks (DNNs) has also raised serious concern. DNNs have been found to be vulnerable to adversarial examples <cit.> crafted by adding human-imperceptible perturbations to the benign input. 
Worse still, adversarial examples have been demonstrated to be transferable <cit.>, indicating that adversarial examples crafted for current target model can be used to fool other black-box models, allowing for real-world black-box attacks <cit.>. Numerous works have been proposed to investigate the adversarial robustness of deep learning models, including gradient-based single-step methods such as fast gradient sign method (FGSM) <cit.> and its randomized variant <cit.>, multi-step methods such as the basic iterative method (BIM) <cit.> and the projected gradient descent (PGD) attack <cit.>. These adversarial attack methods can achieve high success rates in white-box setting, in which the attacker has access to the architecture, model parameters and other information of the target model. However, they often exhibit low attack success rates in black-box setting for transfer attacks. To address the above challenge, numerous efforts have been devoted to improve the transferability of adversarial examples, including gradient optimization attacks <cit.>, input transformation attacks <cit.> and model ensemble attacks <cit.>. Gradient optimization attacks attempt to boost the black-box performance by advanced gradient calculation, while input transformation attacks aim to find adversarial perturbations with higher transferability by applying various transformations to the inputs. By fusing the logit outputs, model ensemble attacks generate adversarial examples on multiple models simultaneously. The latter two categories are both based on existing gradient-based attacks to further boost the transferability. Among these schemes, little attention has been paid to the possible limitation of the widely used basic gradient update using the sign function. The sign function guarantees an l_∞ bound on the gradient update, as well as sufficient perturbations and a generally proper update direction, making it successful and being widely adopted in most gradient optimization methods. We observe that, however, the sign function could not offer an accurate approximation on the gradient update direction, resulting in just two values of ± 1 in most cases (the value of 0 rarely happens for the real value of gradients). For instance, if two dimensions of the actual gradient are (0.8, 10^-8), the sign gradient will be (1,1) but (1,0) is a more accurate approximation. For another example, as illustrated in Fig. <ref> (a) two dimensions of the gradient, where ϵ is the perturbation budget for the current update, the update direction provided by sign deviates from the direction of both the original gradient g and the actual gradient g' of the black-box model more than rescale. We argue that such inaccuracy of gradient might mislead the optimization path as shown in Fig. <ref> (b) (the color shade represents the values of the function). The iterative process might fall into suboptimum compared to rescale and weaken the adversarial transferability. Based on the above analysis, we propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM) to have a more accurate gradient approximation and boost the transferability of gradient-based attacks. Specifically, we design a data rescaling method for the gradient calculation, that directly exploits the distribution of gradient values in different dimensions and maintains their disparities. Moreover, as most gradient values are extremely small, as shown in Fig. <ref> (a), we further introduce a sampling method, dubbed Depth First Sampling, to stabilize the update direction. 
Our sampling method eliminates the local fluctuation error and further boosts the transferability of the crafted examples. Note that our proposed method is generally applicable to any gradient-based attack and can be combined with a variety of input transformation or ensemble methods to further enhance the adversarial transferability. Extensive experiments and analysis show that our integrated methods mitigate the overfitting issue and further strengthen various gradient-based attacks. The rest of this paper is organized as follows. Section <ref> presents related adversarial attack methods, including gradient optimization attacks, input transformation attacks, and model ensemble attacks. Section <ref> introduces our proposed , including typical gradient optimization attacks and our motivation, the details of our proposed two components, i.e., the fast gradient rescaling method (FGRM) and the depth first sampling method (DFSM), and the final algorithms that can be combined with either MI-FGSM or NI-FGSM. Section <ref> presents extensive experimental comparisons, parameter study and ablation study. In the end, the paper summarizes with concluding remarks in Section <ref>. § RELATED WORKS Due to the lack of model information, attacks in the black-box setting are more challenging. Researchers find that adversarial examples exhibit good transferability across the models <cit.>, and a prevailing stream of black-box attacks is to transfer adversarial examples generated in substitute models into the target model. This section provides a brief overview of the three main categories of transfer-based attacks. §.§.§ Gradient Optimization Attacks Gradient optimization methods <cit.>, which are the most critical category that also serves as the base for other attacks, focus on improving the attack transferability through advanced gradient calculation. FGSM <cit.> is a single-step gradient-based attack to maximize the loss function to the input. I-FGSM <cit.> extends FGSM by update the gradient iteratively. Momentum Iterative attack (MI) <cit.> and Nesterov Iterative attack (NI) <cit.> introduce momentum to the iterative gradient attacks to boost the attack transferability. Smooth gradient attack <cit.> enhances the adversarial transferbility by smoothing the loss surface, such that the crafted examples are robust to Gaussian perturbation. Variance Tuning attack <cit.> uses the gradient variance of the previous iteration to tune the current gradient calculation so as to have a more stable update direction. Different from the above methods that design advanced strategies upon FGSM, our focuses on how to overcome the limitation of the sign function in FGSM. Fast Gradient Sign Method(FGSM). FGSM <cit.> is the first gradient-based attack to generate adversarial examples through a single step to maximize the loss function. x^adv=x + ϵ·sign(∇_xJ(x,y)), where ∇_xJ(x,y) is the gradient of the loss function x. Iterative Fast Gradient Sign Method(I-FGSM). I-FGSM <cit.> iteratively executes FGSM using a smaller step size. x_t+1^adv = x_t^adv + α·sign(∇_x_t^advJ(x_t^adv,y^true)), where x_0^adv=x,α=ϵ/T denotes a small step size. Momentum Iterative Fast Gradient Sign Method (MI-FGSM). MI-FGSM <cit.> introduces the momentum into I-FGSM to boost the adversarial attacks: g_t+1=μ g_t + ∇_x_t^advJ(x_t^adv,y)/||∇_x_t^advJ(x_t^adv,y)||_1, x_t+1^adv=x_t^adv+α·sign(g_t+1), where g_0=0,x_0^adv=x and μ is a decay factor. Nesterov Iterative Fast Gradient Sign Method (NI-FGSM). 
NI-FGSM <cit.> integrates Nesterov's accelerated gradient into the iterative attack so as to effectively look ahead and further improve the transferability: x_t^nes=x_t^adv+α·μ· g_t, g_t+1 = μ· g_t + ∇_x_t^nesJ(x_t^nes,y^true)/||∇_x_t^nesJ(x_t^nes,y^true)||_1, x_t+1^adv=x_t^adv + α·sign(g_t+1). Variance Tuning Gradient-based Attacks(VMI-FGSM). Wang  <cit.> propose Variance Tuning (VT), using the previous gradient variance to tune the current gradient calculation at each iteration. They both try to reduce the variance of the gradient to have a more stable update direction. §.§.§ Input Transformation Attacks The second category of boosting the adversarial transferability is to adopt various input transformations. Diverse Input Method (DIM) <cit.> applies random resizing and padding to the input to alleviate the overfitting issue of I-FGSM, resulting in a high attack success rate under both white-box and black-box settings. Scale-Invariant Method (SIM) <cit.> introduces the scale-invariant property to calculate the gradient over the input image scaled by a factor 1/2^i to generate transferable adversarial examples, where i is a hyper-parameter. Holding the assumption of translation-invariant property, Translation-Invariant Method (TIM) <cit.> adopts a set of translated images to calculate the gradient. Admix <cit.> uses the original label of the input and calculates the gradient on the input image admixed with a small portion of each add-in image to improve the attack transferability. Different input transformation methods can be integrated with gradient-based attacks naturally. Our proposed can also be combined with these input transformations to further improve the attack transferability. Diverse Input Method (DIM). DIM <cit.> applies random resizing and padding to the input to alleviate the overfitting trend of I-FGSM, resulting in high attack success rate under both white-box and black-box settings. Scale-Invariant Method (SIM). SIM <cit.> introduces the scale-invariant property to calculates the gradient over input image scaled by factor 1/2^i to generate transferable adversarial examples,where i is a hyper-parameter. Translation-Invariant Method (TIM). Holding the assumption of translation-invariant property,TIM <cit.> adopts a set of translated images to calculate the gradient.TIM uses a pre-defined kernel accomplish the translation as well as reduce the computational cost greatly. Admix. Admix <cit.>uses the original label of the input to create more transferable adversaries by calculating the gradient on the input image admixed with a small portion of each add-in image,which greatly improves the attack transferbility. Different input transformations like DIM, TIM, SIM can be intergrated with gradient-based attack naturally. In this work, the proposed sampling-based fast gradient rescaling method (S-FGRM) can be also combined with those input transfomations to improve the attack transferability further. §.§.§ Model Ensemble Attacks Model ensemble attacks hold the hypothesis that if an adversary can fool multiple models, it will capture the intrinsic transferable information to fool the targeted model. Liu  <cit.> propose to use an ensemble of multiple models, finding that the generated adversarial examples show better transferability. Dong  <cit.> also use ensemble models by fusing their logit activations to further improve the transferability. Recently, apart from the above methods that attack a fixed set of models, multi-stage ensemble attacks have drawn attention <cit.>. 
Hang  <cit.> propose two types of model ensemble attacks, SCES and SPES. The former adopts a boosting structure while the latter employs the bagging structure. Their results show that the richer diversity of the ensemble model leads to better transferability. EnsembleFool <cit.> dynamically selects the models according to their confidence output at the last iteration. SVRE <cit.> is a recently proposed method that treats the iterative ensemble attack as a stochastic gradient descent optimization process and reduces the gradient variance of the ensemble models to take fully advantage of the ensemble attack. § METHODOLOGY This section first introduces four typical gradient-based attack methods, then illustrates our motivation and introduces the two key components of our method, followed by the detailed description of the proposed S-FGRM. §.§ Typical Gradient Optimization Attacks Gradient optimization methods are the mainstream of adversarial attacks, which also serve as the base for other categories of adversarial attacks. ∙ Fast Gradient Sign Method (FGSM) <cit.> is the basic single-step method for generating adversarial examples: x^adv=x + ϵ·sign(∇_xJ(x,y)), where ∇_xJ(x,y) is the gradient of the loss function x. ∙ Iterative Fast Gradient Sign Method (I-FGSM) <cit.> iteratively executes FGSM using a smaller step size: x_t+1^adv = x_t^adv + α·sign(∇_x_t^advJ(x_t^adv,y^true)), where x_0^adv=x, α=ϵ/T is a small step size. ∙ Momentum Iterative Fast Gradient Sign Method (MI-FGSM) <cit.> introduces momentum into I-FGSM to boost the adversarial attacks: g_t+1=μ g_t + ∇_x_t^advJ(x_t^adv,y)/||∇_x_t^advJ(x_t^adv,y)||_1, x_t+1^adv=x_t^adv+α·sign(g_t+1), where g_0=0,x_0^adv=x and μ is a decay factor. ∙ Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) <cit.> integrates Nesterov's accelerated gradient into the iterative attack to look ahead and further improve the transferability: x_t^nes=x_t^adv+α·μ· g_t, g_t+1 = μ· g_t + ∇_x_t^nesJ(x_t^nes,y^true)/||∇_x_t^nesJ(x_t^nes,y^true)||_1, x_t+1^adv=x_t^adv + α·sign(g_t+1). §.§ Motivation of our Method In the process of generating adversarial examples, gradient-based methods typically use the sign function to estimate the gradient. Since the success of the first gradient-based method of FGSM <cit.>, numerous studies have concentrated on attack strategies to generate more efficient and transferable adversarial examples based on FGSM, but take its basic sign function for granted. Though the sign function guarantees an l_∞ bound, sufficient perturbations and a generally proper update direction, making it successful in various attacks, we observe that it also has some side effects, especially on the direction of the gradient update. The synthetic directions associated with each gradient value are limited since the output of the sign function is either ± 1 or 0, and 0 rarely happens for the real gradient values. Using sign will result in an imprecise estimate on the gradient. As can be seen intuitively from Fig. <ref> (a), the angle between sign(g) and black-box model g' is much larger than rescale(g). Based on the above analysis, we attempt to directly employ the distribution of the initial gradient by simply introducing the data rescaling method to the last step. It aims at retaining the difference between the gradient values as compared to the original sign function. Our Fast Gradient Rescaling Method is straightforward but effective, with little computational overhead analyzed below. One property of sign is to produce enough perturbations. 
Without the sign operation, even a tiny numerical inaccuracy might create a big difference in the rescaling results, because most gradient values are extremely small. To compensate for this deficiency, we propose our Depth First Sampling Method. By sampling around the input image, the negative influence of small value fluctuations is alleviated. Unlike typical sampling methods, starting from the second sample we sample around the previously sampled image rather than the original input image. This strategy is similar to depth-first search. It not only mitigates the rescaling uncertainty but also probes the model decision boundaries, which reduces overfitting to some extent and improves the attack transferability. To sum up, we propose the Sampling-based Fast Gradient Rescaling Method (S-FGRM) to further enhance the adversarial transferability by combining the above two strategies. §.§ Fast Gradient Rescaling Method Typical gradient-based adversarial attack methods, such as I-FGSM, MI-FGSM and NI-FGSM, are inextricably linked to the sign function. There is no doubt that the sign function provides sufficient perturbation for the gradient update, allowing crafted adversarial examples to approach their targeted class more easily. Nevertheless, it is not the optimal or near-optimal solution. The synthetic directions produced by sign are limited, as illustrated in Fig. <ref> (b), which affects both the speed of gradient convergence and the transferability. Gao et al. <cit.> also observe the limitation of sign, and they thereby sort all the gradient units and assign them values designated for intervals. Their method sharply slows down the generation of adversarial examples due to the percentile calculation, which is not applicable in most complex scenarios. To develop a simple yet more efficient method, we introduce data rescaling into the gradient update and define the gradient rescaling function as follows: rescale(g) = c ·sign(g)⊙ f(norm(log_2|g|)), norm(x) = (x-mean(x))/std(x), f(x)=σ(x)=1/(1+e^-x). The notation |g| denotes the element-wise absolute value of the gradient g; taking the absolute value keeps the argument within the domain of the logarithm. The design is also motivated by our observation that the distribution of the original gradient values is almost symmetric about 0. We take the logarithm of |g| to base 2 to represent the fractional part of the gradient values in binary, so that closely distributed values are scattered. The resulting values are linearly transformed through the normalization. To further improve the transferability, data smoothing is performed at the end of the equation by mapping the intermediate result through f(x). After several initial attempts, we find that the sigmoid function is efficient and effective for smoothing the data into the range [0,1], and c is the rescale factor that controls the final range [0,c]. Since we operate on the absolute values and scatter them further, the signs are added back at the end. Our rescale method leverages the differences between gradient values and retains a more accurate direction of the original gradients: for the actual 2D gradient (0.8, 10^-8), the sign gradient is (1,1) while the rescaled gradient is (1.46, 0.54), which still maintains the relative magnitude of the original gradients. log_2(x) is the key function that scatters the possible gradient values, e.g. (1/8, 1/4, 1/2) will be scattered to (-3,-2,-1).
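A minimal sketch of this rescaling (PyTorch-style Python, not the official implementation) is given below; the small constant added before the logarithm to guard against zero gradient entries, and the normalisation over all gradient elements, are our own assumptions. It reproduces the (0.8, 10^-8) example above.

```python
import torch

def rescale(g, c=2.0, eps=1e-12):
    # Fast Gradient Rescaling: c * sign(g) * sigmoid(norm(log2 |g|))
    log_mag = torch.log2(g.abs() + eps)              # scatter closely spaced magnitudes
    centred = log_mag - log_mag.mean()
    normed = centred / centred.pow(2).mean().sqrt()  # zero mean, unit (population) std
    return c * g.sign() * torch.sigmoid(normed)      # re-attach signs, range [0, c]

# sign() maps (0.8, 1e-8) to (1, 1); rescale() keeps the relative magnitudes:
print(rescale(torch.tensor([0.8, 1e-8])))            # approximately (1.46, 0.54)
```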
And other outer wrapper functions work as norm or smooth function to make the scattered gradient smoother and more controllable, while maintaining monotonicity and producing consistent and enough perturbations. Compared with Staircase <cit.>, our gradient rescaling method is more straightforward and more effective that do not have their expensive overhead on the sorting. The detailed analysis is shown in the next section. §.§ Depth First Sampling Method In order to stabilize the effect of gradient rescaling, we adopt sampling to reduce the local fluctuation. Wu et al. <cit.> substitute each gradient with an averaged value during the iterations to alleviate the shattering on gradients. Inspired by the logic of depth-first search, we propose Depth First Sampling Method. We define the potential samples around the current point in the input space as their neighbors. Instead of just sampling around the input images, we also take their neighbors into consideration. As illustrated in Fig. <ref>, when a sampling operation is completed, the next sampling center will shift to the point just sampled. Specifically, given a classifier f with loss function J and the input image x ∈𝒳, our depth first sampling strategy is designed as follows: g_t = 1/N+1∑_i=0^N∇ J(x_t^i,y;θ),     x_t^i+1 = x_t^i+ξ_i. Here x_t^0=x, ξ_i ∼ U[-(β·ϵ)^d,(β·ϵ)^d]. N is the sampling number. ϵ is the maximum perturbation in the current iteration and β is a hyperparameter that determines the sampling range. By accumulating the gradient of sampled examples and benign image, the fluctuation effect is expected to be weakened and the transferability of the crafted examples will be enhanced. §.§ The Final S-FGRM Algorithms To gain a better grasp of our entire methodology, we incorporate the proposed method into MI-FGSM, denoted as Sampling-based Momentum Iterative Fast Gradient Rescaling Method (SMI-FGRM). Specific details are described in Algorithm <ref>. Similarly, we could incorporate the proposed method into NI-FGSM, and obtain an enhanced method SNI-FGRM. In addition, almost any gradient-based attacks can integrate with our S-FGRM to gain more transferable attacks. § EXPERIMENTS This section starts with the experimental setup, then makes a comprehensive comparison with typical gradient-based attacks in a variety of attack settings, including the performance comparisons on attacking a single model, attacking with input transformation, attacking an ensemble of models, attacking advanced defense models. Parameter and ablation studies are reported subsequently. In the end, we do further comparison with two attack methods, that are related to the two components in our method, respectively to show the superiority of our method. §.§ Experimental Setup Dataset  The dataset used for evaluation is from the ImageNet containing 1000 images randomly picked from the ILSVRC 2012 validation set <cit.>, which is widely used in recent gradient-based attacks <cit.>. Models  We conduct experiments on seven popular models, including four normally trained models: Inception-v3 (Inc-v3) <cit.>, Inception-v4 (Inc-v4), Inception-Resnet-v2 (IncRes-v2) <cit.>, Resnet-v2-101 (Res-101) <cit.>, and three adversarially trained models: Inc-v3_ens3, Inc-v3_ens4 and IncRes-v2_ens <cit.>. We also include nine advanced defense models: HGD <cit.>, R&P <cit.>, NIPS-r3, Bit-Red <cit.>, JPEG <cit.>, FD <cit.>, ComDefend <cit.>, RS <cit.> and NRP <cit.>. Baselines  We select two popular gradient-based attack methods as our baselines, MI-FGSM <cit.> and NI-FGSM <cit.>. 
We also integrate our methods with up-to-date input transformations, including DIM <cit.>, TIM <cit.>, SIM <cit.> and their combination, CTM. Our boosted methods are denoted as SM(N)I-DI-FGRM, SM(N)I-TI-FGRM, SM(N)I-SI-FGRM, SM(N)I-CT-FGRM, respectively. Hyperparameters  We follow the prior settings for hyper-parameters <cit.>. We set the maximum perturbation ϵ=16 with the pixel values normalized to [0,1], the number of iterations T=10 and step size α=1.6. We adopt the default decay factor μ=1.0 for MI-FGSM and NI-FGSM. The transformation probability of DIM is set to 0.5. We adopt the Gaussian kernel with kernel size 7 × 7 for TIM. The number of scale copies is set to 5 for SIM. For our method, the parameters for sampling are set to N=12, β=1.5 and c=2. §.§ Attack a Single Model We first perform four adversarial attacks, including MI-FGSM, NI-FGSM, the proposed SMI-FGRM and SNI-FGRM on a single model to test the attack transferability. We generate adversarial examples on the first four normally trained networks respectively and evaluate them on the seven neural networks mentioned above. The attack success rates are presented in Table <ref>. SMI-FGRM and SNI-FGRM maintain nearly 100% success rates in white-box setting. For black-box attacks, they surpass MI-FGSM and NI-FGSM by a large margin. For example, on the first and second rows, we craft adversarial examples on Inc-v3 model. Our proposed SMI-FGRM achieves 82.0% success rate on Inc-v4 and 44.8% success rate on Inc-v3_ens while MI-FGSM only gets 44.3% and 13.8% corresponding success rates. The results demonstrate that S-FGRM boosts the two advanced gradient-based methods significantly and enhances the attack transferability. §.§ Attack with Input Transformations Lin  <cit.> have demonstrated that the Composite Transformation Method (CTM), combined by the most advanced input transformations of DIM, TIM and SIM, improve the adversarial transferability significantly. Thus, we integrate our S-FGRM with CTM to further boost the transferability of gradient-based attacks. As depicted in Table <ref>, our SMI-CT-FGRM and SNI-CT-FGRM consistently outperform the baseline methods by 10%∼37%. Here we also show the results of SMI-CT-FGSM and SNI-CT-FGSM that use FGSM with our DFS sampling. We can observe that adding the DFS samlping could help boost the transferbility, and replacing FGSM with FGRM could further improve the performance. §.§ Attack an Ensemble of Models We continue to follow the ensemble attack settings <cit.>, and integrate our S-FGRM with the ensemble attack method to show that S-FGRM is capable of improving the transferability under multi-model setting. We craft adversarial examples on the ensemble of four normally trained models, , Inc-v3, Inc-v4, IncRes-v2 and Res-101, with averaged logit outputs. As summarized in Table <ref>, our S-FGRM has better performance than the baseline attacks. It is worth noting that the proposed method can improve the attack transferability of the baselines by more than 48% for MI-FGSM and 52% for NI-FGSM. In addition, when combined with CTM, S-FGRM boosts MI-CT-FGSM and NI-CT-FGSM on adversarially trained models by 6.9% and 8.4% on average, respectively. 
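The logit fusion used for these ensemble attacks is compact enough to state explicitly. The sketch below (PyTorch-style, with equal weights assumed since the weighting is not specified here) averages the logits of the white-box models before taking a single cross-entropy loss; the rest of the attack loop is unchanged.

```python
import torch.nn.functional as F

def ensemble_loss(models, x_adv, y):
    # Fuse the logits of all white-box models with equal weights, then
    # compute one cross-entropy loss for the gradient step.
    fused = sum(model(x_adv) for model in models) / len(models)
    return F.cross_entropy(fused, y)
```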
§.§ Attack Advanced Defense Models To further validate the effectiveness, we evaluate our methods on nine models with advanced defenses, including the top-3 defense solutions in the NIPS competition: HGD (rank-1) <cit.>, R&P (rank-2) <cit.>, NIPS-r3 (rank-3)) and six recently proposed defense models: Bit-Red <cit.>, JPEG <cit.>, FD <cit.>, ComDefend <cit.>, RS <cit.> and NRP <cit.>. The results are organized in Table <ref>. Notably, in the single model setting, our methods yield an average attack success rate of 74.0% for SMI-CT-FGRM and 75.0% for SNI-CT-FGRM on Inc-v3 model, outperforming the baselines for more than 18% and 23%. In the multi-model setting, our methods achieve an average attack success rate of 91.1% for SMI-CT-FGRM and 92.5% for SNI-CT-FGRM. The consistent improvement shows great effectiveness and generalization of our S-FGRM for gradient-based attacks to achieve higher transferability. §.§ Parameter Study We conduct parameter study on the hyper-parameters of our method. To remove the influence of other factors, we tune N and β respectively and test the adversarial transferability on the Inc-v3 model without input transformations. §.§.§ On the number of sampled examples N We first analyze the effectiveness of the sampling number N on the attack success rate of S-FGRM, as illustrated in Fig. <ref>. We integrate S-FGRM with MI-FGSM and NI-FGSM, respectively. The parameter of sampling range β is fixed to 1.5, and we tune N = 0,1,2,...,22. When N=0, SMI-FGRM (SNI-FGRM) degrades to MI-FGRM (NI-FGRM). When the value of N increases, the black-box attack success rate increases rapidly and reaches the peak at about N=12. We take N = 12 as the relatively optimal parameter in experiments since a bigger value of N leads to higher computational overhead. Besides, we could also choose a smaller number of N like 6, 8 or 10, the results are still significantly better than the baselines. §.§.§ On the size of sampling range β We then study the impact of the sampling range β. Similarly, we apply our S-FGRM to MI-FGSM and NI-FGSM, respectively, fix N=12, and let β range from 0 to 3. β=0 denotes the corresponding FGRM. As shown in Fig. <ref>, with the increment on the sampling range, the black-box success rate increases and remains stable after β exceeds 1.5. When β>1.5, the performance decays gradually. Thus, we set the sampling range β=1.5 in experiments. §.§.§ On the size of rescale factor c We also test the influence of rescale factor c in our rescale function. It can be observed that the improvements of our S-FGRM are generally good for c = 2, as illustrated in Fig. <ref>. For adversarially trained models, the stability will decline if c increases. We infer that a larger value of rescale factor may cause more pixels to be clipped, making it more difficult to generate effective perturbation. Hence, we choose c = 2 in our final experiments. §.§ Ablation Study We conduct ablation studies to validate the two components of our method, , Fast Gradient Rescaling Method and Depth First Sampling Method. We test our method in the multi-step case to compare with I-FGSM, and the results are shown in Table <ref>. We can observe that our I-FGRM exhibits clearly better results as compared with I-FGSM. After adding the depth first sampling method, our SI-FGRM gains further improvement on I-FGRM. The results show that our rescale function exhibits better performance than the sign function, and the DFS sampling further stabilizes the effects and boosts the performance. 
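Putting the two components together, one iteration of SMI-FGRM can be sketched as follows (PyTorch-style; the ordering of sampling, momentum accumulation and the rescaled update reflects our reading of Algorithm 1, and rescale(·) is the sketch given earlier). With T iterations and the hyperparameters listed in the experimental setup (μ=1.0, N=12, β=1.5, c=2), this should correspond to the configuration used in the experiments.

```python
import torch

def smi_fgrm_step(model, loss_fn, x_adv, y, g_mom, x_nat,
                  eps, alpha, mu=1.0, N=12, beta=1.5):
    # Depth-first sampling: the first gradient is taken at x_adv itself and
    # each subsequent sample is drawn around the *previous* sample.
    grads, x_s = [], x_adv.detach()
    for _ in range(N + 1):
        x_s = x_s.clone().requires_grad_(True)
        loss_fn(model(x_s), y).backward()
        grads.append(x_s.grad.detach())
        x_s = x_s.detach() + torch.empty_like(x_s).uniform_(-beta * eps, beta * eps)
    g_avg = torch.stack(grads).mean(dim=0)

    # MI-style momentum accumulation, then the rescaled (instead of signed) update
    g_mom = mu * g_mom + g_avg / g_avg.abs().sum()
    x_adv = x_adv.detach() + alpha * rescale(g_mom)

    # Project back into the eps-ball around the clean image and the valid range
    x_adv = torch.clamp(torch.min(torch.max(x_adv, x_nat - eps), x_nat + eps), 0, 1)
    return x_adv, g_mom
```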
§.§ Further Comparison We further compare the two components of S-FGRM with SGA (Smooth Gradient Attack) <cit.> and Staircase <cit.> respectively to show the superiority of our method. §.§.§ DFSM vs. SGA SGA samples around the image by adding Gaussian noise to the input and substitutes each gradient with an averaged value. We set the sampling number N=20 to gain SGA's best performance and set N=12 for our method. As depicted in Table <ref>, our SMI-CT-FGSM consistently outperforms sgMI-CT-FGSM as the same for SNI-CT-FGSM over sgNI-CT-FGSM. DFSM can take fewer samples and achieve better better adversarial transferability. The great performance stated above convincingly validates the effectiveness of proposed DFSM. §.§.§ FGRM vs. Staircase To further investigate the effectiveness of our rescale function, we compare with the staircase sign method, termed S^2M, which is a recent work that also focuses on the improvement of the sign function. This method heuristically divides the gradient sign into several segments according to the sorted values of the gradient units, and then assigns each segment with a staircase weight for better crafting adversarial perturbation. Specifically, it assigns the staircase weights according to the p-th percentile of the gradient units at each iteration where p depends on the number K of staircases. We set K=64 to gain Staircase's best performance and set the sampling number N=12 for our method. As shown in the results, our FGRM can achieve comparable performance to Staircase. Note that our method is more efficient than Staircase because we do not need to sort the gradient values to calculate the percentile. Considering the batch input whose size is S = B× H× W× C, where B is the batch size and H, W, C represent the height, width and channel of an image, the Staircase method needs O(Slog S) computational cost per iteration while our rescale method only needs O(S). § CONCLUSION In this work, we revisited the basic operation of the sign function in gradient-based adversarial attack methods, and discussed its limitation caused by the inaccurate approximation of the true gradients. Based on our observation, we proposed a simple yet more efficient method called the Sampling-based Fast Gradient Rescaling Method (S-FGRM) to enhance the adversarial transferability for black-box attacks. Specifically, we introduced a data rescaling method to the gradient update and removed the local fluctuation by the depth first sampling method. In this way, S-FGRM can further improve the adversarial transferability. We then incorporated our S-FGRM method into typical gradient-based attacks. Our methods surpassed the advanced attack methods, MI-FGSM and NI-FGSM, by a large margin. Extensive experiments validated the efficacy and broad applicability of our method. S-FGRM is generally applicable to any gradient-based attack and can be combined with other input transformation or ensemble attacks to enhance the adversarial transferability, which deserves more exploration in future work. IEEEtran
http://arxiv.org/abs/2307.02206v1
20230705111242
VINTERGATAN-GM: How do mergers affect the satellite populations of MW-like galaxies?
[ "Gandhali D. Joshi", "Andrew Pontzen", "Oscar Agertz", "Martin P. Rey", "Justin Read", "Florent Renaud" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO" ]
We investigate the impact of a galaxy's merger history on its system of satellites using the new vintergatan-gm suite of zoom-in hydrodynamical simulations of Milky Way-mass systems. The suite simulates five realizations of the same halo with targeted `genetic modifications' (GMs) of a z ≈ 2 merger, but resulting in the same halo mass at z=0. We find that differences in the satellite stellar mass functions last for 2.25-4.25 Gyr after the z ≈ 2 merger; specifically, the haloes that have undergone smaller mergers host up to 60% more satellites than those of the larger merger scenarios. However, by z=0 these differences in the satellite stellar mass functions have been erased. The differences in satellite numbers seen soon after the mergers are driven by several factors, including the timings of major mergers, the masses and satellite populations of the central and merging systems, and the subsequent extended history of minor mergers. The results persist when measured at fixed central stellar mass rather than fixed time, implying that a host's recent merger history can be a significant source of scatter when reconstructing its dynamical properties from its satellite population. galaxies: dwarf – galaxies: formation – galaxies: evolution – galaxies: interactions § INTRODUCTION The ΛCDM cosmological model predicts that structure formation in the Universe is hierarchical – smaller dark matter (DM) haloes are formed at earlier times and eventually coalesce to form larger structures, subsequently resulting in the merging of the galaxies formed within such haloes <cit.>. The merger history of a galaxy plays a vital role in determining several of its properties such as its star-formation history, stellar composition and morphology. It follows that the large diversity of galaxy properties found in the Universe is at least partly driven by the vast range of merger histories galaxies undergo. The majority of galaxies in the Universe are found in dense environments, either as part of groups and clusters of similar or more massive galaxies, or surrounded by their own system of lower mass (i.e. dwarf) galaxies <cit.>. The number and properties of such satellite galaxies within a system are expected to be intrinsically tied to the precise assembly history of the overall dark matter (DM) halo it is embedded in <cit.>, as well as the interactions between the satellite galaxies and their environment. While environmental processes play a crucial role in determining satellite properties such as colour, star-forming status and morphology <cit.>, the overall census of the satellite population in a system and the kinematics of the satellite ensemble are primarily dependent on the mass of the host halo and its assembly and merger history <cit.>. Dwarf galaxies in particular are an interesting mass regime in which to test models of galaxy formation. Their low masses and shallow potentials make them more responsive to both internal evolutionary processes as well as environmental factors, although their faintness presents significant challenges to their observations.
Nonetheless, over the last few decades the list of observed dwarfs has grown from those around the Milky Way (MW) and Andromeda (M31) and more broadly within the Local Group (LG) <cit.> to other nearby galaxies <cit.>, as well as broader surveys such as SAGA <cit.> and ELVES <cit.>. These observations have shown that while MW-mass hosts exhibit a large diversity in satellite populations, the MW satellite populations are broadly consistent with those of other similar-mass hosts. Satellite abundances around MW-mass hosts can provide an important benchmark for models of structure formation as well as galaxy formation. In the past decades, several observational studies have aimed at understanding the expected satellite abundances and their diversity in such hosts. Within the SAGA survey (Stage II), the 36 MW-mass hosts have a wide range in richness and luminosity functions (LFs) and the MW satellite LF is shown to be consistent within this range <cit.>. Additionally, they find that the total number of satellites (with M_r,0<-12.3) appears to be positively correlated with the K-band luminosity of the central galaxy, as well as the r-band luminosity of the most massive satellite in the system. The former correlation likely reflects the increased number of satellites in more massive systems, while the latter may hint towards more satellites being found in hosts which have had more recent merger histories, under the assumption that due to dynamical friction, more massive satellites coalesce with the central galaxy faster than less massive ones, resulting in larger magnitude gaps at longer time intervals after a merger event. The <cit.> study of 30 MW-like hosts in the ELVES survey finds a similar correlation between satellite abundance and the host's K-band luminosity, and <cit.> show a wide diversity in satellite mass functions (MFs) with the same dataset. The merger history of a central galaxy will inevitably influence its satellite accretion history, which in turn may affect the present day properties of the satellite populations around such systems. Precisely how the merger histories affect the satellites however remains to be understood. <cit.> investigated the impact of merger histories on the satellites of MW mass systems with a compilation of observations of satellites around the MW, M31 and six other MW-mass galaxies. They also find a surprisingly tight positive correlation between the number of satellites in a system (with M_V<-9) and the stellar mass of of the most dominant merger experienced by the galaxy, which they define as the larger of the total accreted stellar mass or the mass of the most massive satellite within the system. With extended data from the SAGA survey (xSAGA), <cit.> show that the number of satellites in a system increases with decreasing r-band magnitude gap between the central galaxy and its most luminous satellite, implying that hosts with earlier accretion histories have fewer satellites at present day, again due to similar arguments as above for the correlation with the luminosity of the brightest satellite. One of the key challenges to obtaining such results from observations is the uncertainty in reconstructing the merger and accretion histories of the hosts. Simulations on the other hand allow us direct access to this information and can provide important insights into the process of satellite accretion. 
There have been several efforts to study dwarf satellites around MW-like systems in simulations that have broadly been able to reproduce the satellite MFs consistent with that of the MW, with the exception of the most massive MW satellites that are not always recovered. These include zoom-in hydrodynamical simulations of MW-, M31- and LG-like hosts e.g. using the FIRE galaxy model <cit.>, the LATTE simulation <cit.>, the APOSTLE suite <cit.>, the NIHAO simulations <cit.>, the ARTEMIS simulations <cit.> and the DC Justice League simulations <cit.>, along with large-volume simulations e.g. IllustrisTNG50 <cit.> and ROMULUS25 <cit.>, as well as using semi-analytical models <cit.>. Furthermore, <cit.> all find positive correlations (to varying degrees) between the number of satellites and one or more host properties including halo mass, central stellar mass or central K-band luminosity, that are at least qualitatively consistent with observational results. These latter results indicate that satellite populations may contain signatures of the host's formation and merger histories. Exploring the impact of merger histories on the satellite populations of galaxies would typically require either a cosmological simulation of a large enough volume to encompass a sample of merger histories representative of the Universe, or a suite of zoom-in simulations covering a range of merger histories. Furthermore, the simulations must have high-enough resolution to accurately model the dwarf population while simultaneously modelling the more massive host system. All of these factors incur significant computational costs. Additionally, in either method, it is not possible to control for other factors that would affect the satellite population along with the merger history itself. The vintergatan-gm suite of simulations <cit.> attempts to circumvent the latter of these limitations by using the GenetIC algorithm <cit.> to perform targeted modifications to the initial conditions (ICs) evolved by the simulation. The modifications change the mass ratio of a specified merger, while preserving the z=0 halo mass of the system and, to the greatest extent possible, the cosmological environment and surrounding structures. This allows us to isolate the impact of a single merger, while controlling for most other factors that would affect the evolution of the central galaxy and its satellites. As shown in <cit.>, the modifications drastically alter the properties of the central galaxy, but its surrounding outer stellar halo is largely unaffected except in the case where the GMs result in a bulge-dominated central galaxy. In this work, we focus on the satellite populations around the central galaxy at various times to understand their response to the central's merger history. We present a brief description of the simulations and methods in Section <ref> and our results in Section <ref>. In Section <ref>, we discuss the physical mechanisms by which the merger histories affect the satellite populations and what this implies for techniques that make use of satellites to infer properties such as the host mass as well as its merger history. Our conclusions are summarized in Section <ref>. § METHODS §.§ Simulations This paper analyses the vintergatan-gm suite of simulations, which is comprised of five zoom-in cosmological hydrodynamical simulations, performed with the code ramses <cit.>. 
The fiducial simulation of the suite focuses on a MW-mass system, whereas the other four simulations are variations where the ICs were genetically modified (GM) to alter the mass ratio of a z ≈ 2 major merger, while maintaining the z=0 halo mass of the system, through the use of the GenetIC code <cit.>. This suite of genetically modified ICs was first introduced in <cit.>, where the authors performed multiple DM-only zoom-in simulations of two MW-mass hosts. The target halo for the fiducial vintergatan-gm simulation was selected from these DMO zooms, themselves based on an initial uniform resolution DMO simulation of a (50 Mpc)^3 volume resolved by 512^3 particles, with a DM particle mass resolution of 1.2 × 10^8. The halo was chosen to be in the MW mass range of 200c≈ 10^12, with no other massive neighbours within 5. The GM simulations alter the mass ratio of the target merger by altering the overdensity field at the position of the halo to 90, 95, 110 and 120 per cent of the fiducial value. Throughout the paper, we use the terms smaller/larger merger scenarios to refer to this decrease/increase in the z ≈ 2 merger mass ratio and the labels Smallest/Smaller/Larger/Largest Merger B to refer to the corresponding simulations (the label `Merger B' is explained below in Section <ref>. Each of the simulations have a mass resolution of m_DM=2.0× 10^5 and an initial minimum gas cell mass of m_gas=3.6× 10^4. The precise setup used for the simulations is described in detail in <cit.>; we provide a brief description here. ramses is an Adaptive Mesh Refinement (AMR) based code which uses a particle-mesh algorithm to solve Poisson's equation and an HLLC Riemann solver for fluid dynamics assuming an ideal gas equation of state with γ=5/3. The AMR strategy allows us to reach spatial resolutions of 20 pc throughout the ISM. We employ the galaxy formation model of the vintergatan simulations <cit.>, which includes prescriptions for star-formation, feedback from SNeIa and SNeII, and stellar winds from O, B, and AGB stars. Stars are formed from cold dense gas (ρ>100 cm^3 , T<100 K) generating stellar particles with an initial mass of 10^4, modelled with a <cit.> initial mass function. Feedback is injected in the form of thermal energy when the cooling radius is resolved by at least 6 gas cells, and in the form of momentum otherwise <cit.>. All simulations assume a flat Λ CDM cosmology with h=0.6727, Ω_m,0=0.3139, Ω_b,0=0.04916, σ_8=0.8440 and n_s=0.9645 <cit.> and linearly span cosmic time between z=99 and z=0. §.§ Halo finding and satellite selection Haloes and subhaloes are identified within the simulation using the ahf halo finder <cit.>, and only (sub)haloes consisting of at least 100 particles (of any type) are retained. Merger trees are constructed using the pynbody <cit.> and tangos <cit.> packages. Unless otherwise specified, (sub)halo properties are measured using all particles within the `halo radius', r_halo. ahf determines an initial r_halo which is defined as either (in the case of haloes) or the subhalo-centric distance to the local minimum in the density field. It then iteratively removes unbound particles and defines a new radius enclosing the bound particles at each iteration, resulting in the final r_halo provided in the halo catalogues. From the halo catalogues, we select satellites around the central galaxy with the following criteria: * The stellar mass must satisfy > 10^6 (i.e. resolved by more than ∼100 stellar particles), unless otherwise specified. 
* The satellite should be found at a distance of d/ > 0.15 and d/ <1 unless specified otherwise. The central region is avoided to remove any unphysical clumps of stellar/gas matter belonging to the central galaxy from the final list of satellites. * The satellite should have a baryon fraction f_ bar=(*+gas)/halo<0.8. This further removes any unphysical haloes with little or no DM, while still allowing for satellites that may have experienced significant tidal mass loss, which is likely to preferentially remove DM from the outer regions of the galaxy. * We ignore subsubhaloes i.e. only retain haloes and their subhaloes. This ensures that we do not include small stellar/gas clumps within galaxies. § RESULTS The genetic modifications applied to the initial conditions were designed to alter the mass ratio of the z ≈ 2 merger experienced by the central galaxy in the fiducial simulation. Panel (a) of Fig. <ref> shows the resultant evolution of the central galaxies' halo (solid curves) and stellar (dashed curves) masses over cosmic time for each of the five GM simulations. Additionally, Table <ref> provides some key properties of the central galaxies at z=0. As seen from Fig. <ref>(a) and Table <ref>, by design, the final halo mass only varies by a maximum of 6%, and the virial radius by at most 2%, compared to the fiducial simulation. The stellar mass can vary by up to 14%, showing that modifications to early merger histories result in only a small amount of scatter in the stellar mass-halo mass relation. Due to the correlations implicit in a ΛCDM cosmology, genetic modifications by construction alter not only the mass ratio of the z ∼ 2 major merger (hereafter referred to as merger B) but also its timing, as well as the properties of two other significant mergers, one each before and after the target merger <cit.>. The three mergers were identified in the fiducial simulation as those that (i) bring in at least 10^8 of stellar mass at the time of infall and (ii) represent a merger mass ratio of at most 1:30. The merging systems are then cross-matched in the GM simulations. The start and end times of each of the three mergers are provided in Table <ref> along with the stellar mass ratios of merger B. The start time is defined as the final time that the merging galaxy is outside the virial radius of the primary galaxy, whereas the end is defined as the time of coalescence. The merger mass ratios were measured at the time of infall of the merging galaxy as detailed in <cit.>. Overall, our results reflect the combined impact of (i) modifying the target merger (ii) the necessary compensations in other mergers to reach the final halo mass within a ΛCDM cosmology and (iii) the non-linear interactions between these resulting from gravitational and baryonic evolution. Such interactions and cosmology-induced correlations are to be expected across all populations of observed or simulated galaxies, and must be taken into account when interpreting observational constraints on merger histories <cit.>. §.§ Satellite mass functions at z=0 We first compare the satellite mass functions around each of the central galaxies at z=0 in Fig. <ref>. Solid lines show the satellites within and dashed lines within 3. We include the latter since the effects of the merger may extend beyond the virial radius itself. The host merger history does not have a significant correlation with either the shape or the normalization of the satellite MFs within either spatial extent. 
Several previous studies have examined the correlation between satellite abundances and host halo ages/assembly times and have found that hosts that formed at later times on average contained more satellites <cit.>, suggesting that in earlier forming haloes, satellites had had more time to merge with the centrals. Although our results seem at odds with those results, note that these previous studies considered satellites that are significantly more massive than those in our sample (≳ 10^9) and do not consider the number of satellites per halo, but rather average satellite populations at a given halo mass. Furthermore, as we show in later sections, we do find similar results when considering the mass functions at earlier times. On the other hand, studies such as <cit.> have found the opposite trend, i.e. that later forming haloes had fewer ultrafaint satellites (≲ 10^5), when including orphaned galaxies (i.e. DM haloes that have been disrupted below the detection limit in DM-only simulations, but that are tracked beyond this point based on their orbital properties). These results highlight that the connection between halo assembly and satellite abundance is not fully established and that satellite masses are key when making such comparisons. While comparisons between simulations may be performed in this way, comparisons to observations such as from the SAGA survey will require careful consideration of a number of different factors including: (i) surface brightness limits affecting both the sample selection and the measured masses/luminosities, (ii) the precise selection criteria used to define satellites, which may be different in simulations and observations, and (iii) the impact of cosmic variance, i.e. variations from galaxy to galaxy based on their local environment and aspects of their history beyond those systematically varied in our study. This variance will be especially important at the high mass end, where numbers are small. We will tackle a comparison to observations in future work, but preliminary analyses indicate that a significant fraction of the low-mass satellites included in our current sample have surface brightness values too low for them to be detected in observations and would therefore not be included in observed samples. The results are also dependent on the precise aperture and filter used to measure mock luminosities. As such, we defer conclusions about direct comparisons between our simulations and observations to a future paper. §.§ The evolution of satellite populations We next turn our attention to the time evolution of the satellite MFs. Fig. <ref>(b) shows the evolution of the total number of satellites around the central galaxy as a function of cosmic time. Note that the raw data are noisy, partly due to some subhaloes not being detected by ahf especially during mergers or in very high-density regions, and have therefore been smoothed with a rolling average. While the number of satellites is similar at z=0 for each of the GM simulations, this is not the case at earlier times. At the beginning of merger A (orange shaded region), each of the GM simulations has approximately the same number of satellites. The start time of this merger varies by at most ∼ 100 Myr across the GM simulations (see Table <ref>). The start time of merger B is more variable, varying by up to 130 Myr across four simulations and in one exceptional case, the simulation, occurring 350 Myr earlier than in the fiducial case.
Although it appears that the end of merger A overlaps with the beginning of merger B in Fig. <ref>, this is merely the result of combining the start and end times of all five simulations and in fact, within each simulation, there is a time interval of ∼ 200-500 Myr between the two events. The combined impact of these differences in the merger timing and mass ratios is evident in Fig. <ref>(a), where the largest merger scenario exhibits an earlier build up of halo and stellar mass at z ∼ 2 compared to the other simulations, and in Fig. <ref>(b), where the evolution of the number of satellites is markedly different, especially at z ≲ 1.5. To show the effects of merger B more clearly, in Fig. <ref>, we show the same evolution of number of satellites, but now as a function of time relative to the end of merger B (i.e. the time of coalescence of the merging system). The start and end of merger A are indicated as open and filled circle markers, and the start of merger B as diamond markers. During merger A, the central halo rapidly accumulates several satellites. Despite some differences in timing, all five simulations have similar numbers of satellites by the end of merger A (Δ N (t_end,mergerA) ∼ 2). In all cases except for the simulation, the number of satellites then shows a sharp decline, starting shortly before the end of merger A and continuing till the beginning of merger B. During merger B, the hosts further accumulate additional satellites, such that by the end of merger B, they again have similar numbers of satellites (Δ N (t_end,mergerB) ∼ 7). Roughly 2-3 Gyr later however, there is a stark difference in the number of satellites (Δ N (t_end,mergerB) ∼ 11-12), with the smaller merger scenarios resulting in a higher number of satellites. In fact, while the smaller merger scenarios continue to accumulate satellites, the number of satellites is seen to decrease in the case of the fiducial simulation and larger merger scenarios. We discuss the likely causes for these trends in Sec. <ref>. In order to determine to what extent the previous results are due to differences in the mass of the central at a given time, in Fig. <ref>, we show the evolution of the number of satellites as a function of the central stellar mass. Diamond markers indicate the mass and number of satellites at z ∼ 1 for comparison. The figure shows that even after controlling for the mass of the central, there are significant differences in the numbers of satellites, with the smallest merger scenario having ∼ 10 more satellites than the largest merger scenario, although this trend is not monotonic with the mass ratio of merger B. We have confirmed that the results are similar when considering the hosts' halo masses instead of stellar mass. The trends are somewhat weaker when considering satellites out to 3, but qualitatively similar, indicating that the scatter is not due to satellites simply travelling beyond the virial radius. Hence, the differences in numbers of satellites soon after the merger persist even when controlling for the central mass, which implies that the central merger history may be a significant source of uncertainty when recovering the properties of a group/cluster, e.g. host mass, from its satellite populations. §.§ Variation of satellite MFs over time Fig. <ref> shows that there are significant differences in the number of satellites that persist for several Gyr. We now examine the satellite MFs over this time period to understand the mass dependence of the previous results. Fig. 
<ref> shows the stellar MFs of satellites within 1 at and 1-5 Gyr after the end of merger B. The MFs remain similar for the first Gyr in all cases except the simulation, which shows a noticeable loss of low- and intermediate-mass (≲ 10^7.5) satellites. By 2 Gyr after the merger however, we find significant differences between the smaller and larger merger scenarios, with the former (latter) having gained (lost) several low-mass satellites. These differences are seen to persist for approximately 3 Gyr; by ∼ 5 Gyr after the end of merger B however, the satellite MFs in all the GM simulations are once again indistinguishable from one another. Thus, the differences in satellite numbers found in previous results are largely driven by the loss/gain of low-mass satellites, while the number of more massive satellites remain approximately constant, changing by at most a few over a ∼ 4-5 Gyr interval after the end of the major merger. To quantify more concretely the timescale over which the impact of the merger is noticeable, we compare the satellite cumulative MFs (cMFs) in 0.25 Gyr increments after the end of merger B and record the maximum difference between the simulation and each of the other simulations in any mass bin, normalized by √( cMF) of the simulation. The merger impact timescale can then be measured as the time over which this maximum difference remains greater than 2√( cMF), which ranges from 2.25-4.25 Gyr across the five simulations. This can be compared to the dynamical time of the system at the virial radius at the end of merger B, which is 1.17-1.33 Gyr across the five simulations; thus the satellite MFs respond to the target merger over ∼2-4 dynamical times as expected. While increasing numbers of low-mass satellites can easily be tied to accretion of associated subhaloes, the mechanisms for declining numbers is harder to pinpoint and can include a) travelling beyond the virial radius of the host i.e. becoming backsplash galaxies <cit.>, b) merging with the central, or c) being disrupted below the detection limits of the halo finder or losing enough mass due to tidal stripping to fall below the stellar mass limit we have imposed. The first of these suggests that the splashback radius is a more physically motivated definition of the host halo boundary rather than the virial radius when determining satellite membership, as has been proposed by several previous studies <cit.>. However we have conducted the same analysis including satellites out to 2 and 3 and while there are indeed differences in the numbers of satellites, especially between mergers A and B, neither choice changes the broad conclusions of this paper. Ideally one would quantify the contribution of these different effects to shaping the satellite population by identifying the fate of each individual subhalo. However, the currently available merger trees are not robust enough to be able to track individual subhaloes through the mergers in order to determine precisely which of these pathways they follow. While ahf can detect most subhaloes around the central halo, during close pericentric passages, we find that subhaloes are often identified as merging into the central galaxy, partly due to physical processes but partly due to misidentification of subhalo particles. This is especially true during mergers, when there are two (and sometimes more) central galaxies, making it challenging to automatically track the satellite galaxies through the mergers to determine which of these pathways are most important. 
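The merger-impact timescale introduced earlier in this section can be phrased compactly in code. The sketch below assumes one plausible reading of the criterion (the per-bin difference between a modified run and the fiducial one, divided by the square root of the fiducial cumulative counts, compared against a threshold of 2); the array layout, sampling cadence and variable names are illustrative rather than taken from the actual analysis scripts.

```python
import numpy as np

def merger_impact_timescale(cmf_ref, cmf_other, times, threshold=2.0):
    """Time after the merger over which two cumulative MFs differ significantly.

    cmf_ref, cmf_other : arrays of shape (n_times, n_mass_bins) with cumulative
        satellite counts for the reference and modified runs, sampled (e.g.)
        every 0.25 Gyr after the end of merger B.
    times : array of shape (n_times,), time since the end of merger B in Gyr.
    """
    # Poisson-style significance of the largest deviation in any mass bin;
    # the floor of 1 avoids division by zero in empty bins.
    sig = np.abs(cmf_other - cmf_ref) / np.sqrt(np.maximum(cmf_ref, 1.0))
    max_sig = sig.max(axis=1)

    above = max_sig > threshold
    if not above.any():
        return 0.0
    # last sampled time at which the difference is still significant
    return times[np.where(above)[0][-1]]
```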
We therefore instead use a manual inspection process to understand how the merger history shapes the satellite population in Section <ref>. § DISCUSSION §.§ Satellite accretion histories The results described above show that altering the mass ratio of a merger experienced by a MW-mass galaxy also changes how it accretes its satellite population. The genetic modification approach highlights the non-linearities implicit in galaxy formation physics and the correlations required by a ΛCDM cosmology, implying that any single merger (whether observed or simulated) cannot be interpreted in isolation from other accretion events. While the target for modification is merger B, in order to achieve the same halo mass at z=0, the modifications must indirectly affect several other mergers, notably the major merger that occurs immediately before (merger A). An increase/decrease in the mass ratio of merger B is partially compensated for by a decrease/increase in the mass ratio of merger A <cit.>. As mentioned in Section <ref>, the limitations of the halo finder and the merger trees prevent us from reliably tracking individual satellites to quantify the role of different physical effects, especially when considering how satellites are eventually destroyed. Here we discuss the various ways in which the genetic modifications affect the merging systems, which can qualitatively shed light on this issue. We first look at the commonalities between the merger histories of the five simulations and then highlight the differences between them. Some of the key features of the merger histories are illustrated in Fig. <ref>, which shows the projected distribution of stellar particles within 200 kpc from the central halo for each of the GM simulations (columns) at the beginning and end of mergers A (rows 1 & 2) and B (rows 3 & 4). The main halo's virial radius is shown by the blue solid circle, while the merging systems A, B and C are shown in dashed brown circles (lightest to darkest). In general, the merger histories are composed of the following sequence of events: * Prior to merger A (the first ∼ 2 Gyr of the simulation), the galaxy's major progenitor and the secondary merging system both experience a chaotic fast accretion period undergoing several close interactions, often involving multiple simultaneously interacting galaxies. Some, if not most, of these interacting galaxies do not have enough time to merge with the corresponding central galaxy, instead becoming part of the satellite populations of the two merging central galaxies. * Merger A proceeds, beginning with the secondary central galaxy crossing the virial radius of the primary system, followed by a first pericentric and then first apocentric passage. Eventually, the secondary central galaxy coalesces with the primary central after a few orbits, while the two satellite systems become mixed. Between the beginning of merger A and the first pericentre, the number of satellites increase, resulting in the first peak in satellite numbers at z ∼ 2.5 in Fig. <ref>(b). * Following the first pericentric passage and before the beginning of merger B, the satellites follow three different pathways: (a) some have velocities large enough to travel beyond the virial radius of the merged system, i.e. become backsplash galaxies, (b) some are temporarily discounted since they are within 0.15 (or indeed are not detected by the halo finder at all within the high density central region), or (c) some merge with the central galaxy. 
All three of these together result in a decrease in the number of satellites at z ∼ 1.9. Note that we find that satellites can have a wide range of merger timescales, with some satellites (usually low mass and/or on radial trajectories) merging within <1 Gyr, while some satellites (usually more massive and/or on circular trajectories) can remain for several Gyrs. This confirms that satellite merging cannot account for the entire decrease in satellite numbers at z ∼ 1.9. * Between mergers A and B, the system can also interact with several smaller infalling galaxies, which do not necessarily merge immediately, but instead add to the satellite population. * Merger B proceeds, with the secondary system bringing in some satellites of its own, leading to an increase in the number of satellites beginning at z ∼ 1.9 * The post-merger-B phase begins, consisting of several further minor mergers, the largest of which is highlighted as merger C. Given this sequence of events which is in common between all simulations, we can now discuss in detail where the merger histories differ, and how this gives rise to differences in the satellite population. * The secondary system involved in merger A assembles during step (i) above, and is most responsible for compensating for the increased/decreased mass budget of merger B in the GM simulations. Thus, it brings in more satellites in the simulation (17 with >10^6) and fewer in the simulation (7), which goes to explain the relative sizes of the peaks seen at z ∼ 2.5 in Fig. <ref>(b). This is also evident in Fig. <ref> (top row), where the size of the merging system A gets progressively smaller as the importance of merger B increases (left to right). This reinforces how even the observed effect of a major merger within a ΛCDM cosmology will not be fully separable from the events leading up to that merger; an observed set of galaxies with different ongoing or recent mergers, even if controlled for fixed central galaxy mass, will exhibit both direct and indirect consequences of the ongoing merger. * During step (iii), as the satellites brought in with merger A settle onto new orbits, two key factors differ between the scenarios: * The numbers of satellites that are travelling beyond the virial radius decrease from the to to simulations, due to the increased kinetic energy of A and therefore its satellite system in the former cases. * The interval between the first pericentric passage during merger A and the beginning of merger B, is shorter for the scenario and longer for the one. In fact, merger B begins 400-450 Myr earlier for the simulation than for the other four simulations. * Between merger A and B, the system further accretes smaller galaxies: 3 in the simulation, 10 in the simulation (very shortly after merger A itself) and 8 in the simulation (approximately evenly spread out between mergers A and B). These can also be seen in the top row of Fig. <ref>, as several smaller galaxies spanning the region between systems A and B, with the fewest seen in the simulation and most in the simulation. These are then directly responsible for the increase in satellite numbers seen shortly after the end of merger A, and in fact, in the case of the simulation, for completely erasing the decrease expected after merger A. In effect, the satellites delivered with merger A in the simulation are instead delivered more gradually between mergers A and B in the case. * The secondary system in merger B contains at most 5 satellites with >10^6 in each of the GM simulations. 
From the scenario to the one, the central mass of B grows significantly (by construction), and the number of small galaxies in its vicinity also grows as is evident from Fig. <ref> (rows 2 and 3). However, these small accreted systems are not formally considered satellites of B, and so can be seen as an extension of the enhanced satellite capture rate already described between the two mergers. * Finally, after merger B, the system continues to accrete smaller galaxies resulting in minor mergers. The number of such events is mildly smaller in the scenario than in the one and they occur at slightly later times; more importantly, the galaxies are noticeably lower mass, on average, in the former case than the latter. The overall picture that emerges from examining these merger histories in detail is that the mass ratio of any merger cannot be fully separated from the overall environment that the system is embedded in. Note that the genetic modification algorithm constructs the closest possible sets of ΛCDM ICs subject to the desired change, and so gives us a lower bound on how environmental factors become intertwined with merger constraints; the interrelationships will become even harder to separate in volume simulations or observations. Furthermore, while there are noticeable impacts of increasing/decreasing the mass ratio of the target merger as described here, the trends are not always monotonic with respect to the GMs applied. This is evident in the initial increase of satellite numbers during merger A in Figs. <ref>(b) and <ref>, where the simulation has more satellites than the one, or in the satellite mass functions in Fig. <ref>(c-e), where the () scenario may have more (fewer) satellites than the () one. §.§ Summary of observational effects Our results show that the impact of varying the mass ratio of an early merger (i.e. several Gyr ago) at a fixed z=0 halo mass is unlikely to be detectable in the satellite MFs at z=0. In future work, we will consider whether the satellite population retains a stronger memory of its merger history through additional factors such as the star-formation histories, quenched fractions and metallicities of the satellites. However, the merger histories do affect the satellite demographics in the system over timescales of up to ∼ 5 Gyr, which implies that the recent merger histories of host systems may be an important source of scatter in the total number of satellites and their mass functions. In our simulations, the timescale over which this effect is important is ∼ 5 Gyr. The results of Fig. <ref> also indicate that the differences in numbers of satellites are not simply the result of different central masses at a given cosmic time or time interval after a major merger. Considering the reverse proposition, the recent merger histories may also be a significant source of uncertainty when using the (dynamical) properties of the satellites in reconstructing the properties of the central galaxy <cit.>. As previously mentioned, direct comparisons with observational quantities require careful consideration of selection effects and measurement biases, so we leave an investigation into the impact of our results on such techniques to a future paper. § CONCLUSIONS We have explored the impact of a galaxy's merger history on its population of dwarf satellites with the vintergatan-gm suite of simulations.
vintergatan-gm is a set of five zoom-in simulations of a MW-mass system consisting of a fiducial simulation exhibiting a major merger at z ≈ 2, and four variations in which the ICs have been modified with the GenetIC algorithm to vary the mass ratio of this z ≈ 2 merger, while attaining the same halo mass at z=0. This targeted modification allows us to isolate, to the maximum extent possible, the impact of varying the merger history of the host system on its satellite abundance and mass function. We summarize our main conclusions here. * The number and mass functions of satellites around the central galaxy at early times are significantly impacted by the hosts' merger history; the smaller (larger) merger scenarios result in more (fewer) satellites being present after the end of the target merger. However, these differences are then compensated for at later times. * The mass functions of the satellites are noticeably impacted by the target merger for ∼ 2.25-4.25 Gyr after it ends (which correspond to ∼ 2-4 dynamical times for the system at the end of the targeted merger), with the smaller merger scenarios resulting in more low-mass (∼10^6-7.5) satellites compared to the larger merger scenarios. * Modifying the early merger history of the central galaxy has little impact on the total number of satellites surrounding it and their mass functions at z=0. This indicates that the satellite MFs may not retain the memory of mergers occurring at early times to present day, even though the mergers can significantly alter the properties of the central galaxy itself. Additional observables such as metallicities and quenched fractions will be examined in future work. Our results indicate that the merger history of a galaxy can have noticeable impacts on its satellite population, but that any such impact on the stellar mass function is retained only for ∼ 2.25-4.25 Gyr after a significant major merger when considered at fixed eventual halo mass. Nonetheless, this is a cosmologically significant time window and therefore it may indeed be possible to link the recent merger activity of a galaxy to its satellite stellar mass function observationally. As previously emphasised by <cit.>, the GM technique highlights that linking any particular observations directly to the consequences of a purported merger is risky in ΛCDM, due to its highly correlated structure. Our results further underscore this point, showing that effects on the satellite population arise for a number of reasons both directly and indirectly linked to a merger. A combination of GM simulations and large volume simulations is probably required to understand how to disentangle these effects for future observations. Although our results are produced for MW-mass systems, logically such trends are likely to be seen at all mass regimes. Finally, as mentioned earlier, while these results consider the impact of the mergers on the satellite mass functions, it is possible that the impact on other satellite properties such as quenched fractions and metallicity distributions may be more pronounced and long-lived. Our future work will focus on understanding the response to these properties along with incorporating observational selection effects in order to make robust comparisons between our simulations and observed MW dwarfs. § AUTHOR CONTRIBUTIONS GJ: Data curation, formal analysis, investigation, writing – original draft. AP: Conceptualization, funding acquisition, methodology, resources, writing – review & editing. 
OA: Conceptualization, resources, writing – review & editing. MR: Conceptualization, data curation, writing – review & editing. JR: Writing – review & editing. FR: Writing – review & editing. § ACKNOWLEDGEMENTS This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 818085 GMGalaxies. This study used computing equipment funded by the Research Capital Investment Fund (RCIF) provided by UKRI, and partially funded by the UCL Cosmoparticle Initiative. OA and FR acknowledge support from the Knut and Alice Wallenberg Foundation, the Swedish Research Council (grant 2019–04659) and the Royal Physiographic Society in Lund. We acknowledge PRACE for awarding us access to Joliot-Curie at GENCI/CEA, France to perform the simulations presented in this work. Parts of the computations and data storage were enabled by resources (allocations SNIC 2022/5-136 and SNIC 2022/6-75) provided by the Swedish National Infrastructure for Computing (SNIC) at National Supercomputer Centre at Linköping University partially funded by the Swedish Research Council through grant agreement no. 2018-05973. MR is supported by the Beecroft Fellowship funded by Adrian Beecroft. This work also made extensive use of the numpy <cit.> and matplotlib <cit.> packages. § DATA AVAILABILITY The data underlying this article will be shared upon reasonable request to the corresponding author.
http://arxiv.org/abs/2307.00522v1
20230702091109
LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance
[ "Linoy Tsaban", "Apolinário Passos" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance Linoy Tsaban, Apolinário Passos ================================================================================== Recent large-scale text-guided diffusion models provide powerful image generation capabilities. Currently, significant effort is devoted to enabling the modification of these images using text alone, as a means of offering intuitive and versatile editing. However, editing proves to be difficult for these generative models due to the inherent nature of editing techniques, which involve preserving certain content from the original image. Conversely, in text-based models, even minor modifications to the text prompt frequently result in an entirely distinct result, making it exceedingly challenging to attain one-shot generation that accurately corresponds to the user's intent. In addition, to edit a real image using these state-of-the-art tools, one must first invert the image into the pre-trained model’s domain - adding another factor affecting the edit quality, as well as latency. In this exploratory report, we propose LEDITS - a combined lightweight approach for real-image editing, incorporating the Edit Friendly DDPM inversion technique with Semantic Guidance, thus extending Semantic Guidance to real image editing, while harnessing the editing capabilities of DDPM inversion as well. This approach achieves versatile edits, both subtle and extensive, as well as alterations in composition and style, while requiring no optimization nor extensions to the architecture. Code and examples are available on the project's webpage: https://editing-images-project.hf.space/. § INTRODUCTION The exceptional realism and diversity of image synthesis using text-guided diffusion models have garnered significant attention, leading to a surge in interest. The advent of large-scale models <cit.> has sparked the imaginations of countless users, granting unprecedented creative freedom in generating images. Consequently, ongoing research endeavors have emerged, focusing on exploring ways to utilize these powerful models for image editing. Recent developments in intuitive text-based editing showcased the ability of diffusion-based methods to manipulate images using text alone <cit.>. In a recent work by Brack et al. <cit.>, the concept of semantic guidance (SEGA) for diffusion models was introduced. SEGA requires no external guidance, is calculated during the existing generation process, and was demonstrated to have sophisticated image composition and editing capabilities. The concept vectors identified with SEGA were demonstrated to be robust and isolated, to combine arbitrarily, and to scale monotonically. Additional studies explored alternative methods of engaging with image generation that are rooted in semantic understanding, such as Prompt-to-Prompt <cit.>, which leverages the semantic information of the model’s cross-attention layers that associates pixels with tokens from the text prompt. While operations on the cross-attention maps enable various changes to the generated image, SEGA does not require token-based conditioning and allows for combinations of multiple semantic changes. Text-guided editing of a real image with state-of-the-art tools requires inverting the given image, which poses a significant challenge to leveraging these models for real images. This requires finding a sequence of noise vectors that, once used as input for a diffusion process, would produce the input image.
The vast majority of diffusion-based editing works use the denoising diffusion implicit model (DDIM) scheme <cit.>, which is a deterministic mapping from a single noise map to a generated image. In the work of Huberman et al. <cit.>, an inversion method for the denoising diffusion probabilistic model (DDPM) scheme was proposed. They suggest a new way to compute the noise maps involved in the diffusion generation process of the DDPM scheme, so that they behave differently than the ones used in regular DDPM sampling: they are correlated across timesteps and have a higher variance. Edit Friendly DDPM inversion was shown to achieve state-of-the-art results on text-based editing tasks (either by itself or in combination with other editing methods) and can generate diverse results for each input image and text, contrary to DDIM inversion-based methods. In this overview we aim to casually explore the combination and integration of the DDPM inversion and SEGA techniques, which we refer to as LEDITS. LEDITS consists of a simple modification to the semantically guided diffusion generation process. This modification extends the SEGA technique to real images as well as introduces a combined editing approach that makes use of the editing capabilities of both methods simultaneously, showing competitive qualitative results with state-of-the-art methods. § RELATED WORK §.§ Edit friendly DDPM inversion A significant challenge of diffusion-based methods for image editing and manipulation is the extension to real images, which requires inverting the generation process. In particular, inversion of the DDPM sampling scheme <cit.> posed a major challenge that was recently addressed by Huberman et al. <cit.>. In their work, they suggest an alternative inversion that consists of a novel way to compute the T+1 noise maps involved in the diffusion generation process of the DDPM scheme, so that they are better suited for editing. In the DDPM sampling scheme, the reverse diffusion process starts from a random noise vector x_T ∼ N(0,ℐ) and iteratively denoises it using x_t-1 = μ̂_t(x_t) + σ_t z_t, t=T,...,1, where z_t are iid standard normal vectors, and μ̂_t(x_t) = √(α̅_t-1) (x_t -√(1-α̅_t)ϵ_θ_t )/ √(α̅_t) + √(1-α̅_t-1 - σ^2_t)ϵ_θ_t, where ϵ_θ_t is the neural network noise estimate of x_t, and σ_t = ηβ_t (1-α̅_t-1)/(1-α̅_t), where β_t stands for a variance schedule and η∈ [0,1], with η=1 corresponding to the original DDPM work. The edit friendly DDPM inversion method constructs the sequence x_1,..,x_T such that structures within the image x_0 are more strongly “imprinted” into the noise maps z_1,...,z_T that are extracted by isolating z_t from eq.<ref>. §.§ Semantic Guidance The concept of Semantic Guidance <cit.> was introduced to enhance fine-grained control over the generation process of text-guided diffusion models. SEGA extends principles introduced in classifier-free guidance by exclusively interacting with the concepts already present in the model’s latent space. The calculation takes place within the ongoing diffusion iteration and is designed to impact the diffusion process across multiple directions. More specifically, SEGA uses multiple textual descriptions e_i, representing the given target concepts of the generated image, in addition to the text prompt p. § LEDITS - DDPM INVERSION X SEGA We propose a straightforward integration that consists of a simple modification to the SEGA scheme of the diffusion denoising process.
This modification allows the flexibility of editing with both methods while still maintaining complete control over the editing effect of each component. First, we apply DDPM inversion on the input image to estimate the latent code associated with it. To apply the editing operations, we perform the denoising loop such that for each timestep t, we repeat the logic used in SEGA but with the DDPM inversion scheme, using the pre-computed noise vectors. More specifically, we start the denoising process with x_T computed with DDPM inversion. Let ϵ_θ_t be the diffusion model’s (DM) noise estimate with semantic guidance (following the SEGA logic) at timestep t. Then we update the latents according to eq.<ref> such that x_t-1 = μ̂_t(x_t; ϵ_θ_t) + σ_t z_t where z_t is the corresponding noise map, obtained from the inversion process. A pseudo-code of our method is summarized in Alg. <ref>. A general overview is provided in Fig. <ref>. § EXPERIMENTS We explored two editing workflows: The first uses DDPM purely for inversion (i.e. target prompt=””), such that a perfect reconstruction of the original image is achieved and editing is done by performing semantic guidance with SEGA edit concepts. The second performs two editing operations simultaneously by choosing a target prompt that reflects a desired output, in addition to semantic guidance with SEGA edit concepts. We observe that both approaches add diversity and versatility to the pure DDPM inversion outputs (figures <ref>, <ref>), and extend the amount of control over edit operations. In addition, our experiments indicate that SEGA guidance vectors generally maintain their properties of robustness and monotonicity, as can be seen in figures <ref>, <ref>. Our qualitative experiments show competitive results with state-of-the-art methods and demonstrate the following properties: fidelity vs. creativity - The combined approach adds another layer of flexibility in tuning the effect of the desired edit, balancing between preserving the original image semantics and applying creative edits. flexibility and versatility - adding SEGA editing concepts on top of the DDPM edit (reflected in the target prompt) maintains the quality of the DDPM edit (Fig. <ref>, <ref>). Complementing capabilities - The combined control can compensate for the limitations of one approach or the other in various cases. In Fig. <ref>, we explore the effect of the skip-steps and target guidance scale (the strength parameter of the classifier-free guidance scale) parameters on the edited output, when using solely DDPM inversion for the editing operation. In comparison, we also examine the effect of SEGA concepts with increasing edit guidance scales when editing solely with SEGA (and using DDPM for inversion). We observe that the pure DDPM inversion edited outputs and pure SEGA edited outputs range differently on the scale of fidelity to the source image and compliance with the target prompt. In addition, given the straightforward integration of the two methods, we maintain the performance advantages of the two techniques, thus making this overall approach lightweight.
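As a rough illustration of the combined update x_t-1 = μ̂_t(x_t; ϵ_θ_t) + σ_t z_t used throughout these experiments, the following Python sketch spells out the denoising loop: the per-step noise maps come from the edit-friendly DDPM inversion, while the noise estimate is the semantically guided one. The helper functions ddpm_inversion and sega_noise_estimate, as well as the posterior_mean/posterior_std accessors on the scheduler, are hypothetical placeholders rather than the actual diffusers API.

```python
def ledits_edit(x0, target_prompt, edit_concepts, model, scheduler,
                ddpm_inversion, sega_noise_estimate):
    """Schematic LEDITS loop (not the actual implementation).

    ddpm_inversion      : assumed to return the starting latent x_T and the
                          edit-friendly noise maps z_1, ..., z_T for the input x0.
    sega_noise_estimate : assumed to return the SEGA-guided noise prediction
                          eps_theta_t for the current latent, prompt and concepts.
    """
    x_T, noise_maps = ddpm_inversion(x0, model, scheduler)

    x_t = x_T
    for t in scheduler.timesteps:                       # t = T, ..., 1
        eps_t = sega_noise_estimate(model, x_t, t, target_prompt, edit_concepts)
        mu_t = scheduler.posterior_mean(x_t, eps_t, t)  # mu_hat_t(x_t; eps_theta_t)
        sigma_t = scheduler.posterior_std(t)
        x_t = mu_t + sigma_t * noise_maps[t]            # pre-computed, correlated z_t
    return x_t                                          # edited latent
```

In this sketch, an empty target prompt and no edit concepts reduce the loop to the pure DDPM-inversion reconstruction, matching the first workflow described above.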
Our results indicate LEDITS generally maintains the individual strengths of each method, including SEGA properties such as robustness and monotonicity. Our qualitative experiments indicate the two techniques can be used simultaneously for independent editing operations, leading to more diverse outputs without harming the fidelity to the semantics of the original image and compliance with the editing prompts. § LIMITATIONS Given the casual and exploratory nature of this report, we leave quantitative evaluations to future work, as they are outside the scope of this report. The purpose of this report was merely to explore and suggest an intuitive editing workflow for real images, demonstrate its qualitative abilities, and potentially drive further work along this path. § METHODS §.§ Implementation The implementation of our approach builds on the Stable Diffusion and Semantic Stable Diffusion pipelines from the HuggingFace diffusers library (https://github.com/huggingface/diffusers). For all experiments and evaluations we used the StableDiffusion-v-1-5 checkpoint. For the DDPM Inversion implementation, we used the official implementation at https://github.com/inbarhub/DDPM_inversion. Our implementation is available on the project's webpage: https://editing-images-project.hf.space/. §.§ Experiments All images used for our analysis were downloaded from https://www.pexels.com/. In all experiments, we configured all methods to use 100 forward and backward steps. Table <ref> summarizes the hyper-parameters we used for all methods to produce the results shown in Fig. <ref>. DDPM and P2P hyper-parameters used for Fig. <ref> were set to values identical to those used in <cit.> for quantitative assessments.
http://arxiv.org/abs/2307.03229v1
20230706180004
Adaptive projected variational quantum dynamics
[ "David Linteau", "Stefano Barison", "Netanel Lindner", "Giuseppe Carleo" ]
quant-ph
[ "quant-ph", "cond-mat.other", "physics.comp-ph" ]
[email protected] Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland Center for Quantum Science and Engineering, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland Center for Quantum Science and Engineering, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland National Centre for Computational Design and Discovery of Novel Materials MARVEL, EPFL, Lausanne, Switzerland Physics Department, Technion, 320003 Haifa, Israel Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland Center for Quantum Science and Engineering, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland National Centre for Computational Design and Discovery of Novel Materials MARVEL, EPFL, Lausanne, Switzerland We propose an adaptive quantum algorithm to prepare accurate variational time evolved wave functions. The method is based on the projected Variational Quantum Dynamics (pVQD) algorithm, that performs a global optimization with linear scaling in the number of variational parameters. Instead of fixing a variational ansatz at the beginning of the simulation, the circuit is grown systematically during the time evolution. Moreover, the adaptive step does not require auxiliary qubits and the gate search can be performed in parallel on different quantum devices. We apply the new algorithm, named Adaptive pVQD, to the simulation of driven spin models and fermionic systems, where it shows an advantage when compared to both Trotterized circuits and non-adaptive variational methods. Finally, we use the shallower circuits prepared using the Adaptive pVQD algorithm to obtain more accurate measurements of physical properties of quantum systems on hardware. Adaptive projected variational quantum dynamics Giuseppe Carleo August 1, 2023 =============================================== § INTRODUCTION Simulation of static and dynamic properties of quantum systems is a notoriously hard task for classical computers. While analytical solutions are available only for specific cases, the amount of time and computing resources required in general by exact numerical methods is exponential in the system size, making the calculations quickly unfeasible. While several approximated many-body numerical techniques have been proposed <cit.>, the accurate description of important physical and chemical phenomena is a very active research problem <cit.>. In recent years, quantum computers have seen significant developments <cit.>, opening potential opportunities for scientific discoveries. Hardware capabilities continue to advance steadily, and we can already create and manipulate complex many-body quantum systems <cit.>. However, large-scale fault-tolerant quantum computers remain far in the future, and contemporary devices show limitations in connectivity, size, and coherence times. Accounting for these constraints, Variational Quantum Algorithms (VQAs) have emerged as the leading strategy to take advantage of near-term quantum devices <cit.>. In this class of algorithms, the solution of a given problem (e.g. finding the ground state of a physical system) is encoded in a quantum circuit that depends on some parameters optimized with the aid of a classical device. 
VQAs have not only been proposed for quantum simulations but also for a variety of different applications, such as machine learning <cit.>, combinatorial optimization <cit.>, quantum error correction <cit.> and compilation <cit.>. Variational schemes have also been introduced in quantum dynamics <cit.>, as a more efficient alternative to Trotterization <cit.>. The accuracy of a variational quantum simulation is then tied to the ability of a parameterized circuit to describe time-evolved wave functions. Even if the initial wave function is well-described by the chosen parameterized circuit, the complexity of the time-evolved wave functions varies with time and the chosen circuit may fail to describe them. The choice of the parameterized circuit is therefore crucial and it remains an open problem in variational simulations of quantum dynamics. Adaptive schemes have been proposed in the context of variational ground state search <cit.> especially to avoid committing to a particular parameterized circuit. The key idea is to construct the parameterized circuit during optimization. By systematically appending specific quantum gates to the parameterized circuit, adaptive methods have been shown to surpass standard approaches in the number of operations required and in the accuracy of the final results. Moreover, adaptive methods provide flexible circuits suited for dynamics simulations <cit.>. However, including an adaptive step for dynamics usually requires measurements of additional quantities, that might be difficult to perform, or auxiliary qubits. In this work, we introduce an adaptive variational algorithm for real-time evolution based on the projected Variational Quantum Dynamics (pVQD) algorithm <cit.>, denoted Adaptive pVQD. The method inherits all the properties of the original pVQD algorithm and integrates the adaptive modification of the parameterized circuit without requiring auxiliary qubits. The structure of this paper is as follows: in <ref> we present the algorithm and describe how the adaptive routine is performed; in <ref> we apply the method to study a time-dependent and a fermionic system, benchmarking the method against Trotter evolution and the original pVQD algorithm; <ref> concludes the paper with some considerations and outlooks on the proposed method. § METHOD Consider a physical system governed by a Hamiltonian H. For clarity of exposition, we focus on time-independent Hamiltonians. However, this is not a requirement of the algorithm, as we explicitly show in <ref>. To simulate the dynamics of quantum systems on a quantum computer, we have to prepare the time-evolved wave function |Ψ(t) ⟩ = U(t)|ψ_0 ⟩, where | ψ_0 ⟩ = U_0 |0 ⟩^⊗ N is the initial state, N indicates the number of qubits representing the physical system and U(t) is the unitary time evolution operator. The Adaptive pVQD algorithm aims to approximate the state |Ψ(t) ⟩ using parameterized states of the form | ψ (θ, A) ⟩ = U(θ, A) | ψ_0 ⟩ = ∏_i e^-i θ_i A_i | ψ_0 ⟩, where each real parameter θ_i ∈θ is associated to a Hermitian generator A_i ∈A. The parameterized state is therefore specified by the set of parameters and operators {θ, A}, and it can be implemented as a quantum circuit. From now on, we adopt the notation | ψ(θ) ⟩≡ | ψ(θ, A) ⟩ and U(θ) ≡ U(θ, A). To simulate a physical model until a final time t_f, we divide the evolution into small time intervals Δ t. We further assume that the parameterized state | ψ ( θ ) ⟩ is a good approximation of the time-evolved wave function at time t. 
The wave function at time t + Δ t can thus be represented by U_TS(Δ t) | ψ ( θ ) ⟩, where U_TS(Δ t) is a Trotter-Suzuki decomposition of the time evolution operator U(Δ t) <cit.>. In this manuscript we use a first-order decomposition, but higher orders can be considered. The choice of the optimal Δ t is problem dependent and will be discussed in <ref>. We then approximate the evolution step t → t+Δ t using a new set of parameters θ→θ + d θ that maximizes the overlap between U_TS(Δ t) | ψ ( θ ) ⟩ and | ψ( θ+d θ) ⟩. This can be achieved by minimizing, with respect to d θ, the infidelity ℐ(d θ,Δ t) = 1 - ℱ(d θ,Δ t), where the fidelity ℱ(d θ,Δ t) = | ⟨ψ( θ+d θ) | U_TS (Δ t) | ψ ( θ ) ⟩ |^2 can be measured on a quantum device <cit.>. At each time step, the initial parameters and operators {θ, A} are those obtained at the previous time step. Assuming that the set of operators A is sufficient to describe the state at time t + Δ t, we find the parameter shift d θ^* that minimizes ℐ(d θ,Δ t). Details about the minimization routine can be found in <ref>. If the minimization routine is not successful, new gates built using generators (A_0^*, A_1^*, ⋯, A_k^*) from the operator pool are added to the parameterized circuit following the adaptive procedure described in <ref>. This adaptive procedure is repeated until the convergence criteria are met. The algorithm starts with the initial state |ψ_0⟩ represented by an empty set of operators. As needed, new gates are added through the time evolution until the chosen final time t_f. The complete procedure is illustrated in <ref>. We note that the original pVQD scheme <cit.> can be recovered by fixing the set of operators A through the entire simulation. §.§ Adaptive step When the parameterized circuit | ψ(θ) ⟩ is not expressive enough to accurately describe the time step evolution by only shifting the variational parameters, we add new gates to it. This is referred to as the adaptive step of the algorithm. Given an operator pool, we determine the best gate to grow the quantum circuit. As first proposed in <cit.>, we look for the operator whose gate maximizes the derivative of the cost function with respect to its parameter. This is achieved by iterating over all the operators in the pool, a step that can be performed in parallel even on different quantum devices. For ground state methods, the cost function is the energy of the system, while the gradient is obtained by measuring the expectation value of the commutator between the trial operator and the Hamiltonian <cit.>. We must ensure that it is possible to apply a similar procedure when dynamics is considered. In the adaptive scheme proposed in <cit.>, this step requires an additional measurement of the variance of the Hamiltonian with respect to the non-adaptive case. In our method, the gradient of the fidelity with respect to the shift dθ_a of parameter θ_a associated with a trial operator A_a has the form ∂ℱ/∂ dθ_a = ⟨ϕ(θ,Δ t )| e^-idθ_a A_a [P_0,iA_a] e^idθ_a A_a|ϕ(θ,Δ t )⟩, where we define the projector P_0 = |ψ_0⟩⟨ψ_0| and the state |ϕ(θ,Δ t )⟩ = U^†(θ) U_TS(Δ t) |ψ(θ )⟩ (see <ref> for the full derivation). To ensure continuity of time evolution, we initially set θ_a=0. We note that measuring the derivative of the fidelity corresponds to measuring the Hermitian operator [P_0,iA_a] with respect to the state U^†(θ)U_TS(Δ t) |ψ(θ )⟩ prepared by the pVQD circuit, modified by the addition of the gate e^idθ_a A_a.
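Both the cost function and the adaptive-step gradient are built from overlaps of this compute–uncompute form. As a minimal illustration, the sketch below estimates the fidelity of Eq. <ref>: the state |0…0⟩ is evolved with U(θ), one Trotter step and U†(θ+dθ), and the probability of returning to |0…0⟩ is read off. The ansatz and trotter_step circuit builders are assumed to be user-supplied, and an exact statevector stands in for the bitstring sampling that would be performed on hardware.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def fidelity_estimate(n_qubits, ansatz, trotter_step, theta, dtheta, dt):
    """|<psi_0| U^dag(theta+dtheta) U_TS(dt) U(theta) |psi_0>|^2, with |psi_0> = |0...0>.

    ansatz(params) and trotter_step(dt) are assumed to return QuantumCircuit
    objects implementing U(params) and U_TS(dt), respectively.
    """
    qc = QuantumCircuit(n_qubits)
    qc.compose(ansatz(theta), inplace=True)                        # U(theta)
    qc.compose(trotter_step(dt), inplace=True)                     # U_TS(dt)
    qc.compose(ansatz(np.asarray(theta) + np.asarray(dtheta)).inverse(),
               inplace=True)                                       # U^dag(theta + dtheta)

    # On hardware this would be the measured frequency of the all-zero bitstring;
    # here we read the amplitude of |0...0> from the exact statevector.
    amp0 = Statevector.from_instruction(qc).data[0]
    return float(np.abs(amp0) ** 2)
```

The infidelity 1 - ℱ then serves directly as the cost function minimized over dθ at each time step.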
However, we evaluate the derivative using the parameter shift rule <cit.>, as for the minimization routine (for more details, see <ref>). This operator search is still parallelizable on multiple devices and does not require auxiliary qubits. The adaptive step has been lately extended and optimized <cit.>, with new protocols that greatly reduce the computational resources required with respect to the first proposal. In particular, we adopt the scheme presented in <cit.>, which increases the depth of the parameterized circuit |ψ(θ)⟩ by 1 at every adaptive step. While the infidelity defined in <ref> remains above a fixed threshold ε, additional adaptive steps are performed. For a detailed description, see <ref>. §.§ Operator pool The choice of the operator pool is a key ingredient in the success and efficiency of adaptive variational algorithms. Having a complete pool of operators is exponentially complex in the size of the physical system, therefore, one has to make some restrictions in its selection. Many different strategies have been proposed, such as the creation of a minimally complete pool <cit.>, the inclusion of symmetries directly in the operator pool <cit.>, or the extension of a complete pool acting on a subsystem of the studied model <cit.>. In the study of the dynamics, we can refer to the Trotterization of the time evolution operator to select the pool. In particular, we consider local (L) and non-local (NL) operator pools, respectively, given by 𝒜_L = { X_i, Y_i, Z_i }_i=0^N-1 ∪{ X_i X_i+1, Y_i Y_i+1, Z_i Z_i+1}_0 ≤ i ≤ N-2, 𝒜_NL = { X_i, Y_i, Z_i, X_i X_j, Y_i Y_j, Z_i Z_j }_0 ≤ i < j ≤ N-1, where X_i, Y_i and Z_i are the Pauli gates acting on site i. Given that 𝒜_L⊂𝒜_NL, we expect that 𝒜_NL will generate more flexible parameterized states. However, not only the choice of 𝒜_NL leads to a measurement overhead, but the non-local nature of this pool may add long-range controlled-NOT (CNOT) gates to the circuit, according to the device connectivity. In <ref>, we report the comparison of the two pools in the study of a fermionic system. § RESULTS We apply the Adaptive pVQD method to the study of the 1D Heisenberg XYZ model with an external driving field and the 2D Fermi-Hubbard model. Both have non-trivial dynamics and open the pVQD method to the study of time-dependent and fermionic systems. In both cases, open boundary conditions were imposed. §.§ Driven Heisenberg model Given an open chain of L spins, the driven Heisenberg XYZ Hamiltonian can be written as: H(t) = ∑_i=0^L-2 ( J_x X_i X_i+1 + J_y Y_i Y_i+1 + J_z Z_i Z_i+1 ) + D(t) where J_x, J_y and J_z are coupling parameters and D(t) is the time-dependent driving term. Many different driving terms can be applied to the system. Among those we choose D(t) = ∑_i=0^L-1 (-1)^i sin( ω t) Z_i , where ω is the driving frequency. First, we investigate the performance of the Adaptive pVQD algorithm with a local pool on a perfect simulator and compare to Trotterized circuits and the original implementation of pVQD. We consider J_x=1, J_y=0.8, J_z=0.6, an antiferromagnetic initial state |ψ_0⟩ = |0101⟩ and a final evolution time t_f = 2. In the classic version of the pVQD algorithm, we have to choose an ansatz for the time evolved wave function. We consider a circuit equivalent to a Trotter step where all the rotations are defined by variational parameters. The Trotter step circuit implementation for this model is shown in <ref>. Both the Trotter and the pVQD full circuits are then obtained repeating this structure n_TS times. 
In particular, we fix n_TS = 10 for the Trotter circuit and n_TS = 3 for the pVQD ansatz. After running the algorithms, we compare the different circuits obtained and use them to measure expectation values of single- and two-spin observables. The results are shown in <ref>. The Trotter circuit lags behind variational methods both in terms of accuracy and resource required. The pVQD method instead achieves accurate results until t=1.0, where the associated circuit becomes shallower than the one of Adaptive pVQD. This phenomenon suggests that in that time step the fixed representation power is the main source of error in the variational calculations. In order to show the flexibility of the Adaptive pVQD, we implement a naive modification of the pVQD algorithm, that we indicate as pVQD with block extensions. In this case, a new step of the Trotterized variational ansatz is added to the circuit once the optimization procedure does not reach the desired accuracy. While this approach does improve the performance of the pVQD algorithm, we remark that it is not general, as it depends on the ansatz structure we have chosen. Furthermore, we can see from the bottom panel of <ref> that the Adaptive pVQD method always produces shallower circuits, with resources tailored to the needs of the specific time step. Then, we extend the study to systems with different sizes. To this end, we define the integrated exact infidelity Δ_ℐ^ex(t_f) = ∫_0^t_f( 1 - |⟨Ψ(t) | ψ(θ) ⟩|^2) dt with respect to the exact wave function |Ψ(t)⟩ computed on a classical device. We again fix a final evolution time t_f = 2 and evaluate Δ_ℐ^ex(t_f) for each method for systems of L ∈[3,11] spins. In particular, we consider a Trotter circuit with a fixed depth of n_TS=10 and one with fixed Trotter step size dt=J_x t/n_TS=0.05, the same we use in the Trotter step of the pVQD algorithm. The results are shown in <ref>, together with the circuit depth at the end of the time evolution. We note that the depth of the Adaptive pVQD circuits increases with the system size and converges to the Trotter circuit with fixed depth, while having a lower integrated exact infidelity. We highlight that <ref> only indicates the depth of the final circuit. In the case of Adaptive pVQD, this corresponds to the deepest circuit prepared. The Trotterized circuits with a fixed Trotter step size yield the lowest values for Δ_ℐ^ex, but n_TS = 40 Trotter steps are required to evolve the system to t_f = 2, resulting in circuits almost one order of magnitude deeper than any other. We performed multiple pVQD simulations with different variational ansätze equivalent to n_TS = 1,2,3,8 Trotter steps. We note that the integrated exact infidelities of pVQD with n_TS = 1,2,3 have a steep transition when the number of gates becomes smaller than the adaptive circuit. This phenomenon suggests that the ansatz limitation is the main source of error in the variational calculations, while the adaptive circuit is able to increase effectively its representation power. On the other hand, the standard pVQD calculation with n_TS = 8 never undergoes this transition. While the integrated exact infidelity is always lower than the adaptive approach, we have to note that the entire time evolution is performed with a deeper circuit. Finally, we note a plateau in the depth of the circuit required by the adaptive algorithm when L>8. This is similar to what observed in <cit.>, where the system size at which the number of gates required saturates depends on the evolution time. 
The adaptive method is able to produce circuits that are orders of magnitude shallower than Trotterization while keeping the accuracy comparable to it. Those circuit can be used to improve the measurement of observables at long times on current quantum devices, which are otherwise limited by the depth of the Trotterization. For this reason, we first run the Adaptive pVQD algorithm on the simulator and use the resulting sets of variational parameters to prepare quantum circuit on the hardware for a system of L=4 spins. In <ref>, we compare observables measured both on those variational wave functions and on Trotterized circuits with a fixed Trotter step size of dt=0.2. In this experiment, the final Trotter circuit has 180 CNOTs. This circuit is beyond what is currently accessible on quantum devices, settling the expectation value of the correlator close to 0 for J_x t > 0.8. On the other hand, the Adaptive pVQD parameterized circuit |ψ(θ)⟩ has 28 CNOTs at the end of the evolution. This improvement in the number of gates is crucial for the application of error mitigation techniques, especially at longer times. In particular, zero noise extrapolation (ZNE <cit.>) was applied both on the noisy simulations and hardware experiments. We choose a quadratic fit on values obtained with noise scaling factors [ 1,2,3 ]. Moreover, when running our algorithm on hardware, we dynamically decouple the idle qubits from the active ones using the standard procedure available in Qiskit <cit.>. We expect that more advanced noise mitigation techniques, such as the one presented in <cit.>, will improve the results on the Trotter circuit. However, this is also true for the variational circuit prepared by the Adaptive pVQD. §.§ Fermi-Hubbard model The Hamiltonian of the Fermi-Hubbard model on a L_x × L_y rectangular lattice is given by H = -J ∑_⟨ i j ⟩, σ (c_i σ^† c_j σ + c_j σ^† c_i σ) + U ∑_i=0^L_x L_y-1 n_i ↑ n_i ↓, where c_i σ^† (c_i σ) is the creation (annihilation) fermionic operator of spin σ∈{↑, ↓} at site i, n_i σ = c_i σ^† c_i σ counts the number of fermions with spin σ at site i and ⟨ i j ⟩ denotes nearest neighbor sites on the lattice. The first term in the Hamiltonian accounts for the hopping between nearest neighbor lattice sites, while the second term describes the on-site interactions. There are several ways to encode fermionic Hamiltonians into qubit operators <cit.>. In this work, we consider the Jordan-Wigner mapping <cit.> to encode each fermionic mode into a qubit. Since every lattice site can host two modes (↑, ↓), N = 2 L_x L_y qubits are required to simulate the Fermi-Hubbard model on a L_x × L_y grid. Before performing a fermionic encoding, we eliminate the spin index via c_i ↑→ c_i and c_i ↓→ c_i + N/2 (and analogously for the number operator n_i σ). We then map each fermionic operator into a spin operator: c_i → Z^⊗ i⊗σ^+ ⊗𝕀^⊗ N-i-1, c_i^† → Z^⊗ i⊗σ^- ⊗𝕀^⊗ N-i-1, where σ^± = (X ± i Y)/2. The local occupation number can then be identified with the local spin number according to n_i ∈{ 0,1 }↦ Z_i ∈{↑, ↓}. More details on the fermionic indexing convention and implementing a Trotter step can be found in <ref>. Given that the mapping requires an ordering of the fermionic modes, operators that are local in space might generate very long Pauli strings. For example, considering the snake-like pattern, vertical hopping terms generate strings of Pauli Z with sizes up to 2L_x -2. This represents a bottleneck in studying fermionic systems with dimensionality higher than 1 on current quantum devices. 
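To make the bookkeeping of the mapping above explicit, the short sketch below writes out the Pauli-string decomposition of a single fermionic operator under the stated convention (a string of Z on the preceding modes, σ± on mode i, identities elsewhere). It is a toy illustration of the formulas in the text rather than the encoding routine used for the simulations.

```python
def jordan_wigner(i, n_modes, dagger=False):
    """Pauli-string decomposition of c_i (or c_i^dag if dagger=True) under the
    Jordan-Wigner convention used in the text:
        c_i -> Z^{(x)i} (x) sigma+ (x) 1^{(x)(N-i-1)},  sigma± = (X ± iY)/2.

    Returns a list of (coefficient, label) pairs, with labels ordered from
    mode 0 (leftmost character) to mode N-1.
    """
    sign = -1j if dagger else +1j            # sigma- for c^dag, sigma+ for c
    prefix, suffix = "Z" * i, "I" * (n_modes - i - 1)
    return [(0.5, prefix + "X" + suffix),
            (0.5 * sign, prefix + "Y" + suffix)]

# Example: c_2 on N = 6 modes -> [(0.5, 'ZZXIII'), (0.5j, 'ZZYIII')]
```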
By restricting the operator pool, we investigate the possibility of describing time-evolved wave functions of the 2D Hubbard model using only local gates. We perform noiseless simulations of a 2 × 2 square lattice, comparing local and non-local operator pools. In particular, we measure the expectation values of a local density operator and a density correlator and count the number of CNOTs in the circuits. We use a fixed-depth Trotter simulation and a pVQD with block extension as a benchmark. The results are shown in <ref>. We do not restrict ourselves to specific quantum hardware to keep the comparison as general as possible. Instead, we count the number of CNOTs in a circuit by transpiling it into an abstract device with all-to-all connectivity that is able to perform arbitrary single qubit rotations and CNOTs. The local and non-local pool variants show different behavior over time in the count of CNOTs. We note that the non-local variant always requires fewer CNOTs than its local counterpart. However, some CNOTs are long-range, and their implementation on an actual device can be challenging on hardware with fixed topology and limited connectivity. In contrast, the circuit structure produced by the local pool variant is already suited for current hardware implementation. More details about the Adaptive pVQD output circuits can be found in <ref>. Moreover, the plot highlights another limitation of the naive pVQD with block extensions approach. Indeed, it always prepare more expensive circuits than the Adaptive pVQD with non local pool and in the end it has similar CNOT requirement to the local variant, while being restricted to use long range gates as required by the Trotter step. § CONCLUSIONS We presented an adaptive version of pVQD, called Adaptive pVQD, to simulate the real-time evolution of quantum systems. This algorithm importantly circumvents the need to choose a fixed ansatz from the beginning of the time evolution. The parameterized quantum circuits are grown adaptively to be both problems and hardware-tailored. This is obtained with a measurement overhead required to determine the best gate among those included in the operator pool. However, the gate search can be operated in parallel and, in our scheme, does not involve circuits with auxiliary qubits. This makes the Adaptive pVQD algorithm more hardware-efficient than standard methods, as exemplified in this work with the driven Heisenberg model on the IBM quantum hardware. Finally, we have simulated the dynamics of the 2D Hubbard model with only local gates, using the adaptive procedure to mitigate one of the bottlenecks that current quantum devices face in studying fermionic systems. Given the ease of introduction to the standard pVQD algorithm and its benefits, we believe that the adaptive procedure described here can be of great use in the simulation of dynamics both for current and future quantum devices. § DATA AVAILABILITY The code used to run the simulations is open source and can be found at <cit.>. It was written in Python using Qiskit <cit.>. Exact classical simulations were performed using Qutip <cit.>. § ACKNOWLEDGMENTS We thank S. Economou for insightful discussions. This research was supported by the NCCR MARVEL, a National Centre of Competence in Research, funded by the Swiss National Science Foundation (grant number 205602). § MINIMIZATION ROUTINE Here we present additional details on the minimization routine that we applied throughout the simulations we presented in the main text. 
In particular, we follow a gradient-based approach, with gradient computed using the parameter-shift rule. Gradient-based and non-gradient-based optimization algorithms for dynamics were previously used for instance in <cit.> and <cit.>, for both ideal and noisy quantum simulations. The parameter shift rule readily applies here since every Pauli string A_i is involutory, i.e. A_i^2 = 𝕀 <cit.>. For a fixed set of operators A, the gradient of the infidelity was thus computed via the parameter shift rule: ∂ℐ/∂ dθ_i = ℐ(θ + dθ + s e_i) - ℐ(θ + dθ - s e_i) / 2 sin s , where e_i is the standard unit vector, and we fixed s=π/2. The gradient was then fed to Adam <cit.>, implemented with the default hyperparameters and a learning rate α = 0.005. The shift parameters d θ^* were consequently obtained using Adam. Two stopping criteria for the optimizer were used: (1) the ℓ_∞-norm of the gradient of the infidelity is below a tolerance and (2) a maximum number of iterations is reached. Fianlly, as showed in <cit.>, an optimization threshold independent from Δ t can be used if ℐ is substituted with ℐ / Δ t ^2 as cost function. § GRADIENT OF THE FIDELITY In this Appendix, we derive the expression for the gradient of the adaptive step presented in <ref>. Given the quantum circuit U(θ) that prepares the state | ψ (θ) = U(θ) | ψ_0 ⟩, we want to add the gate e^-i dθ_a A_a to it, defining the new state | ψ (θ+d θ) ⟩ = U(θ) e^-i dθ_a A_a | ψ_0 ⟩. To obtain the gradient of the fidelity with respect to this added parameter d θ_a, it is convenient to first rewrite the fidelity given in <ref> as follows ℱ(d θ,Δ t) = | ⟨ψ( θ+d θ) | U_TS (Δ t) | ψ ( θ ) ⟩ |^2 = | ⟨ψ_0 | e^i dθ_a A_a U^†(θ) U_TS(Δ t) U(θ) | ψ_0 ⟩ |^2 = ⟨ψ_0 | e^i dθ_a A_a U^†(θ) U_TS(Δ t) U(θ) | ψ_0 ⟩ * ⟨ψ_0 | U^†(θ) U_TS^†(Δ t) U(θ) e^-i dθ_a A_a | ψ_0 ⟩ = ⟨ψ_0 | U^†(θ) U_TS^†(Δ t) U(θ) e^-i dθ_a A_a | ψ_0 ⟩ * ⟨ψ_0 | e^i dθ_a A_a U^†(θ) U_TS(Δ t) U(θ) | ψ_0 ⟩ = ⟨ϕ(θ, Δ t) | e^-i dθ_a A_a P_0 e^i dθ_a A_a | ϕ(θ, Δ t) ⟩, where we defined | ϕ(θ, Δ t) ⟩ = U^†(θ) U_TS(Δ t) U(θ) | ψ_0 ⟩ and the projector P_0 = | ψ_0 ⟩⟨ψ_0 |. One can then readily differentiate with respect to d θ_a to obtain ∂ℱ/∂ dθ_a = ⟨ϕ(θ,Δ t )| e^-idθ_a A_a [P_0,iA_a] e^idθ_a A_a|ϕ(θ,Δ t )⟩ which precisely corresponds to <ref>. § ADAPTIVE STEP IMPLEMENTATION In this Appendix we illustrate the adaptive procedure we have used in our simulations, based on what was initially proposed in <cit.>. The overall procedure can be divided in the following steps: * Compute the gradient of the fidelity for each operator in the pool. To process the pool, the gate e^-i θ_a A_a associated to each trial operator A_a ∈𝒜 is appended one at a time to the current parameterized circuit {θ, A}, resulting in the trial circuit { (θ, 0), (A, A_a ) }. For the trajectory in parameter space to remain continuous, the new parameter θ_a is set to 0. The gradient of the fidelity with respect to the new parameter is computed for each trial circuit using the parameter shift rule, given explicitly in <ref>. * Pick the operator in the pool that maximizes the gradient. Update the parameters and operators to θ→ (θ,0) and A→ (A, A^*), where A^* is the operator A_a that maximizes the fidelity gradient. * Remove the operators in the pool that act on qubit(s) already acted on. Given that the operator A^* obtained in <ref> acts on the qubits indices α, the subset of the operator pool that also acts on at least one index in α, namely 𝒜_α = { A_a | A_a ∈𝒜 acts on β, β∪α∅} should be removed from the current operator pool. 
Hence the pool can be updated as follows: 𝒜→𝒜∖𝒜_α. * Go back to <ref> until the operator pool is empty. * Return the new circuit. The new parameterized circuit is characterized by θ→ (θ, 0, ⋯, 0) and A→ (A, A_0^*, A_1^*, ⋯, A_k^*), assuming that k new operators were added. As stated in the main text, this procedure guarantees that the depth of the parameterized circuit |ψ(θ)⟩ is increased by 1 in each adaptive step <cit.>. § ADAPTIVE PVQD OUTPUT CIRCUITS We illustrate in <ref> examples of parameterized circuits obtained with the Adaptive pVQD algorithm in simulations shown in the main text. Each column of operators in the circuits corresponds to an adaptive step. § TROTTER STEP CIRCUIT ENCODINGS In this Appendix we provide the circuits we used to implement a single Trotter step of the driven Heisenberg and the Hubbard models. The Trotter step in the driven Heisenberg model is implemented with a checkerboard pattern of the two qubit gates R_XX, R_YY, R_ZZ, with a layer of single qubit R_Z at the end. We show a sketch in <ref>. To realize the Trotter circuit for the Hubbard model, we first have to establish an ordering in the latices sites and the modes. We number the sites using a snake-like pattern and, as indicated in the main text, we eliminate the spin index via c_i ↑→ c_i and c_i ↓→ c_i + N/2. Under this ordering, the Jordan-Wigner transformation of the Hamiltonian terms reads c_i↑^† c_j ↑ + c_j ↑^† c_i ↑ ↦1/2[ X_i ∏_k=i+1^j-1 Z_k X_j + Y_i ∏_k=i+1^j-1 Z_k Y_j], c_i↓^† c_j ↓ + c_j ↓^† c_i ↓ ↦1/2[ X_i+N/2∏_k=i+1^j-1 Z_k+N/2 X_j+N/2 + + Y_i+N/2∏_k=i+1^j-1 Z_k+N/2 Y_j+N/2], n_i ↑ n_i ↓ ↦1/4 (𝕀-Z_i)(𝕀-Z_i+N/2), where we assumed j>i without loss of generality. Given the mapped Hamiltonian, the Trotter step can not be implemented using only R_XX, R_YY, R_ZZ and R_Z gates. Indeed, the non locality of the mapping requires some multi-qubit rotation with size up to 2L_x. The two multi-qubit gates are the rotations generated by the Pauli strings XZZX and YZZY, which can be decomposed as shown in <cit.>. <ref> presents our implementation. 66 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Sandvik et al.(2010)Sandvik, Avella, and Mancini]2010_sandvik_computational_methods author author A. W. Sandvik, author A. Avella, and author F. Mancini, title title Computational studies of quantum spin systems, in https://doi.org/10.1063/1.3518900 booktitle AIP Conference Proceedings (publisher AIP, year 2010)NoStop [Carleo and Troyer(2017)]2017_carleo_nqs author author G. Carleo and author M. Troyer, title title Solving the quantum many-body problem with artificial neural networks, https://doi.org/10.1126/science.aag2302 journal journal Science volume 355, pages 602 (year 2017)NoStop [Orús(2019)]2019_orus_tensor_networks_review author author R. Orús, title title Tensor networks for complex quantum systems, https://doi.org/10.1038/s42254-019-0086-7 journal journal Nature Reviews Physics volume 1, pages 538 (year 2019)NoStop [Carleo et al.(2019)Carleo, Cirac, Cranmer, Daudet, Schuld, Tishby, Vogt-Maranto, and Zdeborová ]2019_carleo_ml_in_physics author author G. Carleo, author I. Cirac, author K. Cranmer, author L. Daudet, author M. Schuld, author N. Tishby, author L. Vogt-Maranto, and author L. 
Zdeborová , title title Machine learning and the physical sciences, journal journal Reviews of Modern Physics volume 91, https://doi.org/10.1103/revmodphys.91.045002 10.1103/revmodphys.91.045002 (year 2019)NoStop [Cady et al.(2008)Cady, Crabtree, and Brudvig]Cady2008 author author C. W. Cady, author R. H. Crabtree, and author G. W. Brudvig, title title Functional models for the oxygen-evolving complex of photosystem ii, https://doi.org/https://doi.org/10.1016/j.ccr.2007.06.002 journal journal Coordination Chemistry Reviews volume 252, pages 444 (year 2008), note the Role of Manganese in Photosystem IINoStop [Schimka et al.(2010)Schimka, Harl, Stroppa, Grüneis, Marsman, Mittendorfer, and Kresse]Schimka2010 author author L. Schimka, author J. Harl, author A. Stroppa, author A. Grüneis, author M. Marsman, author F. Mittendorfer, and author G. Kresse, title title Accurate surface and adsorption energies from many-body perturbation theory, https://doi.org/10.1038/nmat2806 journal journal Nature Materials volume 9, pages 741 (year 2010)NoStop [Leggett(2006)]Leggett2006 author author A. J. Leggett, title title What DO we know about high Tc?, https://doi.org/10.1038/nphys254 journal journal Nature Physics volume 2, pages 134 (year 2006)NoStop [Balents(2010)]Balents2010 author author L. Balents, title title Spin liquids in frustrated magnets, https://doi.org/10.1038/nature08917 journal journal Nature volume 464, pages 199 (year 2010)NoStop [Arute et al.(2019)Arute et al.]2019_arute_quantum_supremacy author author F. Arute et al., title title Quantum supremacy using a programmable superconducting processor, https://doi.org/10.1038/s41586-019-1666-5 journal journal Nature volume 574, pages 505 (year 2019)NoStop [Zhong et al.(2020)Zhong et al.]2020_zhong_quantum_advantage_photons author author H.-S. Zhong et al., title title Quantum computational advantage using photons, https://doi.org/10.1126/science.abe8770 journal journal Science volume 370, pages 1460 (year 2020)NoStop [Huang et al.(2022)Huang, Broughton, Cotler, Chen, Li, Mohseni, Neven, Babbush, Kueng, Preskill, and McClean]2022_huang_quantum_advantage author author H.-Y. Huang, author M. Broughton, author J. Cotler, author S. Chen, author J. Li, author M. Mohseni, author H. Neven, author R. Babbush, author R. Kueng, author J. Preskill, and author J. R. McClean, title title Quantum advantage in learning from experiments, https://doi.org/10.1126/science.abn7293 journal journal Science volume 376, pages 1182 (year 2022)NoStop [Dolde et al.(2014)Dolde, Bergholm, Wang, Jakobi, Naydenov, Pezzagna, Meijer, Jelezko, Neumann, Schulte-Herbrüggen, Biamonte, and Wrachtrup]2014_dolde_optimal_control author author F. Dolde, author V. Bergholm, author Y. Wang, author I. Jakobi, author B. Naydenov, author S. Pezzagna, author J. Meijer, author F. Jelezko, author P. Neumann, author T. Schulte-Herbrüggen, author J. Biamonte, and author J. Wrachtrup, title title High-fidelity spin entanglement using optimal control, journal journal Nature Communications volume 5, https://doi.org/10.1038/ncomms4371 10.1038/ncomms4371 (year 2014)NoStop [Waldherr et al.(2014)Waldherr, Wang, Zaiser, Jamali, Schulte-Herbrüggen, Abe, Ohshima, Isoya, Du, Neumann, and Wrachtrup]2014_waldherr_optimal_control author author G. Waldherr, author Y. Wang, author S. Zaiser, author M. Jamali, author T. Schulte-Herbrüggen, author H. Abe, author T. Ohshima, author J. Isoya, author J. F. Du, author P. Neumann, and author J. 
Wrachtrup, title title Quantum error correction in a solid-state hybrid spin register, https://doi.org/10.1038/nature12919 journal journal Nature volume 506, pages 204 (year 2014)NoStop [Nam et al.(2019)Nam et al.]2019_nam_ionq_progress author author Y. Nam et al., https://doi.org/10.48550/ARXIV.1902.10171 title Ground-state energy estimation of the water molecule on a trapped ion quantum computer (year 2019)NoStop [Wan et al.(2020)Wan, Jördens, Erickson, Wu, Bowler, Tan, Hou, Wineland, Wilson, and Leibfried]2020_wan_boulder_ion_traps author author Y. Wan, author R. Jördens, author S. D. Erickson, author J. J. Wu, author R. Bowler, author T. R. Tan, author P.-Y. Hou, author D. J. Wineland, author A. C. Wilson, and author D. Leibfried, title title Ion transport and reordering in a 2d trap array, https://doi.org/10.1002/qute.202000028 journal journal Advanced Quantum Technologies volume 3, pages 2000028 (year 2020)NoStop [Hughes et al.(2020)Hughes, Schäfer, Thirumalai, Nadlinger, Woodrow, Lucas, and Ballance]2020_hughes_high_fidelity_ent_gate author author A. C. Hughes, author V. M. Schäfer, author K. Thirumalai, author D. P. Nadlinger, author S. R. Woodrow, author D. M. Lucas, and author C. J. Ballance, title title Benchmarking a high-fidelity mixed-species entangling gate, https://doi.org/10.1103/PhysRevLett.125.080504 journal journal Phys. Rev. Lett. volume 125, pages 080504 (year 2020)NoStop [Kim(2023)]Kim2023 title title Evidence for the utility of quantum computing before fault tolerance, https://doi.org/10.1038/s41586-023-06096-3 journal journal Nature volume 618, pages 500 (year 2023)NoStop [Cerezo et al.(2021)Cerezo, Arrasmith, Babbush, Benjamin, Endo, Fujii, McClean, Mitarai, Yuan, Cincio, and Coles]2021_cerezo_vqa_rev author author M. Cerezo, author A. Arrasmith, author R. Babbush, author S. C. Benjamin, author S. Endo, author K. Fujii, author J. R. McClean, author K. Mitarai, author X. Yuan, author L. Cincio, and author P. J. Coles, title title Variational quantum algorithms, https://doi.org/10.1038/s42254-021-00348-9 journal journal Nature Reviews Physics volume 3, pages 625 (year 2021)NoStop [Bharti et al.(2022)Bharti, Cervera-Lierta, Kyaw, Haug, Alperin-Lea, Anand, Degroote, Heimonen, Kottmann, Menke, Mok, Sim, Kwek, and Aspuru-Guzik]2022_bharti_nisq_algo_rev author author K. Bharti, author A. Cervera-Lierta, author T. H. Kyaw, author T. Haug, author S. Alperin-Lea, author A. Anand, author M. Degroote, author H. Heimonen, author J. S. Kottmann, author T. Menke, author W.-K. Mok, author S. Sim, author L.-C. Kwek, and author A. Aspuru-Guzik, title title Noisy intermediate-scale quantum algorithms, https://doi.org/10.1103/RevModPhys.94.015004 journal journal Rev. Mod. Phys. volume 94, pages 015004 (year 2022)NoStop [Yuan et al.(2019)Yuan, Endo, Zhao, Li, and Benjamin]2019_yuan_variational_simulations_rev author author X. Yuan, author S. Endo, author Q. Zhao, author Y. Li, and author S. C. Benjamin, title title Theory of variational quantum simulation, https://doi.org/10.22331/q-2019-10-07-191 journal journal Quantum volume 3, pages 191 (year 2019)NoStop [Peruzzo et al.(2014)Peruzzo, McClean, Shadbolt, Yung, Zhou, Love, Aspuru-Guzik, and O'Brien]2014_peruzzo_vqe author author A. Peruzzo, author J. McClean, author P. Shadbolt, author M.-H. Yung, author X.-Q. Zhou, author P. J. Love, author A. Aspuru-Guzik, and author J. L. 
O'Brien, title title A variational eigenvalue solver on a photonic quantum processor, journal journal Nature Communications volume 5, https://doi.org/10.1038/ncomms5213 10.1038/ncomms5213 (year 2014)NoStop [Biamonte et al.(2017)Biamonte, Wittek, Pancotti, Rebentrost, Wiebe, and Lloyd]2017_biamonte_quantum_machine_learning author author J. Biamonte, author P. Wittek, author N. Pancotti, author P. Rebentrost, author N. Wiebe, and author S. Lloyd, title title Quantum machine learning, https://doi.org/10.1038/nature23474 journal journal Nature volume 549, pages 195 (year 2017)NoStop [Cong et al.(2019)Cong, Choi, and Lukin]2019_cong_quantum_cnn author author I. Cong, author S. Choi, and author M. D. Lukin, title title Quantum convolutional neural networks, https://doi.org/10.1038/s41567-019-0648-8 journal journal Nature Physics volume 15, pages 1273 (year 2019)NoStop [Farhi et al.(2014)Farhi, Goldstone, and Gutmann]2014_farhi_quantum_combinatorial_opt author author E. Farhi, author J. Goldstone, and author S. Gutmann, https://doi.org/10.48550/ARXIV.1411.4028 title A quantum approximate optimization algorithm (year 2014)NoStop [Wang et al.(2018)Wang, Hadfield, Jiang, and Rieffel]2018_wang_qaoa_maxcut author author Z. Wang, author S. Hadfield, author Z. Jiang, and author E. G. Rieffel, title title Quantum approximate optimization algorithm for maxcut: A fermionic view, https://doi.org/10.1103/PhysRevA.97.022304 journal journal Phys. Rev. A volume 97, pages 022304 (year 2018)NoStop [Johnson et al.(2017)Johnson, Romero, Olson, Cao, and Aspuru-Guzik]2017_johnson_var_error_correction author author P. D. Johnson, author J. Romero, author J. Olson, author Y. Cao, and author A. Aspuru-Guzik, https://doi.org/10.48550/ARXIV.1711.02249 title Qvector: an algorithm for device-tailored quantum error correction (year 2017)NoStop [Xu et al.(2021)Xu, Benjamin, and Yuan]2021_xu_var_error_correction author author X. Xu, author S. C. Benjamin, and author X. Yuan, title title Variational circuit compiler for quantum error correction, https://doi.org/10.1103/PhysRevApplied.15.034068 journal journal Phys. Rev. Appl. volume 15, pages 034068 (year 2021)NoStop [Khatri et al.(2019)Khatri, LaRose, Poremba, Cincio, Sornborger, and Coles]2019_khatri_var_quantum_compiling author author S. Khatri, author R. LaRose, author A. Poremba, author L. Cincio, author A. T. Sornborger, and author P. J. Coles, title title Quantum-assisted quantum compiling, https://doi.org/10.22331/q-2019-05-13-140 journal journal Quantum volume 3, pages 140 (year 2019)NoStop [Sharma et al.(2020)Sharma, Khatri, Cerezo, and Coles]2020_sharma_var_quantum_compiling author author K. Sharma, author S. Khatri, author M. Cerezo, and author P. J. Coles, title title Noise resilience of variational quantum compiling, https://doi.org/10.1088/1367-2630/ab784c journal journal New Journal of Physics volume 22, pages 043006 (year 2020)NoStop [Jones and Benjamin(2022)]2022_jones_var_quantum_compilation author author T. Jones and author S. C. Benjamin, title title Robust quantum compilation and circuit optimisation via energy minimisation, https://doi.org/10.22331/q-2022-01-24-628 journal journal Quantum volume 6, pages 628 (year 2022)NoStop [Li and Benjamin(2017)]2017_ying_vqa_example author author Y. Li and author S. C. Benjamin, title title Efficient variational quantum simulator incorporating active error minimization, https://doi.org/10.1103/PhysRevX.7.021050 journal journal Phys. Rev. 
X volume 7, pages 021050 (year 2017)NoStop [Cîrstoiu et al.(2020)Cîrstoiu, Holmes, Iosue, Cincio, Coles, and Sornborger]2020_crstoiu_vff author author C. Cîrstoiu, author Z. Holmes, author J. Iosue, author L. Cincio, author P. J. Coles, and author A. Sornborger, title title Variational fast forwarding for quantum simulation beyond the coherence time, journal journal npj Quantum Information volume 6, https://doi.org/10.1038/s41534-020-00302-0 10.1038/s41534-020-00302-0 (year 2020)NoStop [Yao et al.(2021)Yao, Gomes, Zhang, Wang, Ho, Iadecola, and Orth]2021_yao_adaptive_tdva author author Y.-X. Yao, author N. Gomes, author F. Zhang, author C.-Z. Wang, author K.-M. Ho, author T. Iadecola, and author P. P. Orth, title title Adaptive variational quantum dynamics simulations, https://doi.org/10.1103/PRXQuantum.2.030307 journal journal PRX Quantum volume 2, pages 030307 (year 2021)NoStop [Lin et al.(2021)Lin, Dilip, Green, Smith, and Pollmann]Hsuan_2021 author author S.-H. Lin, author R. Dilip, author A. G. Green, author A. Smith, and author F. Pollmann, title title Real- and imaginary-time evolution with compressed quantum circuits, https://doi.org/10.1103/PRXQuantum.2.010342 journal journal PRX Quantum volume 2, pages 010342 (year 2021)NoStop [Barratt et al.(2021)Barratt, Dborin, Bal, Stojevic, Pollmann, and Green]Barratt_2021 author author F. Barratt, author J. Dborin, author M. Bal, author V. Stojevic, author F. Pollmann, and author A. G. Green, title title Parallel quantum simulation of large systems on small NISQ computers, https://doi.org/10.1038/s41534-021-00420-3 journal journal npj Quantum Information volume 7, pages 79 (year 2021)NoStop [Barison et al.(2021)Barison, Vicentini, and Carleo]2021_barison_p-vqd author author S. Barison, author F. Vicentini, and author G. Carleo, title title An efficient quantum algorithm for the time evolution of parameterized circuits, https://doi.org/10.22331/q-2021-07-28-512 journal journal Quantum volume 5, pages 512 (year 2021)NoStop [Berthusen et al.(2022)Berthusen, Trevisan, Iadecola, and Orth]2022_berthusen_dynamics_on_hardware author author N. F. Berthusen, author T. V. Trevisan, author T. Iadecola, and author P. P. Orth, title title Quantum dynamics simulations beyond the coherence time on noisy intermediate-scale quantum hardware by variational trotter compression, journal journal Physical Review Research volume 4, https://doi.org/10.1103/physrevresearch.4.023097 10.1103/physrevresearch.4.023097 (year 2022)NoStop [Barison et al.(2022)Barison, Vicentini, Cirac, and Carleo]2022_barison_vfk author author S. Barison, author F. Vicentini, author I. Cirac, and author G. Carleo, title title Variational dynamics as a ground-state problem on a quantum computer, https://doi.org/10.1103/PhysRevResearch.4.043161 journal journal Phys. Rev. Res. volume 4, pages 043161 (year 2022)NoStop [Miessen et al.(2023)Miessen, Ollitrault, Tacchino, and Tavernelli]Miessen2023_te author author A. Miessen, author P. J. Ollitrault, author F. Tacchino, and author I. Tavernelli, title title Quantum algorithms for quantum dynamics, https://doi.org/10.1038/s43588-022-00374-2 journal journal Nature Computational Science volume 3, pages 25 (year 2023)NoStop [Trotter(1959)]1959_trotter author author H. F. Trotter, title title On the product of semi-groups of operators, http://www.jstor.org/stable/2033649 journal journal Proceedings of the American Mathematical Society volume 10, pages 545 (year 1959)NoStop [Suzuki(1976)]1976_suzuki author author M. 
Suzuki, title title Generalized trotter's formula and systematic approximants of exponential operators and inner derivations with applications to many-body problems, @noop journal journal Communications in Mathematical Physics volume 51, pages 183 (year 1976)NoStop [Abrams and Lloyd(1997)]1997_abrams_trotter_for_simulation author author D. S. Abrams and author S. Lloyd, title title Simulation of many-body fermi systems on a universal quantum computer, https://doi.org/10.1103/PhysRevLett.79.2586 journal journal Phys. Rev. Lett. volume 79, pages 2586 (year 1997)NoStop [Ortiz et al.(2001)Ortiz, Gubernatis, Knill, and Laflamme]2001_ortiz_trotter_for_simulation author author G. Ortiz, author J. E. Gubernatis, author E. Knill, and author R. Laflamme, title title Quantum algorithms for fermionic simulations, https://doi.org/10.1103/PhysRevA.64.022319 journal journal Phys. Rev. A volume 64, pages 022319 (year 2001)NoStop [Tacchino et al.(2019)Tacchino, Chiesa, Carretta, and Gerace]Tacchino_2019 author author F. Tacchino, author A. Chiesa, author S. Carretta, and author D. Gerace, title title Quantum computers as universal quantum simulators: State-of-the-art and perspectives, https://doi.org/10.1002/qute.201900052 journal journal Advanced Quantum Technologies volume 3, pages 1900052 (year 2019)NoStop [Grimsley et al.(2019)Grimsley, Economou, Barnes, and Mayhall]2019_grimsley_adapt-vqe author author H. R. Grimsley, author S. E. Economou, author E. Barnes, and author N. J. Mayhall, title title An adaptive variational algorithm for exact molecular simulations on a quantum computer, https://doi.org/10.1038/s41467-019-10988-2 journal journal Nature Communications volume 10, pages 3007 (year 2019)NoStop [Tang et al.(2021)Tang, Shkolnikov, Barron, Grimsley, Mayhall, Barnes, and Economou]2021_tang_qubit_adapt_vqe author author H. L. Tang, author V. Shkolnikov, author G. S. Barron, author H. R. Grimsley, author N. J. Mayhall, author E. Barnes, and author S. E. Economou, title title Qubit-ADAPT-VQE: An adaptive algorithm for constructing hardware-efficient ansätze on a quantum processor, journal journal PRX Quantum volume 2, https://doi.org/10.1103/prxquantum.2.020310 10.1103/prxquantum.2.020310 (year 2021)NoStop [Van Dyke et al.(2022)Van Dyke, Barron, Mayhall, Barnes, and Economou]2022_van_dyke_pool_tiling author author J. S. Van Dyke, author G. S. Barron, author N. J. Mayhall, author E. Barnes, and author S. E. Economou, https://doi.org/10.48550/ARXIV.2206.14215 title Scaling adaptive quantum simulation algorithms via operator pool tiling (year 2022)NoStop [Anastasiou et al.(2022)Anastasiou, Chen, Mayhall, Barnes, and Economou]2022_economou_tetris author author P. G. Anastasiou, author Y. Chen, author N. J. Mayhall, author E. Barnes, and author S. E. Economou, https://doi.org/10.48550/ARXIV.2209.10562 title Tetris-adapt-vqe: An adaptive algorithm that yields shallower, denser circuit ansätze (year 2022)NoStop [Gomes et al.(2023)Gomes, Williams-Young, and de Jong]Niladri_2023 author author N. Gomes, author D. B. Williams-Young, and author W. A. de Jong, https://doi.org/10.48550/ARXIV.2302.03093 title Computing the many-body green's function with adaptive variational quantum dynamics (year 2023)NoStop [Anastasiou et al.(2023)Anastasiou, Mayhall, Barnes, and Economou]anastasiou2023really author author P. G. Anastasiou, author N. J. Mayhall, author E. Barnes, and author S. E. 
Economou, @noop title How to really measure operator gradients in adapt-vqe (year 2023), https://arxiv.org/abs/2306.03227 arXiv:2306.03227 [quant-ph] NoStop [Mari et al.(2021)Mari, Bromley, and Killoran]2021_mari_param_shift_rule author author A. Mari, author T. R. Bromley, and author N. Killoran, title title Estimating the gradient and higher-order derivatives on quantum hardware, journal journal Physical Review A volume 103, https://doi.org/10.1103/physreva.103.012405 10.1103/physreva.103.012405 (year 2021)NoStop [Shkolnikov et al.(2021)Shkolnikov, Mayhall, Economou, and Barnes]shkolnikov2021avoiding author author V. O. Shkolnikov, author N. J. Mayhall, author S. E. Economou, and author E. Barnes, @noop title Avoiding symmetry roadblocks and minimizing the measurement overhead of adaptive variational quantum eigensolvers (year 2021), https://arxiv.org/abs/2109.05340 arXiv:2109.05340 [quant-ph] NoStop [Yordanov et al.(2021)Yordanov, Armaos, Barnes, and Arvidsson-Shukur]2021_yordanov_adapt author author Y. S. Yordanov, author V. Armaos, author C. H. W. Barnes, and author D. R. M. Arvidsson-Shukur, title title Qubit-excitation-based adaptive variational quantum eigensolver, https://doi.org/10.1038/s42005-021-00730-0 journal journal Communications Physics volume 4, pages 228 (year 2021)NoStop [Temme et al.(2017)Temme, Bravyi, and Gambetta]Temme_2017_zne author author K. Temme, author S. Bravyi, and author J. M. Gambetta, title title Error mitigation for short-depth quantum circuits, https://doi.org/10.1103/PhysRevLett.119.180509 journal journal Phys. Rev. Lett. volume 119, pages 180509 (year 2017)NoStop [tA v et al.(2021)tA v et al.]qiskit author author A. tA v et al., https://doi.org/10.5281/zenodo.2573505 title Qiskit: An open-source framework for quantum computing (year 2021)NoStop [van den Berg et al.(2023)van den Berg, Minev, Kandala, and Temme]VandenBerg2023 author author E. van den Berg, author Z. K. Minev, author A. Kandala, and author K. Temme, title title Probabilistic error cancellation with sparse Pauli–Lindblad models on noisy quantum processors, journal journal Nature Physics https://doi.org/10.1038/s41567-023-02042-2 10.1038/s41567-023-02042-2 (year 2023)NoStop [E.P. Jornan(1993)]Jordan93 author author E. W. E.P. Jornan, https://doi.org/https://doi.org/10.1007/978-3-662-02781-3 title The Collected Works of Eugene Paul Wigner (publisher Springer, Berlin, Heidelberg, year 1993)NoStop [Bravyi and Kitaev(2002)]Bravyi02_bk author author S. B. Bravyi and author A. Y. Kitaev, title title Fermionic quantum computation, https://doi.org/10.1006/aphy.2002.6254 journal journal Annals of Physics volume 298, pages 210 (year 2002)NoStop [Verstraete and Cirac(2005)]Verstraete_2005 author author F. Verstraete and author J. I. Cirac, title title Mapping local hamiltonians of fermions to local hamiltonians of spins, https://doi.org/10.1088/1742-5468/2005/09/P09012 journal journal Journal of Statistical Mechanics: Theory and Experiment volume 2005, pages P09012 (year 2005)NoStop [Whitfield et al.(2016)Whitfield, Havlí čček, and Troyer]Whitfield_2016 author author J. D. Whitfield, author V. c. v. Havlí čček, and author M. Troyer, title title Local spin operators for fermion simulations, https://doi.org/10.1103/PhysRevA.94.030301 journal journal Phys. Rev. A volume 94, pages 030301 (year 2016)NoStop [Setia et al.(2019)Setia, Bravyi, Mezzacapo, and Whitfield]Setia_2019 author author K. Setia, author S. Bravyi, author A. Mezzacapo, and author J. D. 
Whitfield, title title Superfast encodings for fermionic quantum simulation, https://doi.org/10.1103/PhysRevResearch.1.033033 journal journal Phys. Rev. Res. volume 1, pages 033033 (year 2019)NoStop [Chen and Xu(2022)]Chen_2022 author author Y.-A. Chen and author Y. Xu, https://doi.org/10.48550/ARXIV.2201.05153 title Equivalence between fermion-to-qubit mappings in two spatial dimensions (year 2022)NoStop [Nys and Carleo(2023)]Nys_2023 author author J. Nys and author G. Carleo, title title Quantum circuits for solving local fermion-to-qubit mappings, https://doi.org/10.22331/q-2023-02-21-930 journal journal Quantum volume 7, pages 930 (year 2023)NoStop [Linteau(2022)]github author author D. Linteau, @noop title adaptive-pvqd, howpublished <https://github.com/dalin27/adaptive-pvqd> (year 2022)NoStop [Johansson et al.(2012)Johansson, Nation, and Nori]qutip author author J. R. Johansson, author P. D. Nation, and author F. Nori, title title Qutip: An open-source python framework for the dynamics of open quantum systems, journal journal Comp. Phys. Comm. volume 183, https://doi.org/10.1016/j.cpc.2012.02.021 10.1016/j.cpc.2012.02.021 (year 2012)NoStop [Kingma and Ba(2014)]2014_kingma_adam_optimizer author author D. P. Kingma and author J. Ba, https://doi.org/10.48550/ARXIV.1412.6980 title Adam: A method for stochastic optimization (year 2014)NoStop
http://arxiv.org/abs/2307.01949v1
20230704225718
Impact of Higher-Order Structures in Power Grids' Graph on Line Outage Distribution Factor
[ "Nafis Sadik", "Mohammad Rasoul Narimani" ]
math.OC
[ "math.OC" ]
Power systems often include a specific set of lines that are crucial for the regular operations of the grid. Identifying the reasons behind the criticality of these lines is an important challenge in power system studies. When a line fails, the line outage distribution factor (LODF) quantifies the changes in power flow on the remaining lines. This paper proposes a network analysis from a local structural perspective to investigate the impact of local structural patterns in the underlying graph of power systems on the LODF of individual lines. In particular, we focus on graphlet analysis to determine the local structural properties of each line. This research analyzes potential connections between specific graphlets and the most critical lines based on their LODF. In this regard, we investigate N-1 and N-2 contingency analysis for various test cases and identify the lines that have the greatest impact on the LODFs of other lines. We then determine which subgraphs contain the most significant lines. Our findings reveal that the most critical lines often belong to subgraphs with a less meshed but more radial structure. These findings are further validated through various test cases. In particular, it is observed that networks with a higher percentage of ring or meshed subgraphs on their most important line (based on LODF) experience a lower LODF when that critical line is subject to an outage. Additionally, we investigate how the LODF of the most critical line varies among different test cases and examine the subgraph characteristics of those critical lines. DC load flow, Line outage distribution factor, Graph theory, Graphlets. § INTRODUCTION The power system is composed of interconnected networks, and the loss of even one line can potentially lead to a major blackout. As a result, it is crucial to identify lines that are critical to the network and have the potential to trigger a cascading failure and even a blackout. For that reason, remedial actions such as network redesign or transmission line capacity expansion are required <cit.>. Power system contingency analysis has lately resurfaced as the traditional grid is transitioning into a smart grid. In particular, as the grid transitions into a cyber-physical system, grid collapse can be caused by the failure of a few essential lines <cit.>. Numerous efforts have been expended on identifying critical lines in power systems <cit.>. Complex network theory has been leveraged by various studies to identify critical elements in power systems. Betweenness centrality, a measure based on shortest paths between different nodes in a graph, has been used to identify the most vulnerable lines in power systems <cit.>. Other graph theoretical methods, such as clustering algorithms, have also been employed to identify influential buses in the network <cit.>. These approaches focus on the topology of the network without considering the physics that govern it, i.e., power flow equations, line outage distribution factors, etc.
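As an illustration of the purely topological screening just mentioned, edge betweenness centrality can be computed in a few lines with networkx; the sketch below is ours, and the small edge list is a stand-in, since in practice the graph would be built from the bus/branch data of a test case such as the IEEE 30-bus system.

```python
import networkx as nx

# Stand-in topology; a real study would load the bus/branch list of a grid test case.
G = nx.Graph([(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 6), (5, 6), (6, 7), (7, 8)])

# Fraction of shortest bus-to-bus paths that pass through each line.
ebc = nx.edge_betweenness_centrality(G, normalized=True)
for line, score in sorted(ebc.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(line, round(score, 3))
```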
Line outage distribution factor(LODF) is an indicator that quantifies the redistribution of power flow amongst surrounding lines when a line outage happens <cit.>. It is a linear sensitivity factor that utilizes DC power flow <cit.>. LODF plays a vital role in optimal power flow (OPF) studies <cit.> by quantifying the impact of line outages on power flow distribution in the network. It assists in improving system reliability, congestion management, and optimal transmission planning by providing valuable insights into the network's behavior under various operating conditions. A combination of the line outage distribution factor and complex network analysis typically yields a better contingency analysis. To extract data from various levels of contingency, the group betweenness centrality concept can be used with the line outage distribution factor <cit.>. Nevertheless, to better analyze contingency analysis, LODFs among lines can be analyzed with the local structural properties of those lines. The literature suggests that there have been very few works that relate subgraph properties with power grid contingency analysis. Subgraph analysis is an approach that highlights the local structural properties of a network. Graphlets are induced connected subgraph. A subgraph pattern is called a motif when it is found too frequently in a network compared to random networks<cit.>. Motif analysis is also leveraged in power system contingency analysis studies to investigate the impact of the local structure of a network on power system characteristics. For instance, it has been discovered that robust and fragile electrical networks exhibit varying degrees of deterioration of motifs under attack<cit.>. However, network motifs represents more global properties rather than local properties of a network. In particular, it is a measure of how many times a subgraph is found on the whole network. This paper leveraged the local properties of power system's graph, in particular how many times a graphlet can be found in a line, in analyzing the LODF measure in power system. The LODF is used in this paper to determine the most critical lines in the network. In particular, N-1 contingency is applied to the test cases to find the loss of which line yields to higher LODF values. The N-2 contingency is also investigated in this paper. Because of the computational complexity of N-2 contingency analysis for larger test cases, this study only considers smaller test cases for analysis. Graphlet analysis of those lines is performed in order to find a relationship between lines with high LODFs and the local structure of network. It is explored whether any critical lines contain certain sorts of graphlets. This will expose any vulnerable network components that have the potential to cause a cascading failure. That will eventually provide power system operators with crucial recommendations to improve power system vulnerability in the event of a cascading failure. The following is how this document is structured. Section <ref> reviews LODF metric. Section <ref> explains how power networks can be interpreted as complex network. Section <ref> presents our methodology. Section <ref> shows the results and Section <ref> concludes the paper. § OVERVIEW OF LINE OUTAGE DISTRIBUTION FACTOR This section explains the concept of the line outage distribution factors (LODFs) that is employed in this study. To compute LODFs, we first need to consider Injection Shift Factors (ISF). 
The injection shift factor is a linear sensitivity factor which measures the sensitivities of active power line flows due to injection of active power bus injections <cit.>. Assume a power system consists of N buses and L lines. The injection shift factor ι^k_lm of line lm represents the sensitivity of the change in power flow in line lm due to a change in real power injection at bus k ∈ N. The injection shift factor matrix of a power system can be computed by DC power flow. ι = B^-1_n AB_l Here, B^-1_n ∈ℝ^L× L represents the original branch susceptance matrix, A∈ℝ^L× N denotes reduced incidence matrix, and B_l∈ℝ^L× N represents reduced nodal susceptance matrix. Power transmission distribution factor (PTDF) for line lm for incident z of real power change Δ P in bus c to bus d can be defined as, τ^z_lm = ι^c_lm - ι^d_lm Consequently, change of power flow in line lm for Δ P power flow change in bus c to bus d can be defined in terms of power as, Δω^z_l_m = τ^z_l_mΔ P PTDF matrix of the entire system can be obtained using (<ref>). τ_L = ι A^T For outage of line lo, LODF of line lm can be considered as ϕ^lo_lm. If pre-outage real power flow in line lo is ω_lo, change of power flow Δω^z(lo)_lm in line lm after the outage of line lo can be written in terms of LODF as follows <cit.> Δω^z(lo)_lm = ϕ^lo_lmω_lo Using equation (<ref>), equation (<ref>) can be extended as. ϕ^lo_lm = τ^z(lo)_lm/1-τ^z(lo)_lo In this paper, we first determine the most critical lines using LODFs. For a single test case consisting of N buses and L lines, the LODF matrix will be of size L × L. In particular, for a single line outage, there will be L-1 number of LODFs. The maximum LODF value is recorded among those L-1 LODFs. We conduct the N-1 contingency analysis for all lines in the network to determine maximum LODF for all contingencies. After finding LODF matrix, a limited number of lines with highest and lowest LODF are investigated from local structure point of view. A similar approach is taken for N-2 contingency. Following the outage of a line, it is determined that the outage of whichever line follows, in conjunction with the first, stresses the network the most. As a result, some combinations of lines for N-2 contingency can be obtained. Each combination gives a maximum LODF for an N-2 contingency. In this process, LODF helps to determine the most critical lines of a network in both N-1 and N-2 contingencies. In following sections, the local network characteristics of those critical lines are investigated. § POWER GRID AS A COMPLEX NETWORK Network Graphlets are building blocks of networks. Analysis of Graphlets is found to be an indispensable tool for understanding local network structure, in contrast to measures based on node degree distribution and its functions that primarily address a global network topology <cit.>. Examining the local structure of any grid requires power grid to be modeled as a complex network. The grid can be represented as an undirected graph G(V,E), where V represents the grid's buses and E represents the grid's lines. A graph G'(V',E') is a subgraph of graph G, G'⊆G, only when V'⊆V and E'⊆E. The subgraph G' can be called an induced subgraph of G if E' contains all edges e_uv∈ E such that u,v∈ V'. Graphs G' and G” can be called isomorphic if there exists a bijection h:V'→ V” such that any two adjacent nodes u,v ∈ V' of G' are also adjacent in G” after the mapping occurs. 
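The notions of induced subgraph and isomorphism just defined map directly onto standard graph-library primitives. The sketch below is illustrative only (it is not the enumeration algorithm used in this paper): on a made-up six-bus toy network it lists the connected induced 4-node subgraphs and classifies each one by isomorphism type with networkx.

```python
import networkx as nx
from itertools import combinations

# Made-up six-bus toy network: a 4-bus ring with a short radial feeder attached.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (3, 4), (4, 5)])
path4, ring4 = nx.path_graph(4), nx.cycle_graph(4)

for S in combinations(G.nodes, 4):
    H = G.subgraph(S)               # induced subgraph: every edge of G inside S is kept
    if not nx.is_connected(H):
        continue
    if nx.is_isomorphic(H, ring4):
        label = "4-node ring"
    elif nx.is_isomorphic(H, path4):
        label = "4-node path"
    else:
        label = "other 4-node graphlet"
    print(sorted(S), "->", label)
```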
If G_k = (V_k,E_k) is a k-node subgraph of G and if there exists an isomorphism between G_k and G' where G' ∈ G, then there is an occurrence of G_k in G. In this paper, we only investigate 4-node subgraphs. To this end, we use the FANMOD algorithm, represented in Algorithm <ref>, to enumerate subgraphs <cit.>. In this algorithm, first all 4-node subgraphs are enumerated and then classified. Here, the notation of N({v}) means neighbours of nodes v ∈ V. V_subgraph means set of four node connected vertices. Furthermore, if vertex w ∈ V/ V_subgraph , then N_excl(w,V_subgraph) denotes the exclusive neighbourhood of vertex w, in which no vertices belong to either V_subgraph or the neighbourhood of V_subgraph. After enumerating subgraphs, we apply subgraph isomorphism to classify all 4-node subgraphs. In general, there are six types of four-node connected subgraphs that can be found in any power grid network. Fig. <ref> shows four-node subgraphs which are labeled as M1-M6. Subgraphs M1, M2 and M3 represent radial structures, while M4, M5 and M6 symbolize ring and mesh structures. Different types of subgraph analysis have been done in the literature. There have been much research on graphlet analysis in a network. However, the impact of higher-order structures such as graphlets in power grids' network is less studied. In this study, our methodology focuses on how many times a graphlet can be found on a particular line. § METHODOLOGY §.§ N-1 Contingency The LODF calculates the redistribution of a line's load among the other lines. However, we observe that for outages of a few particular lines, the LODF of some other lines are larger. This essentially means that outages on those critical lines are putting a strain on the network. In other words, for the outage of those critical lines, LODF impacts on other lines are significantly higher. In this section, the graphlet characteristics of those critical lines is observed. In particular, we observe how many times a particular graphlet can be found on a line. For N-1 contingency, the most critical lines of the IEEE 30-bus network based on LODF impact are illustrated by red lines in Fig 2. Based on the LODF analysis, it can be said that outage on those lines strain the network most. From a graph theory point of view, graphlet analysis can be used to determine which 4-node graphlets those critical lines are belong to. As a consequence, the local structural properties of those critical lines will be revealed. In this connection, we determine how many times a particular graphlet is incident on a line. Table <ref> and Table <ref> represent graphlet characteristics of most and least critical lines based on N-1 LODF analysis for IEEE 30-bus network. When two tables are compared, it is clear that the local structural characteristics of the two distinct type of lines differ noticeably. In the table of most critical lines, there are no M3-M6 graphlets in any lines. On the contrary, there is an M4 ring graphlet present for all lines in the table of least critical lines. In other words, if a line is part of a ring graphlet M4, its outage is unlikely to have a significant influence on the network. In addition to that, from figure <ref>, it is also clear that there are no critical lines in the graph region where the M4 ring graphlet exists. As a result, the area is considerably stabilized by the presence of a ring structure. On the contrary, most of the critical lines are part of some radial structured graphlet such as M1 and M2. 
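The N-1 LODF screening used above can be reproduced with the standard DC-power-flow matrix construction behind the ISF/PTDF/LODF formulas of Section II. The sketch below is our illustration of that textbook construction (PTDFs from the reduced incidence and susceptance matrices, then the ratio in the LODF definition); the 4-bus meshed system, its reactances, and the function name `lodf_matrix` are invented for demonstration.

```python
import numpy as np

def lodf_matrix(n_bus, branches, slack=0):
    """Entry [l, o]: fraction of the pre-outage flow on branch o that shifts onto
    branch l when branch o is tripped (DC power flow, PTDF-based form)."""
    L = len(branches)
    b = np.array([1.0 / x for (_, _, x) in branches])      # branch susceptances
    A = np.zeros((L, n_bus))                               # branch-bus incidence matrix
    for l, (f, t, _) in enumerate(branches):
        A[l, f], A[l, t] = 1.0, -1.0
    Ar = A[:, [i for i in range(n_bus) if i != slack]]     # drop the slack bus
    Bbus = Ar.T @ np.diag(b) @ Ar                          # reduced nodal susceptance matrix
    ptdf = np.diag(b) @ Ar @ np.linalg.solve(Bbus, Ar.T)   # line-to-line transfer factors
    lodf = ptdf / (1.0 - np.diag(ptdf))                    # phi = tau / (1 - tau_lo), columnwise
    np.fill_diagonal(lodf, -1.0)                           # convention: the tripped line loses all flow
    return lodf

# invented 4-bus meshed example: (from_bus, to_bus, reactance in p.u.)
branches = [(0, 1, 0.10), (1, 2, 0.10), (2, 3, 0.10), (3, 0, 0.10), (0, 2, 0.20)]
print(np.round(lodf_matrix(4, branches), 3))
```

Ranking the columns of this matrix by their largest off-diagonal entry recovers the N-1 ordering of critical lines used in Tables I and II.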
In the following section, we will observe the local structural properties of lines with the highest LODF impact among different networks. §.§ N-2 Contingency In N-2 contingency, we observe network performance in the case of an outage of two lines. In N-1 contingency for a network of L lines, we have a LODF matrix of L × L size. However, for N-2 contingency, we have L number of L-1 sized matrices. From each of those L-1 sized matrices, we choose the row with highest LODF value. Consequently, we get L combination of two lines which actually represents lines from N-2 contingency. In particular, that means we first simulate the outage of a single line, then we observe the outage of which other line strains the network most. In that way, we get L combination of N-2 contingency lines and then we determine which single combination yields the maximum LODF impact. In Fig <ref>, line combinations 15-23 and 21-22 has the highest LODF among other combinations. If we look at the network from a structural point of view, we observe that those two lines do not belong to any ring or meshed graphlets. In the next section, we present the results of highest LODF impact line combination from N-2 contingency among different networks. § RESULTS In the previous section, we observed graphlet characteristics of the IEEE 30-bus network. Here, we apply the proposed approach on “pglib_793_goc” and “pglib_1354_pegase” as large networks from the PGLib-OPF benchmark library <cit.>. For “pglib_793_goc” test case, we observed that it has no M3, M4, M5, M6 graphlets in its most critical lines, while in it's least critical lines, M4 graphlets can be found. Similarly, for the case “pglib_1354_pegase”, we noticed that, there is a very small number of M3, M4, M5, M6 graphlets in it's most critical lines. In contrast, most of the least critical lines in this test case are belong to M3 and M4 graphlets. In order to generalize the finding, we investigate the proposed approach on different test cases with various sizes. To this end, we first determined the most critical lines of different test cases using N-1 LODF analysis. We then determined which graphlets they feature and the quantity of each graphlet present. In particular, total graphlet count of the most important line is determined. Then it is measured which graphlets account for what percentage of the entire graphlet count. This gives us the graphlet characteristics of that line. By taking the percentage of graphlets in a line, we can compare the results across different networks. The maximum LODF impact of most critical line of a network varies from network to network. Our objective is to relate the maximum LODF for a network possible and subsequent graphlet characteristics of the line for the outage of which the network experiences the maximum LODF. We present our findings in Fig. <ref>, where the size of the bubble represents each graphlet percentage of the most important line of a network. Also, the color of the bubbles changes with the maximum LODF of that most important line. From Fig. <ref>, it is evident that, cases that contain very high LODF impact line, contain no or very small percentages of ring or meshed graphlets such as M3, M4, M5 and M6 in their most critical line. Conversely, there are relatively higher percentages of ring or meshed graphlets in cases that have relatively low LODF impact lines. From Section <ref>, we get L combination of two N-2 contingency lines for a network consisted of L lines. 
The first line corresponds to N-1 contingency and the second line corresponds to N-2 contingency. For most combinations, we can observe certain N-2 lines recurring. However, we focus on the N-2 contingency line for the combination that produces the maximum LODF impact. Fig. <ref> shows the graphlet analysis of lines for N-2 contingency. From Fig. <ref> it can be noticed that, cases with very high LODF impact lines have less percentage of ring or meshed graphlets such as M3, M4, M5 and M6 in their most critical lines. This results can be easily extended to higher order contingency analysis and help power system operator to focus on specific part of the network for finding critical lines. Finding graphlets for larger test cases can be done offline which helps system operators to quickly determine potential critical lines in the system. § CONCLUSION Graph theoretical analysis has been extensively used to investigate power grid vulnerabilities. However, graphlet analysis in tandem with grid vulnerabilities is less studied. To address this gap, this paper first determines the most important line for N-1 contingency based on the LODF analysis. Then, it focuses on the graphlet characteristics of that critical line. This strategy is applied to various test cases and graphlet characteristics of the most important lines of those networks are investigated. This provides a comparison of the highest LODF possible for a network and graphlet features of the line for outages in which that LODF occurs. It is revealed that networks with a higher percentage of ring or meshed graphlets such as M3, M4, M5, M6 on their most important line experience less LODF for outage of that line. On the contrary, cases with very high LODF for outage on its most important line have very little to no percentages of ring or meshed graphlets in that line. This study also expands to, N-2 contingency where the same type of scenario is observed, which actually emphasizes the importance of ring or meshed graphlets in power grid networks for more resilient grids. The proposed graphlet analysis helps the power system operator to quickly determine possible candidates for critical lines. IEEEtran
http://arxiv.org/abs/2307.01395v1
20230703231347
Sparse approximation for t-statistics
[ "Micol Tresoldi", "Daniel Xiang", "Peter McCullagh" ]
math.ST
[ "math.ST", "stat.TH" ]
In the signal plus noise model, it is of interest to quantify the evidence that a signal is active given conditionally independent replicate observations Y_j = X + ε_j on the signal X at a particular site. We study the problem in which the signal distribution is sparse, and the error distribution has an unknown variance so that the null distribution of the standardized statistic is Student-t. The main contribution of this paper is a sparse-mixture approximation to the non-null marginal density of the t-ratio. This formula demonstrates the effect of low degrees of freedom on the Bayes factor, or the conditional probability that the site is active. We illustrate some differences on a HIV dataset for gene-expression data previously analyzed by <cit.>. § INTRODUCTION Consider a sparse signal plus Gaussian noise model on the real line, where an independent sample variance is also observed, Y = X + ε, ε ∼ N(0,σ^2), S^2 ∼ σ^2 χ_k^2. This model for (Y,S^2) typically arises where there is a need to summarize data from a Gaussian linear model, e.g., in high-throughput biology settings such as microarrays (<cit.>, <cit.>, <cit.>). A common assumption in the signal detection literature is to suppose that X is distributed according to an `atom and slab' mixture distribution <cit.> X ∼ π_0 δ_0 + (1-π_0)F_1 with an atom π_0 at zero containing most of the probability mass, and a diffuse component F_1 to model the non-null signal. In this note, we work with a more general concept of sparsity <cit.> and derive an approximation to the marginal density of the t-ratio Y√(k/S^2) when the distribution of X is sparse. To this end, up to a re-scaling of X, it suffices to consider a special case of (<ref>) with σ=1. In Section <ref> we review the definition of statistical sparsity and derive a mixture representation for the distribution of the t-ratio when the signal is sparse. In Section <ref> we compute conditional probabilities using this approximation and make a comparison with the probability integral transform numerically and on a real dataset of gene expression levels from HIV patients. Section <ref> concludes with a brief discussion, and Appendix <ref> contains proofs of our results. § STATISTICAL SPARSITY AND T-RATIOS A family of distributions (P_ρ) has a sparse limit with rate ρ and exceedance measure H if ∫ w(x) P_ρ(dx) = ρ ∫_{ℝ∖{0}} w(x) H(dx) + o(ρ) as ρ → 0 for any function w: ℝ → ℝ that is bounded, continuous and O(x^2) at the origin. A measure H satisfying ∫_{ℝ∖{0}} (1-e^{-x^2/2}) H(dx) = 1 is called a unit exceedance measure. The inverse-power measure with index d ∈ (0,2), defined by H_d(dx) ≡ C_d · dx/|x|^{d+1}, C_d ≡ d · 2^{d/2-1}/Γ(1-d/2), satisfies (<ref>). The t-distribution on d ∈ (0,2) degrees of freedom and scale parameter σ > 0 has a sparse limit with rate ρ = d^{d/2} Γ((d+1)/2)/(C_d √(π) Γ(d/2)) · σ^d and inverse-power exceedance H_d.
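The normalization of the inverse-power exceedance is easy to check by quadrature. The sketch below is our illustration, assuming the reading C_d = d · 2^{d/2-1}/Γ(1-d/2) used above; it evaluates ∫ (1-e^{-x^2/2}) H_d(dx) for a few values of d and should return values close to one for any d ∈ (0,2).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def C(d):
    # normalising constant of the unit inverse-power exceedance H_d (as read above)
    return d * 2 ** (d / 2 - 1) / gamma(1 - d / 2)

for d in (0.5, 1.0, 1.5):
    f = lambda x, d=d: (1.0 - np.exp(-x**2 / 2)) * C(d) * x ** (-d - 1)
    total = 2 * (quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0])   # symmetry in x
    print(d, round(total, 6))   # approximately 1 for a unit exceedance measure
```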
Now suppose X ∼ P_ρ, a symmetric distribution having a sparse limit with rate ρ and unit exceedance H, and the observation Y is generated from the sparse signal-plus-standard Gaussian model, Y = X + , ∼ N(0,1). The marginal density of Y is ∫ϕ(y - x) P_ρ( x) = ϕ(y) ∫ e^-x^2/2 P_ρ( x) + ϕ(y) ∫ (e^xy - 1) e^-x^2/2 P_ρ( x) = ϕ(y) ∫ ( e^-x^2/2-1+1 ) P_ρ( x) + ϕ(y) ∫(cosh(xy) - 1) e^-x^2/2 P_ρ( x) = ϕ(y)(1 - ρ) + ρϕ(y) ∫(cosh(xy) - 1) e^-x^2/2 H( x) + o(ρ) = (1 - ρ)ϕ(y) + ρϕ(y) ζ(y) + o(ρ), where the zeta function ζ is determined by the exceedance measure, ζ(y) ∫_\{0} (cosh(xy)-1) e^-x^2/2 H ( x). Note that ∫ϕ(y) ζ(y) y = 1 implies that, up to first order in the sparsity rate, Y is distributed according to a two-component mixture distribution, where ζ is the density ratio of the non-null and null components. §.§ Distribution of the t-ratio for a sparse signal Let Y = X + ε be distributed according to the two-component mixture (<ref>), and let S^2 ∼χ_k^2 be a scalar independent of Y. The ratio T Y √(k/S^2) also has a mixture representation where the null component is the t-distribution on k degrees of freedom, denoted by f_0, and the non-null component is determined by the zeta function relative to f_0. When the signal distribution X ∼ P_ρ has an inverse-power exceedance measure, the Student-t zeta function has an explicit analytic form. Suppose the observation is distributed as Y ∼ (1-ρ)ϕ+ρψ, ψ(y) ϕ(y) ζ(y) ζ(y) = ∫_\{0}(cosh(xy)-1) e^-x^2/2H_( x) for a unit inverse-power exceedance H_ with index ∈ (0,2), and Y is independent of S^2 ∼χ_k^2. Then T = Y √(k/S^2) is marginally distributed according to the mixture, (1-ρ) f_0(t) + ρ∑_r=1^∞ζ_d,r f_r(t) where the coefficients ζ_d,r are non-negative and add to one, and the densities f_r are given by ζ_,r = -(-)r/2^r r!, f_r(t) = t^2r/( 1 + t^2/k)^r+1/2+k/2×Γ(1/2)/k^r + 1/2π^1/2 B(r+1/2, k/2), where (-)r (-d)(-d+2)⋯(-d+2(r-1)) for r≥ 1, (-) 0 1 by convention, and B(a,b) Γ(a)Γ(b)/Γ(a+b) is the Beta function. A convenient way to write the marginal density (<ref>) is the binary mixture (1-ρ)f_0(t) + ρ f_0(t) ζ_k(t), ζ_k(t) ∑_r=1^∞ζ_d,r t^2r/ 1 r(1+k) r/(k + t^2)^r, where we have used formula (<ref>) to deduce a crucial consequence for the present work, namely that the density ratio f_r(t) / f_0(t) for fixed k ≥ 1 is f_r(t)/f_0(t) = t^2r/ 1 r(1+k) r/(k + t^2)^r. These are the key elementary functions that arise in the definition of the zeta function when the null distribution is Student-t rather than standard normal. §.§ Probability integral transformation Let g be the monotone transformation → that transforms Student-t to standard normal, i.e., X ∼ f_0 implies g(X) ∼ N(0,1), so g is the probability integral transform that sends t-scores to z-scores. More explicitly, g(y) = Φ^-1(F_0(y)), where F_0 is the cumulative distribution function (cdf) of the t-distribution with k degrees of freedom, and Φ is the cdf of a standard normal distribution. Let ∈ (0,2), and suppose T is distributed according to the t-mixture (<ref>). Then the transformed variable Z = gT is distributed as a mixture with density ϕ(z)(1-ρ + ρζ_k(g^-1 z)), where g^-1 z is the t-score. By contrast, if Y is distributed as in Theorem <ref>, it has density ϕ(y) (1-ρ+ρζ_∞(y) ), where ζ_∞(y) lim_k →∞ζ_k(y). In practice we may observe independent copies (Y_i,S_i^2) for i=1,…,n from the model (<ref>), with a different signal X_i for each site. Classical procedures in the multiple testing literature (e.g. <cit.>, <cit.>) typically take as input a list of p-values. 
This reduction of the data, obtained by the probability integral transform of the t-scores, subsequently ignores the degree of freedom parameter k in the calculation of false discovery rates. As expression (<ref>) indicates, the density of the non-null component depends on k, whereas running the p-values through a general purpose multiple testing procedure such as Lindsey's method (<cit.>, <cit.>, <cit.>) or the BH procedure effectively assumes that the convolution occurs in the space of z-scores. The formula obtained in Theorem <ref> for the zeta function relative to the Student-t null facilitates a comparison between these two approaches, discussed in the next section. § NUMERICAL COMPARISON Although there are close similarities, the probability-integral transformed t-mixture is not the same as the standard Gaussian mixture (1-ρ)ϕ(z) + ρϕ(z) ζ_∞(z). The zeta-function for the transformed mixture is ζ_k(g^-1 z), which is close to, but not the same as ζ_∞(z). Consideration of the difference may arise from an analysis that transforms the t-scores into z-scores, assuming for simplicity that the convolution takes place in the space of these z-scores. The goal in such an analysis is to calculate the probability that an observation arose from the null component. For instance, the two-groups model is a description of the data generating process that posits a latent variable indicating the component from which the observation arose, H ∼Bernoulli(ρ) Z | H ∼ϕ if H=0 ψ if H=1, where ψ(z) = ϕ(z)ζ(z) is an alternative density satisfying ψ(0)=0. A quantity of interest is the local false discovery rate (<cit.>), defined as the conditional probability that the latent variable H is zero, (z) = (H=0 | Z=z) = (1-ρ)ϕ(z)/(1-ρ)ϕ(z)+ρψ(z). Note that in the convolutional model Z=X+, the event that H=0 is not the same as X=0. In fact, the calculation leading to expression (<ref>) shows that (H=1) corresponds to (1-e^-X^2/2), while X=0 may be a zero-probability event. In general, expression (<ref>) is an upper bound on the conditional probability (X=0 | Z). The zeta function together with the sparsity rate determine the conditional odds ratio, 1-(z)/(z) = ρ/1-ρ×ζ(z). For small degrees of freedom (≤ 10), the conditional odds that a signal is active can be diminished by roughly 20% in the region of interest (e.g. 3 ≤ |z| ≤ 10). The ratios ζ_∞(z) / ζ_k(t), with z = g(t), are shown below for k=10 and three values of : =5pt [ 5l Ratios ζ_∞(z) / ζ_10(t) for k=10; t z =0.5 =1.0 =1.5; 3.0 2.47 0.84 0.94 1.05; 4.0 3.02 0.79 0.93 1.10; 5.0 3.46 0.73 0.90 1.12; 7.0 4.12 0.65 0.85 1.12; 10.0 4.80 0.57 0.82 1.18; 15.0 5.51 0.52 0.85 1.37; ] It is apparent that the ratio ζ_∞(z) / ζ_10(t) may be appreciably less than one for heavy-tailed signals with d<1, or appreciably greater than one for shorter-tailed signals with d > 1. Also, the ratio is not monotone as a function of the argument. The region of most interest for comparison is typically 3 ≤ |z| ≤ 6. Ordinarily, k=100 and k=∞ are effectively equivalent for most statistical purposes. In sparsity calculations, however, the dependence on the degrees of freedom is far from negligible in the region of interest, even for k above 100. For =1, the zeta-ratios ζ_∞(z)/ζ_100(z) at z=3,4,5,6 are 1.14, 1.62, 3.37 and 11.68 respectively. §.§ Comparison on HIV data The HIV data, taken from Chapter 6 of <cit.>, is a case-control study with gene-activity levels measured at 7680 genomic sites. 
For each site we compute a difference between the average gene expression levels for case and control, consisting of 4 HIV-positive and 4 HIV-negative individuals, respectively. The histogram of resulting differences is asymmetric, and although the methods we demonstrate in this section here assume a symmetric signal distribution, we still find it worthwhile to make the comparison on this dataset, as it has a low degree of freedom (k=6) for the pooled variance estimate. In the analysis by <cit.>, the goal of the study was to identify a small subset of the genes that are potentially relevant towards understanding HIV. Each difference was divided by a pooled standard error to obtain a t-score, and these were transformed into z-scores via Z_i = Φ̅^-1(1-F_0(T_i)), where Φ̅ = 1- Φ is the complement of the standard normal cdf. After this pre-processing step, the (Z_i) were viewed as independent samples from a two-groups model with standard Gaussian null component, H_i ∼Bernoulli(ρ) Z_i | H_i ∼ (1-H_i) ϕ + H_i ψ, independently for an alternative distribution ψ satisfying ψ(0)=0. This zero-density assumption is made to identify the parameters and is commonly made in the two-groups model (see e.g. <cit.>, <cit.>). If ψ is the normal mixture defined by the inverse-power exceedance with index ∈ (0,2), the maximized log likelihood relative to the null model is 48.23, which occurs at ρ̂_z = 0.0059, _z = 1.09. In formula (<ref>), these values yield an estimate ℓ̂_z,i of the lfdr for each site i=1,…,7680. Assuming the null component is instead Student-t, i.e., each T_i comes from the two-groups model (<ref>), the maximized log likelihood is 56.13, which occurs at ρ̂_t = 0.0045, _t = 0.60. Combining these estimates with the formula for the zeta function when the null is Student-t gives a very similar set of local fdr estimates ℓ̂_t,i1-ρ̂_t/1-ρ̂_t + ρ̂_t ζ_k(T_i), i=1,…,7680. The local fdr estimates for the sites with the top differences |ℓ̂_z,i-ℓ̂_t,i| are displayed below. =7pt [ 5l Local fdr values for the top 6 differences; Z_i T_i ℓ̂_z,i ℓ̂_t,i |ℓ̂_z,i-ℓ̂_t,i|; -3.98 -9.69 0.44 0.35 0.095; -3.94 -9.38 0.48 0.39 0.094; 4.11 10.72 0.34 0.24 0.092; -3.90 -9.13 0.51 0.42 0.091; -4.26 -12.01 0.23 0.16 0.077; -4.42 -13.63 0.14 0.09 0.055; ] The smallest fitted ℓ̂_t,i values are approximately one half the smallest fitted ℓ̂_z,i values. The BH procedure at level α=0.1 yields 16 rejections, and the ℓ̂_t,i values among these range from 0.00 to 0.42, with an average of 0.108. Within the BH(0.1) set, the ℓ̂_z,i values range from 0.00 to 0.51, with an average of 0.144. §.§.§ Fixed =1 In this subsection we also compare the lfdr estimates with =1 fixed. The maximum likelihood estimates of the sparsity rates are ρ̂_z = 0.0053 and ρ̂_t = 0.0078, while the maximized log likelihoods are 48.13 and 52.99 relative to the null (ρ=0). Counter-intuitively, the resulting top differences are a little larger. =8pt [ 5l Local fdr for top 6 differences with =1; Z_i T_i ℓ̂_z,i ℓ̂_t,i |ℓ̂_z,i-ℓ̂_t,i|; -3.90 -9.13 0.52 0.39 0.130; -3.93 -9.38 0.49 0.36 0.128; -3.98 -9.69 0.45 0.33 0.125; -3.61 -7.41 0.72 0.60 0.116; 4.11 10.72 0.34 0.24 0.107; -3.45 -6.64 0.80 0.70 0.096; ] The values of ℓ̂_t,i among the 16 BH rejections now range from 0.00 to 0.39 with an average value of 0.104, whereas the values ℓ̂_z,i among the rejections range from 0.00 to 0.52 with an average value of 0.147. 
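As a complement to the fits above, the following Python sketch (ours, not the analysis code used for the HIV study) evaluates the truncated Student-t zeta function of the theorem and the resulting local fdr, together with the probability integral transform g(T) = Φ^{-1}(F_0(T)). The truncation length and the example t-scores are arbitrary choices; ρ̂_t = 0.0045, d̂_t = 0.60 and k = 6 are the fitted values quoted above.

```python
# Sketch of the Student-t zeta function zeta_k(t) and the local fdr used above.
# The coefficients are built iteratively from
#   zeta_{d,1} = d/2,   zeta_{d,r} = zeta_{d,r-1} * (2(r-1) - d) / (2r),
# and each series term carries the factor  t^2 (k + 2r - 1) / ((2r - 1)(k + t^2))
# relative to the previous one.  Truncation length and t-scores are illustrative.
import numpy as np
from scipy.stats import t as student_t, norm

def zeta_k(tval, d, k, rmax=400):
    coef = d / 2.0
    term = tval**2 * (k + 1.0) / (k + tval**2)
    total = coef * term
    for r in range(2, rmax + 1):
        coef *= (2.0 * (r - 1) - d) / (2.0 * r)
        term *= tval**2 * (k + 2.0 * r - 1.0) / ((2.0 * r - 1.0) * (k + tval**2))
        total += coef * term
    return total

def lfdr_t(tval, rho, d, k):
    return (1.0 - rho) / (1.0 - rho + rho * zeta_k(tval, d, k))

rho_t, d_t, k = 0.0045, 0.60, 6
for T in (3.0, 5.0, 9.0):
    z_score = norm.ppf(student_t.cdf(T, df=k))   # probability integral transform g(T)
    print(f"T = {T:4.1f}   z = g(T) = {z_score:5.2f}   lfdr = {lfdr_t(T, rho_t, d_t, k):.3f}")
```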
§ DISCUSSION In this paper, we derived a mixture representation of the non-null density of a t-ratio when the signal is sparse, and illustrated the maximum-likelihood procedure for estimating the local false discovery rate when the signal distribution has an inverse-power exceedance measure. The formula for the density is determined by the Student-t zeta function and the sparsity rate ρ, and depends explicitly on the degrees of freedom parameter k. When k →∞, the formula recovers the Bayes factor for the standard Gaussian mixture in the sparse setting. For small k, we have demonstrated differences in the deduced local false discovery rates, both numerically in the region of interest, and on a dataset of high-throughput gene expression levels of HIV patients. Although neither model accommodates the asymmetry that is present in the HIV data, the Student t-model fits appreciably better than the transformed z-model in terms of the log-likelihood, and the average fitted lfdr-value within typical BH_α subsets is a reasonably close match with α. The analysis presented here is agnostic to the assumption of equal variances across genes. If there were reason to believe that the variance was constant across sites, we could pool the estimates, yielding a scaled χ^2 variable with essentially infinite degrees of freedom. Our analysis also assumes that the sparsity rate is the same at every site, and the joint distribution of (X_i, Y_i) depends on (ρ, d, σ_i). But the local false discovery formula (<ref>) is agnostic on the question of independence from one site to another. § PROOFS It follows from Lemma <ref> that ψ is a mixture ψ(y) = ∑_r=1^∞ζ_,r· g_r(y) , g_r(y) y^2rϕ(y)/ 1 r. Note that g_r is a probability density since ∫ y^2rϕ(y) y = 1 r is the formula for a Gaussian moment of even degree. If X_r ∼ g_r and S^2 ∼χ^2_k, then by Lemma <ref>, the density of T_r = X_r √(k/S^2) is given by f_r(t) = t^2r/( 1 + t^2/k)^r+1/2+k/2×Γ(1/2)/k^r + 1/2π^1/2 B(r+1/2, k/2), so the marginal distribution of T = Y√(k/S^2) can be written (1-ρ) f_0(t) + ρ∑_r=1^∞ζ_,r f_r(t). The density of Z is [(1-ρ) f_0(g^-1z) + ρ f_0 (g^-1z) ζ_k(g^-1z)](g^-1)'(z) By the inverse function theorem, (g^-1)'(z) = 1/g'(g^-1z) = ϕ(z)/f_0(g^-1z), which implies the first formula. For the second formula, note that by Lemma <ref>, the density of Y can be written (1-ρ)ϕ(y)+ρϕ(y)ζ(y), where ζ(y) = ∑_r=1^∞ζ_,ry^2r/ 1 r, ζ_,r = - (-d) r/2^r r!. Now since lim_k→∞(1+k) r/(k+y^2)^r = 1 for fixed r∈ and y ∈, ζ_k(y) = ∑_r=1^∞ζ_,ry^2r/ 1 r(1+k) r/(k+y^2)^r→∑_r=1^∞ζ_,ry^2r/ 1 r = ζ(y) as k →∞, since the sequence (1+k) r/(k+y^2)^r is eventually monotone in k, and the series is convergent for each fixed k. Suppose Y distributed Y ∼ (1-ρ)ϕ+ρψ, ψ(y) ϕ(y) ζ(y) ζ(y) = ∫_\{0}(cosh(xy)-1) e^-x^2/2H_( x) for a unit inverse-power exceedance H_ with index ∈ (0,2). Then ζ(y) = ∑_r=1^∞ζ_,ry^2r/ 1 r, ζ_,r -(-)r/ 2^r r! . By definition, the zeta function for Y is ζ(y) = ∫_\{0} (cosh(xy)-1)e^-x^2/2 H_( x) = ∫_\{0}∑_r=1^∞(xy)^2r/(2r)! e^-x^2/2 H_( x) = ∑_r=1^∞y^2r/ 1 r· 1 r/(2r)!∫_\{0}x^2r e^-x^2/2 H_( x) . It remains to show that 1 r/(2r)!∫_\{0} x^2re^-x^2/2 H_( x) = -(-)r/ 2^r r! . The left hand side can be evaluated 1 r/(2r)!∫_\{0} x^2r e^-x^2/2 H_ ( x) = 1 r/(2r)!∫_\{0} x^2r e^-x^2/2 2^/2 - 1/Γ(1-/2) x/|x|^1+ = 2^/2/-Γ(-/2)2^r r!∫_\{0} |x|^2r-1- e^-x^2/2 x, using the definition Γ(z+1)=zΓ(z). Now using that (|Z|^p) = 2^p/2Γ( p+1/2)/√(π) for p > -1 and Z ∼ N(0,1), the above becomes = 2^/2√(2π)/-Γ(-/2)2^r r!·2^2r-1-/2Γ( 2r-1-+1/2)/√(π) = Γ( r-/2)/-Γ(-/2) r! 
=(2-)⋯ (2r-2-)/2 · 4 ⋯ (2r)= -(-)r/ 2^r r! which are monotone decreasing at rate O(r^-1-/2) for large r. Suppose X_1∼ g_r where g_r(x) = x^2rϕ(x)/ 1 r and X_2^2 ∼χ_k^2 independently. Then the density of T_1 X_1 √(k/X_2^2) is given by f_r(t) = Γ(1/2)/k^r + 1/2π^1/2 B(r+1/2, k/2)·t^2r/(1+t^2/k)^r+1/2+k/2. Independence implies that the joint density of X_1, X_2 on ^d×^+ is the product x_1^2r e^-x_1^2/2 x_2^k/2-1 e^-x_2/2×1/(2π)^1/2 1 r1/2^k/2Γ(k/2). The transformation (x_1, x_2) ↦ (t_1, t_2) = (x_1 √(k/x_2), x_2) has Jacobian √(t_2/k), so the joint distribution of the transformed variables is t_1^2r (t_2/k)^r e^-t_2 (t_1^2 / k + 1)/2 t_2^k/2-1√(t_2/k)×1/(2π)^1/2 1 r 2^k/2Γ(k/2) = 1/k^r + 1/2(2π)^1/2 1 r 2^k/2Γ(k/2) t_1^2r· t_2^r+1/2+k/2 - 1 e^-t_2 (t_1^2 / k + 1)/2 . We recognize the pdf of the Gamma(r+1/2+k/2,(1+t_1^2/k)/2) distribution in the above expression, and integrate over t_2∈^+ to obtain the marginal density of T_1 at t_1∈: f_r(t_1) = t_1^2r/( 1 + t_1^2/k)^r+1/2+k/2×Γ(r+1/2+k/2) 2^r+1/2+k/2/k^r + 1/2 (2π)^1/2 1 r 2^k/2Γ(k/2) = t_1^2r/( 1 + t_1^2/k)^r+1/2+k/2×Γ(1/2)/k^r + 1/2π^1/2 B(r+1/2, k/2), where B(·,·) is the beta function. Note that f_0(·) is the Student-t density on k degrees of freedom in . dcu
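As a numerical footnote to the Appendix, the Monte Carlo sketch below (ours; r, k, the sample size and the histogram grid are arbitrary illustrative choices) checks the lemma directly: drawing X_r with density g_r — equivalently |X_r| a chi variable on 2r+1 degrees of freedom with a random sign — and an independent S² ~ χ²_k, the ratio X_r√(k/S²) should be distributed according to f_r.

```python
# Monte Carlo check of the lemma: with |X_r| ~ chi_{2r+1} (random sign) and
# S^2 ~ chi^2_k independent, T_r = X_r sqrt(k / S^2) should have density f_r.
import numpy as np
from scipy.stats import chi2
from scipy.special import beta

rng = np.random.default_rng(0)
r, k, n = 2, 6, 200_000

x = np.sqrt(chi2.rvs(2 * r + 1, size=n, random_state=rng))
x *= rng.choice([-1.0, 1.0], size=n)
s2 = chi2.rvs(k, size=n, random_state=rng)
t_samples = x * np.sqrt(k / s2)

def f_r(t):
    const = 1.0 / (k ** (r + 0.5) * beta(r + 0.5, k / 2.0))   # Gamma(1/2)/pi^(1/2) = 1
    return const * t ** (2 * r) / (1.0 + t ** 2 / k) ** (r + 0.5 + k / 2.0)

hist, edges = np.histogram(t_samples, bins=np.linspace(-8.0, 8.0, 41), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
print("max |histogram - f_r| =", np.max(np.abs(hist - f_r(mid))))   # small, up to MC/binning error
```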
http://arxiv.org/abs/2307.01261v2
20230703180002
Josephson diode effects in twisted nodal superconductors
[ "Pavel A. Volkov", "Étienne Lantagne-Hurtubise", "Tarun Tummuru", "Stephan Plugge", "J. H. Pixley", "Marcel Franz" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall", "cond-mat.str-el" ]
Department of Physics, Harvard University, Cambridge, Massachusetts, 02138 USA Department of Physics, University of Connecticut, Storrs, Connecticut 06269, USA Department of Physics and Astronomy, Center for Materials Theory, Rutgers University, Piscataway, NJ 08854, USA Department of Physics and Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA Department of Physics and Astronomy & Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada Department of Physics, University of Zurich, Winterthurerstrasse 190, Zurich 8057, Switzerland Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands Department of Physics and Astronomy, Center for Materials Theory, Rutgers University, Piscataway, NJ 08854, USA Center for Computational Quantum Physics, Flatiron Institute, 162 5th Avenue, New York, NY 10010 Department of Physics and Astronomy & Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada Recent Josephson tunneling experiments on twisted flakes of high-T_c cuprate superconductor Bi_2Sr_2CaCu_2O_8+x revealed a non-reciprocal behavior of the critical interlayer Josephson current – i.e., a Josephson diode effect. Motivated by these findings we study theoretically the emergence of the Josephson diode effect in twisted interfaces between nodal superconductors, and highlight a strong dependence on the twist angle θ and damping of the junction. In all cases, the theory predicts diode efficiency that vanishes exactly at θ = 45^∘ and has a strong peak at a twist angle close to θ = 45^∘, consistent with experimental observations. Near 45^∘, the junction breaks time-reversal symmetry spontaneously. We find that for underdamped junctions showing hysteretic behavior, this results in a dynamical Josephson diode effect in a part of the -broken phase. The direction of the diode is trainable in this case by sweeping the external current bias. This effect provides a sensitive probe of spontaneous -breaking. We then show that explicit -breaking perturbations with the symmetry of a magnetic field perpendicular to the junction plane lead to a thermodynamic diode effect that survives even in the overdamped limit. We discuss an experimental protocol to probe the double-well structure in the Josephson free energy that underlies the tendency towards spontaneous -breaking even if is broken explicitly. Finally, we show that in-plane magnetic fields can control the diode effect in the short junction limit, and predict the signatures of explicit -breaking in Shapiro steps. Josephson diode effects in twisted nodal superconductors Marcel Franz August 1, 2023 ======================================================== § INTRODUCTION Twisted nodal superconductors have emerged as a promising platform to engineer exotic forms of superconductivity <cit.>, capable of hosting topological phases with potentially large energy scales inherited from constituent high-T_c materials such as monolayer cuprate Bi_2Sr_2CaCu_2O_8+x (BSCCO) <cit.>. In particular, time-reversal symmetry breaking (TRSB), a prerequisite for chiral topological superconductivity, has been predicted to occur spontaneously <cit.> for twist angles θ close to 45^∘. 
The mechanism driving this unconventional transition is the second harmonic of the interlayer Josephson current-phase relation (CPR), that favors a non-trivial (i.e., different from 0 or π) phase difference across the junction in equilibrium <cit.>. Recent Josephson experiments <cit.> at the twisted interface between thin flakes of BSCCO have provided evidence for this second harmonic. In particular, anomalous Fraunhofer patterns in the presence of in-plane magnetic fields and fractional Shapiro steps under microwave driving are consistent with the second harmonic being dominant near θ = 45^∘. However, neither of these observations are directly sensitive to TRSB at the interface. Moreover, it is possible theoretically to have dominant second harmonic in CPR that does not lead to spontaneous TRSB <cit.>. Additional complication arises due to the two-dimensional nature of the interface where TRSB occurs, precluding bulk probes, such as muon spin resonance or nuclear spin resonance. Therefore, more direct probes of TRSB are required. Some observables proposed to detect TRSB at the interfaces include polar Kerr effect measurements <cit.> and spontaneous currents around impurities or edges <cit.>, that may be detected using SQUID magnetometry <cit.> or oscillations in magnetic field <cit.>. Here we argue that a particularly sensitive probe of TRSB in superconductors is the Josephson diode effect (JDE), whereby the critical current in a Josephson junction becomes dependent on the current polarity. Conceptually, this follows from the simple observation that current is odd under both time reversal and inversion and hence the diode effect can be present only when both symmetries are broken <cit.>. Non-reciprocal transport properties of superconductors have been explored recently in a variety of setups where and are broken. In non-centrosymmetric superconductors, supercurrent diode effects can be induced by applying an external magnetic field or by proximitizing with a magnetic material. This was observed recently in an experiment on a Nb/V/Ta superlattice without an inversion center <cit.>, in magnetic Josephson junctions built from twisted bilayer graphene <cit.> or d-wave superconductors on top of topological insulators <cit.>, in InAs quantum wells <cit.>, in NbSe_2 nanowires <cit.> or films <cit.>, and in the Dirac semimetal NiTe_2 <cit.>. Recent theoretical works on the JDE in inversion-broken superconductors under an external magnetic field, including Rashba superconductors, were also reported <cit.>. SC diode effects can also occur through spontaneous breaking, which gives rise to a hysteresis loop in the diode response as a function of applied magnetic field, as reported in experiments on alternating-twist trilayer graphene <cit.>. Another recent experiment on TMD-based junctions reports JDE without a clear source of TRSB, which is difficult to reconcile with the basic symmetry requirements mentioned above <cit.> and could be related to theory ideas developed in <cit.>. Other relevant theory work on JDE include Refs. <cit.>. In this work, we investigate the JDE in twisted c-axis cuprate junctions, schematically depicted in Fig. <ref>(a), with the aim of providing a theoretical background for recent experimental results <cit.>. In particular, Ref. <cit.> has reported a diode effect induced and controlled by magnetic field, whereas Ref. <cit.> reported a diode effect in the absence of magnetic field. Ref. 
<cit.>, in addition to the above effects, observed a zero-field diode effect that can be trained by a directed current sweeping. We base our analysis on the mean-field theory of twisted d-wave superconductors following Refs. <cit.>. This approach famously predicts a strong suppression of the Josephson critical current for junctions with a twist angle θ close to 45^∘. Importantly, for these values of the twist angle the remnant value of the critical current is predicted to be due to a second-harmonic Josephson effect generated by Cooper pair co-tunneling. Both of these features were for the first time clearly observed in recent experiment <cit.>. Data presented in Refs. <cit.> on nominally similar samples showed only moderate suppression and were interpreted as evidence for an s-wave pairing component. However, experiments that could distinguish the second Josephson harmonic close to θ=45^∘ have not been performed in those works, leaving open a possibility that the same physics is being realized. Our analysis identifies two different pathways that can engender non-reciprocal currents in twisted d-wave junctions: (i) Dynamical JDE brought about by the spontaneous TRSB and (ii) Thermodynamic JDE relying on explicit TRSB unrelated to the Josephson physics. In the former the polarity of the diode depends on its history, is trainable with current biasing and allows for detection of spontaneous TRSB in Josephson experiments. The latter implies a `memory effect', whereby the junction exhibits the same polarity of JDE after is has been cycled above the superconducting critical temperature T_c or driven into the resistive state by exceeding its critical current. Dynamical JDE arises as a result of the characteristic double-minimum structure of the free energy in the -broken phase, predicted to occur in bilayer d-wave junctions in the vicinity of θ=45^∘. Importantly, because of kinematic and symmetry constraints the effect vanishes at 45^∘ twist and extends over a subregion of the -broken phase as illustrated in the phase diagram Fig. <ref>(b). In the presence of explicit TRSB, on the other hand, the thermodynamic JDE occurs at all twist angles except θ=0^∘,45^∘ and at all temperatures below T_c. Its strength, as measured by the figure of merit η defined below, is found to peak in the vicinity of θ=45^∘. The rest of this paper is organized as follows. In Sec. <ref> we introduce the Ginzburg-Landau (GL) theory formalism appropriate for twisted nodal superconductors and discuss the relevant symmetries of the free energy in the context of the JDE. In Sec. <ref> we investigate the dynamical JDE and explain how it can be used as a sensitive probe of spontaneous -breaking. In Sec. <ref> we consider the explicit breaking of and its consequences for both thermodynamic and dynamical diode effects. Detailed experimental protocols that allow to distinguish the two categories of JDEs are contrasted in Sec. <ref>. Finally, we discuss the consequences of explicit breaking for Fraunhofer patterns and Shapiro steps measurements in twisted junctions in Sec. <ref> and present concluding remarks in Sec. <ref>. § SETUP AND SYMMETRIES We start from the Ginzburg-Landau description of twisted superconductors and derive the form Josephson energy of the twisted interface. While this approach is strictly valid only close to T_c, the resulting description of the Josephson energy can be shown to hold also at low temperatures for weak tunneling <cit.>. 
We can write the Ginzburg-Landau free energy for twisted bilayer d-wave superconductors (omitting gradient terms) as [ψ_1, ψ_2] = _0[ψ_1] + _0[ψ_2] + A |ψ_1|^2 |ψ_2|^2 + B ( ψ_1^* ψ_2 + h.c. ) + C ( ψ_1^*2ψ_2^2 + h.c. ) , where ψ_a (a=1,2) is the SC order parameter of layer a and _0[ψ_a] = α |ψ_a|^2 + β |ψ_a|^4, with α∼ (T-T_c) and β>0, denotes the free energy of each individual layer. The B term describes single Cooper pair tunnelling between the layers, while C describes Cooper pair co-tunneling. At , the former process is forbidden due to a vanishing overlap of the d-wave order parameters. Denoting the twist angle as θ, we follow Ref. <cit.> and assume B=-B_0cos2θ with B_0>0 while we take C as constant independent of the twist. When C>0 the free energy (<ref>) admits a -broken, chiral SC phase near the 45^ o twist. To see this we assume identical layers and take ψ_1 = ψ and ψ_2 = ψ e^i ϕ with real ψ. Eq. (<ref>) then becomes (ϕ) = _0 - ħ/2e[ J_c1cos 2θcosϕ - J_c2/2cos 2 ϕ], with J_c1= (4 e/ħ) B_0ψ^2, J_c2 = (8 e / ħ) C ψ^4. Here _0 collects terms independent of the phase ϕ. For sufficiently small |cos2θ|, that is, in the vicinity of θ=, the last term in Eq. (<ref>) begins to dominate and produces a free energy with two non-equivalent minima at ϕ=±ϕ_0, resulting in a spontaneously -broken superconducting state. The twist angle range for such chiral -broken SC in this model is ±θ_c with θ_c = 1/2arccos( 2 J_c2/J_c1) Note that C>0 is not required by symmetry. Although microscopic calculations often show this to be the case <cit.>, including additional effects, such as inhomogeneity, can lead to C<0 <cit.>. Additionally, we allow for the possibility of explicit -breaking in the SC state (i.e., we assume that the normal state from which SC emerges may also break ). Such a possibility is motivated by experimental observations of a memory effect in a few samples, whereby the diode polarity remains robust to thermal and current cycling <cit.>. In such a situation, the Ginzburg-Landau free energy will also include a term of the form _m= im/2(ψ_1 ψ_2^* - ψ_1^* ψ_2 ) = m ψ^2sinϕ . Due to the d-wave nature of the order parameters ψ_1 and ψ_2, the factor m in Eq. <ref> transforms as the irreducible representation A_2 of either of the point groups D_4 (valid for generic non-zero twist angle) and D_4d (valid for 45^ o twist). Additionally, m is odd under time reversal, : m→ -m. Therefore, m transforms as an out-of-plane magnetization. As this term couples directly to the superconducting phase difference ϕ we name it “magneto-chiral coupling". Note that, unlike the actual out-of-plane magnetic field, this term allows for homogeneous order parameters and does not imply generation of vortices. In principle, magnetic field below H_c1 would satisfy this requirement, as orbital or spin magnetisation. In this work, we will not consider the influence of Abrikosov vortex physics on the diode effect appropriate for experiments without applied field (see, however, Ref. <cit.>). An interplay between the magneto-chiral effect identified here and vortex physics would be an interesting topic for future works. Precisely at , the point group D_4d does not contain true inversion, but instead a mirror and 8-fold rotation S_8 that sends ψ_1 →ψ_2 and ψ_2 → -ψ_1, under which m is even. However, at 0^∘ or 90^∘ twist angle m instead transforms as the irrep A_2u of the point group D_4h, which is odd both under inversion and a mirror symmetry that interchanges the two layers, ψ_1 ↔ψ_2. 
Therefore m cannot arise from an out-of-plane magnetization at 0^∘ or 90^∘. The simplest (i.e. the lowest harmonic) twist-angle dependence consistent with the above requirements is m = m_0 sin2 θ, and we will use this functional form in our considerations below unless otherwise noted. The Josephson current through the junction I_J(ϕ) = (2e/ħ) ∂_ϕ is obtained as I_J(ϕ) = J_c1(θ) sinϕ - J_c2sin2ϕ + J_m(θ) cosϕ , where we introduced a shorthand notation J_c1(θ)= J_c1cos2θ, J_m(θ)= J_msin2θ, and J_m=(2e/ħ)ψ^2 m_0. Interestingly, the same form of the current-phase relation can arise in ferromagnetic Josephson junctions <cit.>. In this work we will focus on physical aspects that are unique to twisted nodal superconductors, including the twist-angle dependence and how to use the diode response to distinguish between spontaneous and explicit TRSB. Finally, we define the thermodynamic critical current I_c^± = max_ϕ [ ± I_J(ϕ) ], which is in general different from the actual measured critical current that may depend on the phase dynamics in the junction, as explained below. We note that thermodynamic non-reciprocity, that is I_c^+≠ I_c^-, is only possible when J_m≠ 0, in accord with the requirement that time reversal must be broken at the level of the free energy in order to observe the diode effect. On the other hand we will show that “dynamical" non-reciprocity is possible even when J_m=0, provided that the free energy has a double-well structure as described by Eq. (<ref>) when θ is close enough to . § DYNAMICAL DIODE EFFECT FROM SPONTANEOUS BREAKING We first discuss the possibility of Josephson diode effect for J_m=0. In the symmetry broken state – J_c1(θ) <2J_c2 in Eq. (<ref>) – nonzero ⟨ϕ⟩ breaks both time-reversal and reflection symmetries, which allows for the diode effect. However, the current (<ref>) still satisfies I_J(ϕ) = -I_J(-ϕ), which implies that its maximum and minimum have the same absolute value, i.e. it is the same in both directions. Nevertheless, as we show below, the actual measured critical current is equal to ±max_ϕ |I_J(ϕ)| only when capacitance of the junction can be ignored (i.e. the junction is overdamped), which typically occurs close to T_c. The situation changes when capacitive effects are considered. To study these effects, we use RCSJ model for the Josephson junction dynamics <cit.> in which the phase evolution is governed by ħ C/2e∂_ttϕ(t) + ħ/2eR∂_tϕ +J_c1(θ) sinϕ -J_c2sin 2 ϕ = J_0, with R the normal resistance of the junction, C its capacitance and J_0 represents the bias current. Voltage across the junctions is equal to ħ/2e∂_t ϕ per the Josephson relation. It is convenient to normalize all currents by J_c2 and define timescale t_0=ħ/(2eRJ_c2) to obtain a dimensionless equation β_c ∂_ττφ + ∂_τφ + J̅_c1cos(2θ) sinϕ -sin2 ϕ = J̅_0 where τ=t/t_0 is the dimensionless time variable. We defined J̅_c1 =J_c1/J_c2, J̅_0=J_0/J_c2 and β_c= 2e R^2 C J_c2/ħ denotes the Stewart-McCumber parameter. Eq. (<ref>) can be interpreted as an equation of motion of a phase “particle” (also referred to as the RCSJ particle) with an inertial mass ∝β_c in a tilted washboard potential U(ϕ) = -ħ/2eJ_0 ϕ +(ϕ). The parameter β_c controls the importance of friction: for β_c≫1 friction can be ignored, while for β_c≪1 inertia can be neglected and motion is purely viscous. In the latter case, if U(ϕ) has any local minima, the motion will terminate there. Therefore, when β_c≪ 1 the measured critical current is equal to the thermodynamic critical current. 
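As an illustration of these definitions, the short Python sketch below evaluates the thermodynamic critical currents I_c^± = max_ϕ[±I_J(ϕ)] and the efficiency η = (I_c^+ - I_c^-)/(I_c^+ + I_c^-) directly from the current-phase relation; the values of J_c1, J_c2 and J_m are arbitrary illustrative choices, not fits to any device. Consistent with the symmetry arguments, the printed asymmetry vanishes at θ = 0° and 45°.

```python
# Thermodynamic critical currents and diode efficiency from the current-phase relation
#   I_J(phi) = J_c1 cos(2*theta) sin(phi) - J_c2 sin(2*phi) + J_m sin(2*theta) cos(phi).
# Parameter values below are illustrative only.
import numpy as np

def critical_currents(theta, J_c1=1.0, J_c2=0.3, J_m=0.2, nphi=20001):
    phi = np.linspace(0.0, 2.0 * np.pi, nphi)
    I = (J_c1 * np.cos(2.0 * theta) * np.sin(phi)
         - J_c2 * np.sin(2.0 * phi)
         + J_m * np.sin(2.0 * theta) * np.cos(phi))
    return I.max(), -I.min()                      # I_c^+, I_c^-

for deg in (20.0, 35.0, 43.0, 44.0, 45.0):
    Ip, Im = critical_currents(np.deg2rad(deg))
    eta = (Ip - Im) / (Ip + Im)
    print(f"theta = {deg:5.1f} deg   I_c+ = {Ip:.3f}   I_c- = {Im:.3f}   eta = {eta:+.3f}")
```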
From a practical perspective, β_c can be controlled by temperature. It is expected to vanish when T → T_c, as the critical current J_c2 goes to zero (see Eq. <ref>). As temperature is lowered, β_c increases due to two effects: the critical current J_c2 becomes larger, while the resistance of the junction R may additionally increase due to the thermal depletion of quasiparticle population <cit.>. §.§ Weakly damped junction: β_c≫1 We now consider the case where capacitive effects are dominant, β_c≫1. We first ignore the effects of friction, and only take into account an infinitesimal damping to ensure a static steady state when the RCSJ particle is trapped in one of the potential wells. Effects of finite damping will be discussed in the following subsection. In the absence of friction, the motion can be analyzed from the energy viewpoint. As discussed above, for J_c1(θ)<2J_c2 the free energy at J_0=0 has two distinct minima. This is the interesting case for the onset of JDE. We assume without the loss of generality that the minimum spontaneously chosen is ϕ=ϕ_0>0 as illustrated in Fig. <ref>(a,b). Turning on the bias current corresponds to tilting the potential landscape for the phase particle. Because the potential is not symmetric around the chosen minimum, the sign of J_0 will matter for the determination of the critical current. It therefore follows that I_c can be different for the two directions of the bias current, giving rise to the junction non-reciprocity. Let us consider the onset of voltage across the junction upon increasing the bias current J_0. In our analysis we assume that the current is turned on adiabatically, i.e. at each value of J_0+dJ_0 we perform the stability analysis starting from the energy minimum at the previous step ϕ_0(J_0). Clearly, the phase value will track the local minimum of U(ϕ), until J_0=J_c where the minimum becomes unstable. For J_0>0 the particle follows the initial minimum at ϕ_0 until J_0=J_c, where it becomes unstable. J_c is given by J_c = J_c2/8√(8-2 x^2+2x√(x^2+8))(√(x^2+8)+x). with x=J_c1(θ)/(2J_c2). For J_0>J_c the phase ϕ will exhibit unbounded motion resulting in a nonzero ϕ̇ and, therefore, voltage. The situation is different for J_0<0. In this case, the minimum that is adiabatically connected to the original one can become unstable at a lower value of |J_0| = J' < J_c (see Fig. <ref>). The value of J' can be found as follows. Since the minimum becomes unstable at this current value, the second derivative of U(ϕ), Eq. (<ref>), should be zero at the respective equilibrium position, U”(ϕ')=0. Noting that U”(ϕ) is independent of J_0 we can find ϕ' that corresponds to the second local minimum disappearing. The corresponding current value (J') for its disappearance is then given by J'=√(1-cos^2(ϕ'_-))(J_c1(θ)-2J_c2cos(ϕ'_-)). The dependence of J' on J_c1(θ)/J_c2 is shown in Fig. <ref> together with that of J_c. One observes that for J_c1(θ)=2J_c2 one has J'=0, whereas for J_c1(θ)=0, one obtains J'=J_c. For |J|>|J'| two cases can be distinguished. Fig. <ref>(b) illustrates the case when at J' the initial energy is not sufficient for ϕ to overcome the next potential hump. Therefore, ϕ will exhibit an oscillatory motion around the remaining minimum. Infinitesimal damping will eventually localize the particle at the remaining stable minimum. So the actual critical current in this case remains J_c for both directions of applied current and the behavior is reciprocal. The other possibility is shown in <ref>(d). 
In this case, the potential energy of ϕ is sufficient to overcome the next barrier. This results, for infinitesimal damping, in an unbounded motion and non-zero voltage. In this case the critical current is -J' rather than -J_c and therefore, there is a nonzero diode effect for a given initial ϕ_0 value and adiabatic current ramping. Analyzing Eq. (<ref>) numerically, we find that this case is realized for (J_c1(θ)/J_c2)<(J_c1(θ)/J_c2)_D≈ 0.79. Thus, importantly, we conclude that the diode effect that occurs in the absence of explicit time-reversal breaking is only expected within a portion of the TRSB phase. In particular, this sets a limit for the twist angle |θ - 45^∘|≲ 0.4|θ_ TRSB - 45^∘| for the onset of the diode effect. In addition, JDE must vanish at θ= despite the broken time reversal symmetry at . We therefore stress that its observation is a sufficient but not necessary condition for TRSB. For 0.79J_c2<J_c1(θ)<2J_c2, TRS is broken but the diode effect cannot be observed due to kinetic energy arguments. §.§ Intermediate β_c Above we demonstrated that for β_c→∞ in Eq. (<ref>), there is a finite diode effect for small enough J_c1(θ). At the same time, for β_c=0 diode effect should be absent. Here we study numerically the behavior of Eq. (<ref>) for finite values of β_c. To find the critical current, we assume the initial conditions to be ϕ(0)=ϕ_0; ϕ'(0) = 0 and solve Eq. (<ref>) with bias current ramping up linearly in time, J̅_0 = τ/τ_0, where τ_0 is the inverse ramping rate. The critical current is determined by the value of τ when |ϕ| becomes larger than π. This criterion implies that ϕ has traversed the largest potential barrier (see Fig. <ref>) and has enough energy to continue moving. The criterion yields the exact value of J_c only in the limit τ_0→∞ but we have checked that already for τ_0=4000 the result is independent of τ_0 and use these converged results in our discussion. In Fig. <ref> (a) we present the value of the critical current for reverse bias (as shown in Fig. <ref>) as a function of the second harmonic critical current for different values of β_c. We can also quantify the dynamical diode efficiency by η_ dyn = I_M^+-I_M^-/I_M^++I_M^- where M=L(R) for J_m>(<)0. In Fig. <ref> (b) we show the calculated η_dyn as a function of twist angle. As expected from the above discussion, it vanishes at 45^∘ exactly, then increases up to a finite value on decreasing the twist angle, vanishing at a critical value. Note that this critical value always remains within the TRSB phase. Let us now discuss the effects of decreasing β_c (increasing damping) One observes that the transition between the diode and reciprocal regime remains abrupt down to the lowest value considered. This allows us to define a critical value of the first Josephson harmonic J_c1(θ) for the onset of the diode effect, as shown in Fig. <ref>. From the inset, one observes that the diode effect becomes extremely fragile at small values of β_c. At low β_c the RCSJ model solutions are known to become non-hysteretic <cit.>. In the hysteretic regime, the switching current I_c (critical current on increasing the current from zero) and retrapping current I_r (where the voltage drops to zero on decreasing the current from high value) are different, I_r<I_c. For the current bias between I_c and I_r, therefore, two steady state solutions of Eq. (<ref>) are possible. In the non-hysteretic regime, I_c=I_r, and there is only one steady state solution of Eq. (<ref>) for each value of J_0. 
However, the dynamical diode effect requires that there can be a zero-voltage and a finite-voltage solution at the same value of the bias current, depending on the initial condition for ϕ. Thus, hysteresis is a necessary condition for the dynamical diode effect. Fig. <ref>, inset, suggests that the diode effect disappears at around β_c ≈ 0.35-0.4. This critical value coincides with the hysteresis onset of the RCSJ model with the second harmonic current phase relation for J_c1(θ)=0. This value can be obtained from the value β_c^ hyst≈ 0.7 <cit.> for the first harmonic RCSJ model by replacing 2ϕ→ϕ̃ in Eq. (<ref>). § DIODE EFFECTS IN THE PRESENCE OF EXPLICIT BREAKING We now consider the situation where time-reversal symmetry is explicitly broken – that is, the GL free energy includes the contribution _m defined in Eq. (<ref>), with non-vanishing -breaking field m. The _m term is such that I_J(ϕ) ≠ - I_J(-ϕ), and therefore leads to a Josephson diode effect already at the level of thermodynamic critical currents. Such a -breaking term, if it survives for T>T_c, could also give rise to memory effects when the system is heated up above T_c. The junction would then show the same polarity of the diode effect after cycling above the critical temperature or after being driven into the resistive state by exceeding its critical current. §.§ Thermodynamic diode effect For concreteness we assume the simplest twist angle dependence compatible with the symmetry constraints of the magneto-chiral coupling indicated in Eq. (<ref>). However, our results are not changed qualitatively by adding higher harmonics that are also allowed by symmetry. It is then straightforward to obtain the thermodynamic critical currents, defined as I_c^± = max_ϕ[ ± I_J(ϕ) ], from the GL free energy description that leads to Eq. (<ref>) for the Josephson current. Fig. <ref> shows that when both J_c2 and J_m are non-zero the junction exhibits non-reciprocal behavior, I_c^+≠ I_c^-, for all twist angles except for θ=0,45^∘. This result can be understood as follows. First, note that for any m ≠ 0 the free energy component _m is odd under both and , and hence the basic symmetry requirements for the SC diode effect are met. As already discussed in Sec. II, for an untwisted junction symmetry dictates that when m=0, the free energy is symmetric around ϕ=0 and there is no thermodynamic non-reciprocity. At θ=45^∘ the magneto-chiral coupling is maximal, as per Eq. (<ref>), but now the first-order Josephson term J_c1(θ) vanishes. It is easy to verify that (ϕ) is then symmetric about its -breaking minima at ϕ_0=±π/2 and, once again, this implies equal thermodynamic critical currents for both directions. Furthermore, when J_c2=0 (no Cooper pair co-tunneling), I_J is stationary for ϕ_0 = arctan[J_c1(θ)/J_m(θ) ] and I_c^± = max[ ± I_J(ϕ_0) ] with I_J(ϕ_0) = ±√( J^2_c1(θ) + J_m^2(θ)) , and there is again no critical current asymmetry. This occurs because I_J(ϕ), while no longer anti-symmetric in ϕ, is nevertheless anti-symmetric with respect to a shifted origin. We thus need all three of J_c1(θ), J_c2 and J_m(θ) non-vanishing to observe thermodynamic non-reciprocity, as also demonstrated numerically in Fig. <ref>. We can derive a bound on the diode efficiency, expressed through the figure of merit η = I_c^+ - I_c^-/I_c^+ + I_c^- , as follows. We first observe that the maximum of | η | occurs along the J_m(θ) = ± J_c1(θ) lines, Fig. <ref> (c). This can be understood by recasting Eq. 
(<ref>) using trigonometric identities as I_J(ϕ) = √( J^2_c1(θ) + J_m^2(θ))sin(ϕ + α ) - J_c2sin2 ϕ, where tanα=J_m(θ)/J_c1(θ). Clearly, the current I_c^+ will be maximal when the maxima of the two sine functions coincide. This happens when α = - π/4 or α=3 π/4, implying J_m(θ) = - J_c1(θ). By sketching the two sine functions it is also easy to see that, at the same time, such α choices minimize I_c^-, hence leading to maximum η for J_m(θ) = - J_c1(θ), as indeed found numerically [Similarly, when α=+π/4 or α=-3π/4 the minima of the two sine functions in Eq. <ref> are aligned, leading to optimal but reversed diode efficiency, η = -1/3. Changing the sign of J_c2 also has the effect of reversing the diode efficiency.]. Taking α=- π/4 the current-phase relation (<ref>) simplifies to I_J(ϕ) = J_m(θ) (cosϕ - sinϕ) - J_c2sin 2 ϕ which gives the following critical currents I_c^+ = √(2) J_m(θ) + J_c2, and I_c^- = √(2) J_m(θ) - J_c2, J_c2 < J_m(θ)/2√(2), J_m^2(θ)/4 J_c2 + J_c2, J_c2 > J_m(θ)/2√(2). The maximal diode efficiency η = 1/3 occurs when J_c2 = J_m(θ)/√(2), as shown in Fig. <ref> (c). Note that at J_c2 = √(2) |J_m(θ)| the free energy transitions from having a double-well structure (for large J_c2) to a -broken single-well structure (for small J_c2) – see also Fig. <ref>. The largest diode efficiency therefore occurs in the regime with a single -broken ground state. In the double-well regime the largest diode efficiency is η = 7/25 near the transition point J_c2 = √(2) J_m(θ). Fig. <ref> further illustrates the non-monotonic behavior of the asymmetry parameter η as function of J_m, with a peak at an optimal value J_m^ opt. This peak is very broad for low twist-angle junctions, and becomes progressively sharper when θ approaches 45^∘. Junctions close to 45^∘ twist, as in Fig. <ref> (c), are very sensitive to explicit -breaking, as shown by their small optimal value J_m^ opt. We also stress that, within the GL free energy description, both the theory with J_c2 > 0 (where the system exhibits chiral SC in a range of twist angles) and J_c2 < 0 (which favors the trivial phase for all twist angles) show similar phenomenology for the thermodynamic diode effect. As such, the observation of a non-zero asymmetry η in the presence of explicitly-broken is, in itself, insufficient to determine whether the underlying superconductor is chiral. However, if the -breaking term can be controlled externally (such as by applying an out-of-plane magnetic field), the presence of two -breaking ground states for J_c2 > 0 should be accompanied by a hysteresis loop when J_m is swept back and forth around 0. Such a hysteresis is not expected in the theory with J_c2<0, which supports a single -preserving ground state. §.§ Dynamical diode effect in the presence of explicit time-reversal breaking We now revisit the results of Sec. <ref> in the presence of explicit TRSB terms in the Josephson energy, which now takes the form U_m(ϕ) = -ħ/2eJ_0 ϕ +ħ/2e J_m(θ) sinϕ +(ϕ), where (ϕ) is defined by Eq. (<ref>). For J_0=0, there is always only one global minimum of Eq. (<ref>). However, two distinct local minima in the Josephson energy U_m(ϕ) can still remain. In particular, for a fixed value of J_m, two minima exist for J_c1(θ) ranging from zero (corresponding to 45^∘) to a finite critical value. In Fig. <ref> we present this critical value of the first harmonic of the Josephson current as a function of J_m. Note that for J_m>2J_c2 only a single minimum exists for all values of J_c1. In this section, similarly to Sec. 
<ref>, we consider the dynamical diode effect neglecting damping, i.e, for β_c≫1 and assuming adiabatic current ramping. Let us consider the case when two distinct minima exist (Fig. <ref>(a)). For ϕ being initially at L (R) there exist two characteristic current values (I_L(R)^±) when the particle escapes that minimum under positive/negative current bias. In Fig. <ref> (b) we present I_L(R)^± calculated numerically for J_m/J_c2=0.5 as a function of J_c1. The behavior qualitatively resembles Fig. <ref>, but the curves for I_L and I_R are not symmetric, reflecting the explicit TRSB. The black dots in Fig. <ref>(b) mark the values where one of the minima ceases to exist, but not necessarily leading to the onset of voltage. At low J_c1 values there are four such characteristic currents, two positive and two negative. For the current values between the second and third point, two minima in the potential exist. Remarkably, for J_c1(θ)≥ J_c2 one observes that this range does not cover zero current. This implies that additional minima in (<ref>) can be generated by the applied current in presence of finite J_m, even if there is only one minimum at J_0=0 (see Fig. <ref>). The application of a current can thus restore, to a degree, the symmetry of the potential, by counteracting the TRSB term in Eq. (<ref>). Let us now discuss the expected behavior of the critical current in an experiment. In thermodynamic equilibrium, ϕ always starts at L for J_m>0 (Fig. <ref> (a)), and therefore I_R and bistability of the potential is unobservable in the critical current (see, however, Sec. <ref> for the non-equilibrium case). Nonetheless, the possibility of preemptive escape alters the diode effect strength with respect to thermodynamic effect corresponding to the overdamped limit discussed in Sec. <ref>. In Fig. <ref>(c) we present the maximal diode efficiency η_ dyn, Eq. (<ref>), for varying J_c1 and fixed J_m and compare it with the thermodynamic η_, Eq. (<ref>), where |I_c^±| = max[|I_L^±|,|I_R^±|]. The dynamical diode efficiency can be larger than the maximal value 1/3 for η_, due to the possibility of early escape. At low values of J_m, η_ dyn≈0.53, corresponding to the dynamical diode effect caused by spontaneous symmetry breaking, Fig. <ref>. § DYNAMICAL VS. THERMODYNAMIC DIODE EFFECT: EXPERIMENTAL PROTOCOL In the above sections we have demonstrated that both spontaneous (Sec. <ref>) and explicit (Sec. <ref>) TRSB can lead to current nonreciprocity in twist junctions of d-wave superconductors. However, for purely spontaneous TRSB, one expects a randomly fluctuating sign of the nonreciprocity, while the addition of even a small explicit TRSB term will fix it. This raises the question of how the possible spontaneous nature of TRSB and the bistability of the current-phase relation can be identified in an experiment. As has been discussed above, while the diode effect in the thermodynamic critical current requires the second harmonic of the Josephson current to be present, it may occur even in the case when there is no bistability. Here we explain how can one characterize the TRSB in twisted nodal superconductors by adapting the protocol used in experiments on ferromagnetic Josephson junctions <cit.>. §.§ Spontaneous TRSB: J_m=0 Since TRSB occurs spontaneously, the equilibrium value ϕ is chosen randomly to be equal to +ϕ_0 or -ϕ_0. The key insight is that one can deterministically prepare the system in one or the other equilibrium by current sweeping. 
Consider adiabatically decreasing |J_0| from high bias larger than I_c towards zero. For β_c>β_c^cr (hysteretic regime) the voltage will only go to zero at I_r, |I_r|<I_c. Let us focus on how retrapping occurs. The value of I_r corresponds to the case when ϕ(t) eventually stops (ϕ̇=0 as t→∞) for any initial conditions. For such a solution to exist, the potential, Eq. (<ref>) has to have at least one local minimum (otherwise, there will be a force acting on ϕ and causing motion). In Sec. <ref>, we demonstrated (see Fig. <ref>) that for J'<J_0<J_c values only one minimum exists. Moreover, this minimum is adiabatically connected with the right (left) minimum at J_0=0 for J_0>(<)0. Thus, for I_r>J' (which requires sufficiently strong damping, but still in the hysteretic regime), the value of ϕ can be deterministically prepared by retrapping. A more general argument can be given, that extends to lower I_r, where for each interval ϕ∈ [2π n, 2π (n+1)] there exist two minima of (<ref>). Without loss of generality, let us assume J_0>0. For J_0>I_r, ϕ̇(t) is a periodic function such that ϕ(t) advances by 2π over the period. This implies that ϕ̇ has a global minimum ϕ̇_ min for every period. For large J_0, where Josephson nonlinearity can be neglected ϕ̇>0 for all t. Therefore, for large enough J_0, ϕ̇_ min>0. To go into the retrapped state, ϕ̇_ min has to go through zero at some value of J_0. We will show now that this value is the retrapping current. At this critical value, ϕ̇_ min=0, but ϕ̇≥0 at all times. The equation (<ref>), on the other hand, implies that ϕ̈∼ -U'(ϕ). So if ϕ̇(t_ min)=0, ϕ̇ has to be negative either before or after this moment unless U_m'(ϕ(t_ min))=0. This implies that ϕ has to be in the minimum or maximum of U_m(ϕ). The former can be excluded, since infinitesimal increase of J_0 will not set ϕ into motion. Thus, ϕ is at the maximum of U_m(ϕ). This implies that reducing J_0 further would lead to ϕ̇<0. Since energy is dissipated with time, ϕ will never be able to overcome the potential maximum and is thus retrapped. For Eq. (<ref>), only one such maximum exists per 2π interval if J_c1≠0. Therefore, ϕ would stop to the left (right) of the potential maximum for J_0>(<)0 and will be trapped in different minima depending on the sign of J_0. Note that for large β_c, ϕ can perform many oscillations before stopping, and thus the result of retrapping becomes extremely sensitive to β_c <cit.>. The above discussion leads to the following experimental protocol. One can “prepare" ϕ to be in the left or right minimum by retrapping. Then, biasing the junction with positive or negative voltage will lead to different critical currents in a part of TRSB phase (see Fig. <ref>). One can implement these ideas by comparing the measured voltage for three periodic current patterns depicted in Fig. <ref>(a). We refer to these as full sweep and half sweep protocols J_0^ full(t) and J_0^ half,±(t), mathematically defined as J_0^ full(t) = J_ maxs(t), 0<s(t)<1 J_ max(2-s(t)) 1<s(t)<3 J_ max(-4+s(t)) 3<s(t)<4 where s(t)=[t,4t_c]/t_c and J_0^ half,±(t)=± | J_0^ full(t) |. For the full sweep and half sweep the junction is biased in opposite directions after an interval over which they coincide. The latter can be viewed as a "preparation" step, where ϕ is prepared in the same minimum, but afterwards biased in opposite directions for full and half sweep. A difference in the measured voltage between the two protocols implies dynamical diode effect and spontaneous TRSB. 
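The following Python sketch (ours; β_c, J̄_c1, J_max, the sweep period and the voltage threshold are illustrative choices rather than values from any experiment) implements the two bias patterns above for the dimensionless RCSJ equation with J_m = 0, and reports the bias magnitude at which the junction switches to the resistive state after the common preparation stage. A systematic difference between the full-sweep and half-sweep outputs is the signature of the dynamical diode effect discussed here.

```python
# Full-sweep / half-sweep protocol applied to the dimensionless RCSJ equation
#   beta_c * phi'' + phi' + Jc1b*sin(phi) - sin(2*phi) = J0(tau),
# with currents in units of J_c2 and J_m = 0.  All numerical values below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

beta_c, Jc1b, Jmax, t_c = 25.0, 0.5, 2.0, 400.0

def J_full(tau):
    u = np.mod(tau, 4.0 * t_c) / t_c
    return Jmax * np.where(u < 1.0, u, np.where(u < 3.0, 2.0 - u, u - 4.0))

def J_half(tau, sign=+1.0):
    return sign * np.abs(J_full(tau))

def run(bias):
    phi0 = np.arccos(Jc1b / 2.0)       # right-hand minimum of the zero-bias energy (cos phi_0 = Jc1b/2)
    rhs = lambda tau, y: [y[1], (bias(tau) - y[1]
                                 - Jc1b * np.sin(y[0]) + np.sin(2.0 * y[0])) / beta_c]
    sol = solve_ivp(rhs, (0.0, 4.0 * t_c), [phi0, 0.0], max_step=0.1)
    return sol.t, sol.y[1]             # tau and voltage V ~ dphi/dtau

def switching_bias(bias):
    # |bias| at the first voltage onset in the window 2*t_c < tau < 3*t_c,
    # i.e. after the common "preparation" ramp back down to zero bias.
    t, v = run(bias)
    sel = (t > 2.0 * t_c) & (t < 3.0 * t_c)
    esc = np.abs(v[sel]) > 0.5
    return np.abs(bias(t[sel][np.argmax(esc)])) if esc.any() else np.nan

for name, bias in [("full sweep  ", J_full),
                   ("half sweep +", lambda tau: J_half(tau, +1.0)),
                   ("half sweep -", lambda tau: J_half(tau, -1.0))]:
    print(name, switching_bias(bias))
```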
To demonstrate the protocol explicitly we numerically solved Eq. (<ref>). Two typical solutions are presented in Fig. <ref> (b,c). One observes that full and half sweep protocols always give different critical current values, as expected from dynamical diode effect. Note that there is no difference between ± half sweep cases, because there is no explicit TRSB. One observes that changing the value of β_c leads to different relation between full and half sweep cases. In particular, this points out that the minimum, where ϕ is retrapped depends on the value of β_c <cit.>. Nonetheless, for both cases the dynamical diode effect and thus spontaneous TRSB can be demonstrated from this protocol. §.§ Explicit TRSB: J_m≠0 Large J_m excludes the possibility of deterministic retrapping since there is only one minimum at zero current bias, see Fig. <ref>). However, at smaller values of J_m, two minima remnant from the spontaneous TRSB state can still exist. In Fig. <ref> we present four characteristic regimes that occur as a function of J_c1 for a fixed J_m. For large J_c1 (Fig. <ref>(a)), the full and half sweep protocols yield the same values of the critical current. However, positive and negative bias (two stages of full sweep, or ± half-sweep protocols) show different critical current for voltage onset. This is the thermodynamic diode effect, described in Sec. <ref>. Decreasing J_c1 (Fig. <ref>(b,c)) leads to the splitting between full and half sweep protocols, consistent with additional characteristic current values appearing in Fig. <ref>(b). This indicates bistability of the potential (<ref>) remnant from the spontaneous TRSB state at J_m=0. Interestingly, the full-half splitting disappears at yet smaller values of J_c1 (corresponding to twist angles closest to 45^∘), Fig. <ref>(d). The analysis of the numerical solution suggest that at low J_c1 retrapping training is ineffective - ϕ always gets trapped in the global minimum, leaving only signatures of the thermodynamic diode effect. Thus, the described protocol gives access (for sufficiently large β_c and not too close to 45^∘) to all four characteristic values of the current (Fig. <ref> (b,c)). This allows to directly demonstrate the presence of two minima in the Josephson energy even in the presence of explicit TRSB. § OTHER IMPLICATIONS OF SECOND HARMONIC AND MAGNETO-CHIRAL COUPLING As discussed in earlier theoretical works <cit.>, and confirmed by experimental observations in twisted BSCCO bilayers <cit.>, the presence of the second harmonic term J_c2(θ) in the free energy (ϕ) can be probed by means of perturbing the junction using in-plane magnetic field or a radio-frequency (RF) drive. In this Section we briefly discuss the effect of these perturbations on the Josephson response in the presence of both J_c2(θ) and magneto-chiral coupling J_m(θ). We find that the asymmetry between the two current directions is sensitive to the presence of in-plane magnetic field which can therefore be used to probe the effect in greater detail. The magneto-chiral coupling, on the other hand, modifies the junction response to the RF drive and produces asymmetry in the resulting Shapiro steps. §.§ Fraunhofer patterns In a regular Josephson junction, all of the junction area can support the critical current I_c^± because of the position independent phase difference between the superconductors. An in-plane magnetic field, however, induces a phase gradient such that the maximum and minimum interlayer current densities vary spatially. 
More concretely, with the junction plane perpendicular to the z direction, when a magnetic field B_y is applied along y, the Josephson current density along x given by I_J(ϕ_x), where the phase variation <cit.> ϕ_x = 2 π d/Φ_0 B_y x + ϕ_0. Here d is the effective junction thickness, Φ_0= (hc/2e) denotes the superconducting flux quantum and ϕ_0 is a uniform phase shift. When integrated over the area of a junction of unit length L and width W, the interference from different contributions results in a Fraunhofer pattern I_J(ϕ_0, Φ) = [J_c1(θ) sin(ϕ_0)+J_m(θ)cos(ϕ_0)] sin( πΦ/Φ_0)/πΦ/Φ_0 - J_c2sin(2ϕ_0) sin(2 πΦ/Φ_0)/2 πΦ/Φ_0, where Φ=dLB_y is the magnetic flux through the junction. The equilibrium critical currents for a given flux are determined by the extrema of I_J(ϕ_0, Φ) with respect to ϕ_0. We notice that Eq. (<ref>) has the same form as the zero-field expression <ref>, albeit with renormalized coefficients. The second and first harmonics are renormalized differently. In particular, at Φ/Φ_0 = n+1/2 with n integer, the second harmonic vanishes, while the first harmonic and the magneto-chiral term do not. At the corresponding field values B_y we therefore expect the thermodynamic diode effect to be suppressed. This is indeed observed in our simulation results <ref>(a,b). The diode effect persists at nonzero field strengths except at half-integer fluxes where contribution from the second harmonic vanishes because of its π periodicity. In its absence, the diode effect cannot be induced by the J_m term alone and, hence, I_c^+ = I_c^-. Moreover, the switch in the values of the second harmonic at half-integer fluxes changes the diode polarity and is manifested as the oscillating pattern in the I_c^+ and I_c^- curves. The polarity flipping as a function of the in-plane field is, therefore, suggestive of a dominant second harmonic alongside the magneto-chiral term. §.§ Shapiro steps When a Josephson junction is subjected to an external RF drive, the I-V curves show steps at integer multiples of the voltage V_s = (ħω / 2e), where ω is the drive frequency. The phenomenon is captured by the resistively shunted junction (RSJ) model <cit.> ħ/2 e R∂ϕ/∂ t + I_J(ϕ) = I_dc + J_rfsin(ω t), where R is the junction resistance, I_dc is the measured direct current (dc) and J_rf is the drive amplitude. We solve Eq. (<ref>) numerically using energy units where ħ/2e = 1, R=0.7 and ω=0.6. Two representative results are depicted in <ref>(c-d). The n-th Shapiro step represents a n photon-assisted tunneling of Cooper pairs across the junction. In the presence of a dominant second harmonic, co-tunneling of Cooper pairs gives rise to steps at half-integer multiples of V_s. In a symmetric Josephson junction, the step heights for positive and negative bias voltages are identical. We observe that in the presence of the magneto-chiral term the step heights at half-integer voltages are no longer symmetric about zero voltage, <ref>(d). Such an asymmetry, therefore, could be indicative of explicit TRSB in the junction. § DISCUSSION AND CONCLUSIONS The free energy (ϕ) of a Josephson junction can develop the characteristic double-well structure when the Cooper pair co-tunneling process (i.e. the second CPR harmonic J_c2) becomes dominant. In twisted c-axis junctions between two d-wave superconductors this is expected to occur as the twist angle approaches 45^∘ and the single-pair tunneling J_c1 is suppressed <cit.>. In this work we identified two types of Josephson diode effects that can occur in these kinds of junctions. 
The dynamical JDE depends on the initial state of the system and relies on the fact that, for a given free energy minimum, the barrier between the SC and resistive states is generally different for the two polarities of the bias current. Therefore, when the damping is small, the junction can exhibit consistent JDE, provided that the measurement protocol is devised such that one always starts from the same free energy minimum. According to our analysis, spontaneous TRSB is a prerequisite for the dynamical JDE; however, it occurs only in a portion of the -broken phase as illustrated in Fig. <ref>. Observation of the dynamical JDE therefore provides evidence for the bistability of the free energy landscape and spontaneous -breaking in the twisted junction. The thermodynamic JDE by contrast depends on explicit breaking that is present at the level of the GL free energy – that is, exists already in the normal state of the material. In this case (ϕ) has a single global minimum away from ϕ=0,π and the diode effect exhibits a fixed polarity controlled by the sign of the breaking term. As we discussed, the second harmonic J_c2 must be present in this case also for the device to show JDE, even though it need not be dominant. In addition, thermodynamic JDE survives in the limit of strong damping. Importantly, signatures of the bistability of the potential can be revealed in an experiment even in the case when is broken explicitly. As discussed in Sec. <ref> this can be achieved by comparing different current sweep protocols, allowing to trap the phase in a local minimum of the free energy. Some consequences of the spontaneous -breaking predicted by theory <cit.> have been explored in earlier work by Zhao et al. <cit.> who reported anomalous Fraunhofer diffraction patterns and half-integer Shapiro steps in near-45^∘ twisted BSCCO junctions. More recently, the same group reported evidence of zero-field SC diode effect in these devices <cit.>. Samples with twist angle slightly away from 45^∘ (but within a window of about ± 6^∘) showed behavior consistent with the dynamical diode effect discussed in Sec. <ref>, indicative of a Josephson free energy with a pronounced double-minimum structure. The non-reciprocal response was probed via the “full-sweep/half-sweep protocol" described in Sec. <ref>, whereby the current sequence applied to the twisted junction is defined so as to controllably prepare the system in one of the two -broken minima. Importantly, junctions outside of the 45± 6^∘ twist angle window showed reciprocal behavior, consistent with our result that dynamical JDE is only expected to be observed within a portion of the TRSB phase. Furthermore, the diode effect was found to vanish in the limit θ→ 45^∘, in accord with the discussion in Sec. III. Our theory therefore provides a good explanatory framework for the experimental findings of Zhao et al. <cit.>. Together, these works make a compelling case for spontaneous TRSB in high-quality BSCCO junctions with a twist angle close to 45^∘. Several samples studied in Ref. <cit.> were reported to exhibit a memory effect indicative of the thermodynamic JDE, with fixed diode polarity independent of its current bias history <cit.>. Most of them showed the full sweep/half sweep splitting, indicating bistability of the potential, see Sec. <ref>. According to our analysis in Sec. <ref> such behavior is suggestive of explicit -breaking in the device that is, presumably, present already in the normal state. 
Since optimally doped BSCCO crystals are normally thought to be non-magnetic, the nature of this normal-state -breaking poses an interesting open question. We see two distinct possibilities for its origin: (i) Even though a single monolayer BSCCO is non-magnetic, it is possible that a twisted bilayer develops normal-state orbital or spin magnetism. This would not be unprecedented – twisted graphene bilayers are well known to develop -breaking instabilities even though a single monolayer is non-magnetic. (ii) A `vestigial' order <cit.> that arises from fluctuations of the two superconducting order parameters breaking time reversal symmetry <cit.>. Briefly, the idea is that above T_c the individual phase-averaged order parameters vanish, ⟨ψ_1⟩=⟨ψ_2⟩=0, but the composite object m∝⟨ iψ_1ψ_2^*+ c.c.⟩ may remain ordered up to a higher critical temperature T_m>T_c. This scenario requires the relative phase ϕ to remain fixed at one or the other -breaking value even above T_c, hence enabling the return to the same free energy minimum upon cooling back below T_c. We note, however, that in this scenario current training of the diode polarity should be possible below T_c, as no additional degrees of freedom are generated. Therefore, coupling of superconducting fluctuations to the other degrees of freedom (spins or orbitals) is required to explain the memory effect. Josephson diode effects have also been observed in twisted BSCCO junctions by two other groups <cit.>. Ref. <cit.> reports a diode effect in the presence of a magnetic field along z. Its efficiency was found to be largest near 45^∘, but nevertheless non-zero for all twist angles considered. Further, the diode polarity can be switched by cycling an applied out-of-plane magnetic field, accompanied by a hysteresis loop. Such observations have been attributed to the presence of a component of the magnetic field through the junction – generated by the in-plane bending of flux lines connecting the misaligned Abrikosov vortex lattices of the two BSCCO flakes. Therefore, in such a setup the identification of potential signatures of spontaneous -breaking, as discussed in the present work, will be much more subtle. Developing a generalization of our theory to include effects of vortices remains an interesting question for future work. Ref. <cit.>, by contrast, reports observation of the diode effect in the nominal absence of external magnetic field for a sample close to 45^∘. We note that our theory predicts that the thermodynamic diode efficiency rises rapidly as the angle is tuned away from 45^∘ (see Fig. <ref>) and even a small misalignment from 45^∘ can lead to a significant diode effect. The finite value of I_c in the sample has been interpreted in Ref. <cit.> as evidence of a non-d-wave pairing component. However, the remnant critical current could just as well arise from the second harmonic mechanism considered in our work <cit.>. Shapiro step measurements could potentially be used to test this scenario. Our results are agnostic to the microscopic origin of the second-harmonic term in the Josephson free energy. A first possibility is that of direct Cooper pair co-tunnelling between the twisted BSCCO flakes <cit.>. The expected magnitude of such a contribution is however difficult to compute accurately <cit.>. Another interesting recent proposal is that twist-angle inhomogeneities for near–45^∘ junctions may lead to a first harmonic term that vanishes on average but still retains significant spatial fluctuations <cit.>.
Such a setup was then shown to lead, under suitable conditions, to an effective second-harmonic term with the correct sign to promote spontaneous -breaking at the interface. Similarly, we do not attempt to identify the microscopic mechanism behind the possible residual -breaking suggested in the normal state of twisted BSCCO bilayers by the results of Ref. <cit.>. This is clearly an intriguing effect and, provided that it can be reproduced in more samples and that conditions for its onset are better understood, furnishes an interesting topic for future studies. On the other hand, the phenomenological character of our model allows the application of our results to other systems where the same current-phase relation appears. For example, our results for the undamped and overdamped limits are in agreement with Refs. <cit.> and <cit.>, respectively. Therefore, the analysis performed in this work for arbitrary damping and the experimental protocols discussed in Sec. <ref> can prove useful for the analysis of a wide variety of systems <cit.>. § ACKNOWLEDGMENTS We thank X. Cui, P. Kim and S. Y. F. Zhao for insightful discussions. É. L.-H. acknowledges support from the Gordon and Betty Moore Foundation’s EPiQS Initiative, Grant GBMF8682 at Caltech. T. T. acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ERC-StG-Neupert-757867-PARATOP). J.H.P. is partially supported by the Air Force Office of Scientific Research under Grant No. FA9550-20-1-0136, the NSF CAREER Grant No. DMR-1941569, and the Alfred P. Sloan Foundation through a Sloan Research Fellowship. The Flatiron Institute is a division of the Simons Foundation. Work at UBC (M.F.) was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada First Research Excellence Fund (CFREF), and the Canadian Institute for Advanced Research (CIFAR). A portion of this work was completed at the Aspen Center for Physics.
http://arxiv.org/abs/2307.02638v1
20230705201357
Note on expanding implicit functions into formal power series by means of multivariable Stirling polynomials
[ "Alfred Schreiber" ]
math.CO
[ "math.CO", "Primary: 13F25, 11B83, Secondary: 05A19, 11C08" ]
Note on expanding implicit functions into formal power series by means of multivariable Stirling polynomials Department of Mathematics University of Flensburg Auf dem Campus 1 24943 Flensburg, Germany [email protected] Starting from the representation of a function f(x,y) as a formal power series with Taylor coefficients f_m,n, we establish a formal series for the implicit function y=y(x) such that f(x,y)=0 and the coefficients of the series for y depend exclusively on the f_m,n. The solution to this problem provided here relies on using partial Bell polynomials and their orthogonal companions. 2010 Mathematics Subject Classification: Primary 13F25, 11B83; Secondary 05A19, 11C08 Alfred Schreiber August 1, 2023 ==================== § INTRODUCTION The problem of calculating the higher derivatives of a function y=y(x), which is implicitly given by an equation f(x,y)=0, has been discussed several times already in the mathematical literature of the 19th and 20th centuries. L. Comtet has listed some of these papers in the bibliography of his famous monograph <cit.>. His own contribution to the problem can be found in <cit.>. Recently, the problem has attracted renewed attention, especially with regard to some of its combinatorial aspects. The results in <cit.> have been subjected to careful analysis by Wilde <cit.>, who also gives new proofs. Zemel <cit.> provides an in-depth combinatorial interpretation for those binomial building blocks that appear in the closed formula he proved for the higher derivatives of y; the same author has also achieved a generalization to several variables <cit.>. § PRELIMINARIES The procedure described in the following for calculating the higher derivatives of an implicit function starts from the problem as formulated by Comtet in <cit.>. There, for a function f given as a formal power series f(x,y)=∑_m,n≥0f_m,nx^m y^n/m!n! (with coefficients f_m,n from a fixed commutative field of characteristic zero) Comtet poses the somewhat modified (but equivalent) task of finding a formal power series y=y(x)=∑_n≥1y_nx^n/n! such that f(x,y(x))=0. From this one immediately obtains a representation of the k-th derivatives D^k(y) (k=1,2,3,…) as D^k(y)=y_k+∑_n≥1y_k+nx^n/n!. In order to be able to calculate y_n=D^n(y)(0), we assume f_0,0=0 and f_0,1≠0. Then, by writing f(x,y)=∑_n≥0φ_ny^n/n!, where φ_n=φ_n(x)=∑_m≥0f_m,nx^m/m!, we see that f(x,y)=0 is equivalent to g(y):=∑_n≥1φ_ny^n/n!=-φ_0. The formal power series g is compositionally invertible, since g(0)=0 and by assumption g'(0)=φ_1=f_0,1+x·∑(…)≠0. Let g denote the (unique) compositional inverse of g. Then, the implicit function y is obtained from (<ref>) in the form y=y(x)=g(-φ_0(x)). Comtet <cit.> evaluates this expression using Lagrange's inversion formula and determines the coefficients y_n by collecting the terms in x^n/n! that occur in the process. But this works only in principle! In fact, only a few ad hoc calculations are performed there, yielding explicit representations for y_1,y_2,y_3,y_4 (see the table on p. 153). Of course, this does not tell us what the general coefficient y_n looks like. In the following, we show how the concepts developed in <cit.> provide a complete insight into the structure of y_n solely as a function of the coefficients of f(x,y). This is done in two reduction steps.
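Before carrying out the reduction, it is instructive to see the task in computational form. The following SymPy sketch (an illustration of ours, with placeholder numerical values for the f_m,n) substitutes the ansatz for y(x) into f(x,y)=0, equates the coefficients of the powers of x to zero, and solves for y_1, y_2, … in turn; the closed formulas derived in the next sections reproduce these values.

```python
import sympy as sp

x = sp.symbols('x')
M = 3  # truncation order of the x-expansion

# Placeholder Taylor coefficients f_{m,n} (any values with f_{0,0}=0, f_{0,1}!=0 will do).
f = {(0, 0): 0, (1, 0): 3, (0, 1): 2, (2, 0): 5, (1, 1): 1, (0, 2): -1,
     (3, 0): 1, (2, 1): 4, (1, 2): 2, (0, 3): 7}

y_sym = [sp.Symbol(f'y{n}') for n in range(1, M + 1)]
y_ser = sum(c * x**n / sp.factorial(n) for n, c in zip(range(1, M + 1), y_sym))

# Substitute the ansatz into f(x, y) and expand in powers of x.
fxy = sp.expand(sum(c * x**m * y_ser**n / (sp.factorial(m) * sp.factorial(n))
                    for (m, n), c in f.items()))

# Demand that the coefficient of each power x^k (k = 1..M) vanishes, solving for y_k in turn.
sol = {}
for k in range(1, M + 1):
    eq = fxy.coeff(x, k).subs(sol)
    sol[y_sym[k - 1]] = sp.solve(sp.Eq(eq, 0), y_sym[k - 1])[0]
print(sol)  # y1 = -f10/f01 = -3/2 for the placeholder values, then y2, y3

# Cross-check against SymPy's own implicit differentiation at (0, 0).
yv = sp.Symbol('y')
f_expr = sum(c * x**m * yv**n / (sp.factorial(m) * sp.factorial(n)) for (m, n), c in f.items())
assert sp.idiff(f_expr, yv, x).subs({x: 0, yv: 0}) == sol[y_sym[0]]
```

The leading coefficient agrees with y_1 = -f_1,0 f_0,1^-1, as it must; the later coefficients can be checked in the same way against the closed expressions obtained below.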
§ THE FIRST REDUCTION STEP In the first step, we determine the n-th Taylor coefficient g_n of g: g_n=g_n(x):=[y^n/n!]g(y)=A_n,1(φ_1(x),…,φ_n(x)), where A_n,1 is the first member of the double-indexed family A_n,k of multivariable Stirling polynomials in the indeterminates X_1^-1,X_1,…,X_n-k+1 with 0≤ k≤ n; see <cit.>. A fundamental (and even characteristic) property of these polynomials is their inverse relationship to the partial Bell polynomials B_n,k <cit.>, which states that ∑_j=k^nA_n,jB_j,k=nk, where nn=1, nk=0, if n≠ k (Kronecker's symbol). For further information, the reader is referred to the monograph <cit.>. In <cit.>, the collective term `Stirling polynomials of the first and second kind' (in several indeterminates) was proposed for A_n,k and B_n,k because the associated coefficient sums A_n,k(1,…,1) and B_n,k(1,…,1) turn out to be just the (signed) Stirling numbers of the first and the Stirling numbers of the second kind, respectively. For our purposes we need the following explicit representation of A_n,1 as a linear combination of monomial terms <cit.>: A_n,1-2mu=-2muX_1^-(2n-1)-20mu∑_2n-2n-1-2mu(-1)^n-1-r_1(2n-2-r_1)!/r_2!… r_n!(2!)^r_2…(n!)^r_nX_1^r_1X_2^r_2… X_n^r_n. The sum has to be taken over the set 2n-2n-1 of all partitions of 2n-2 elements into n-1 non-empty blocks, that is, of all sequences r_1,r_2,…,r_n of non-negative integers such that r_1+r_2+⋯+r_n=n-1 and r_1+2r_2+⋯+nr_n=2n-2. From equations (<ref>) and (<ref>) we now get y(x) =∑_k≥1g_k(-φ_0(x))^k/k! =∑_k≥1(-1)^kA_k,1(φ_1(x),…,φ_k(x))φ_0(x)^k/k!. Using a well-known property of the partial Bell polynomials (see, for instance, <cit.>) and observing that D^j(φ_0)(0)=f_j,0 we have φ_0(x)^k/k!=∑_n≥ kB_n,k(f_1,0,…,f_n-k+1,0)x^n/n!, and thus from (<ref>) and (<ref>) y(x)=∑_k≥1∑_n≥ k(-1)^kg_k(x)B_n,k(f_1,0,…,f_n-k+1,0)x^n/n!, where g_k(x)=A_k,1(φ_1(x),…,φ_k(x)) is well-defined as a formal power series because of φ_1≠0. Of course, the term g_k(x) hides most of the remaining complexity, which is why we do the following power series `ansatz' in a purely formal way for now: A_k,1(φ_1(x),…,φ_k(x))=∑_j≥0a_k,jx^j/j!. With this we obtain from (<ref>) y(x) =∑_n,j≥0∑_k≥1(-1)^ka_k,jB_n,k(f_1,0,…,f_n-k+1,0)x^n+j/n!j! =∑_m≥ n≥ 0mn{∑_k≥1(-1)^ka_k,m-nB_n,k(f_1,0,…,f_n-k+1,0)}x^m/m!. Since the coefficient of x^m/m! is nonzero if and only if m≥ n≥ k, we get y_m=∑_n=1^mmn{∑_k=1^n(-1)^ka_k,m-nB_n,k(f_1,0,…,f_n-k+1,0)}. This preliminary result is already suitable to calculate the first coefficients. Let us consider the cases m=1 and m=2. — It follows from (<ref>) y_1=(-1)^1a_1,0B_1,1(f_1,0)=-a_1,0f_1,0. Observing A_1,1=X_1^-1 we thus obtain a_1,0=g_1(0)=A_1,1(φ_1)(0)=φ_1(0)^-1=f_0,1^-1 and hence y_1=-f_1,0f_0,1^-1 which corresponds to the familiar identity y'(x)=-f_xf_y^-1. Already for m=2 the computational effort increases noticeably. We have y_2 =21∑_k=1^1(-1)^ka_k,1B_1,k(f_1,0,…,f_2-k,0) +22∑_k=1^2(-1)^ka_k,0B_2,k(f_1,0,…,f_3-k,0) =-2a_1,1B_1,1(f_1,0)-a_1,0B_2,1(f_1,0,f_2,0)+a_2,0B_2,2(f_1,0). Now recall B_2,1=X_2, B_2,2=X_1^2, A_2,1=-X_1^-3X_2, and observe that g'_1(x)=-φ'_1(x)φ_1(x)^-2. This yields y_2 =-2g'_1(0)f_1,0-f_0,1^-1f_2,0+g_2(0)f_1,0^2 =2φ'_1(0)/φ_1(0)^2f_1,0-f_0,1^-1f_2,0-φ_2(0)/φ_1(0)^3f_1,0^2 =2f_0,1^-2f_1,0f_1,1-f_0,1^-1f_2,0-f_0,1^-3f_0,2f_1,0^2, which of course also follows immediately from y”(x)=2f_y^-2f_xf_xy-f_y^-1f_xx-f_y^-3f_yyf_x^2 if we take x=0. The number of distinct monomials in D^n(y) grows rapidly; it is 9 for y_3, 24 for y_4, and 91159 for y_15. 
Comtet <cit.> established a generating function for this sequence and gave a table with some of its values. See also Comtet/Fiolet <cit.> and the correction made by Wilde <cit.>. § THE SECOND REDUCTION STEP In the next and final step, we will show how the general Taylor coefficient a_k,l of g_k(x) which appears in equation (<ref>) can be represented by a polynomial expression depending solely on the f_m,n. Since the derivative D^l of l-th order is a linear operator for any integer l≥0, we obtain from equation (<ref>) a_k,l =D^l(a_k)(0)=D^l(A_k,1(φ_1,…,φ_k))(0) =∑_2k-2k-1-2mu(-1)^k-1-r_1(2k-2-r_1)!/r_2!… r_k!(2!)^r_2…(k!)^r_kD^l(φ_1^s_1φ_2^r_2…φ_k^r_k)(0), where s_1:=r_1-2k+1. We evaluate the term D^l(…) by means of the general Leibniz rule as follows: D^l(φ_1^s_1φ_2^r_2…φ_k^r_k)=∑_j_1+j_2+⋯+j_k=l j_1,j_2,…,j_k≥0 l!/j_1!j_2!⋯ j_k! D^j_1(φ_1^s_1)D^j_2(φ_2^r_2)⋯ D^j_k(φ_k^r_k). Therefore, only the expressions like D^j_ν(φ_ν^r_ν)(0) remain to be reduced. Recall that (<ref>) describes the fact that D^n(φ_0^k)(0)=k!B_n,k(D^1(φ_0)(0),…,D^n-k+1(φ_0)(0)) is the n-th Taylor coefficient of φ_0(x)^k. Accordingly D^j_ν(φ_ν^r_ν)(0) =r_ν!B_j_ν,r_ν(D^1(φ_ν)(0),…,D^j_ν-r_ν+1(φ_ν)(0)) =r_ν!B_j_ν,r_ν(f_1,ν,f_2,ν,…,f_j_ν-r_ν+1,ν). Finally, we obtain an explicit formula for the coefficients a_k,m-n in (<ref>) by putting l=m-n in (<ref>) and (<ref>) and combining this with (<ref>): a_k,m-n= ∑_2k-2k-1-2mu(-1)^k-1-r_1(2k-2-r_1)!/r_2!… r_k!(2!)^r_2…(k!)^r_k{∑_j_1+⋯+j_k=m-n(m-n)!/j_1!⋯ j_k! × (r_1-2k+1)!B_j_1,r_1-2k+1(f_1,1,f_2,1,…,f_j_1-r_1+2k,1) ×∏_ν=2^kr_ν!B_j_ν,r_ν(f_1,ν,f_2,ν,…,f_j_ν-r_ν+1,ν)}. In summary, we have reached the following result: Under the assumptions made in Section 1 for the function f(x,y) and its Taylor coefficients f_m,n, the implicit function y=y(x) with f(x,y)=0 can be represented as a formal Taylor series whose coefficients are given by the equations (<ref>) and (<ref>) solely as a function of the f_m,n. 00 comt1968 Comtet, L.: Polynômes de Bell et formule explicite des dérivées successives d'une fonction implicite. C. R. Acad. Sci. Paris Série A, t. 267 (1968), 457–460. comt1974 : Advanced Combinatorics. The Art of Finite and Infinite Expansions, rev. and enlarged edition. Reidel, Dordrecht (Holland) 1974. cofi1974 Comtet, L., Fiolet, M.: Sur les dérivées successives d'une fonction implicite, C. R. Acad. Sci. Paris, Série A, t. 278 (1974), 249–251. schr2015 Schreiber, A.: Multivariate Stirling polynomials of the first and second kind. Discrete Math. 338 (2015), 2462–2484. schr2021a : Inverse relations and reciprocity laws involving partial Bell polynomials and related extensions. Enumer. Combin. Appl. 1:1 (2021), Article S2R3. schr2021b : Stirling Polynomials in Several Indeterminates. Logos Verlag, Berlin 2021. wild2008 Wilde, T.: Implicit higher derivatives, and a formula of Comtet and Fiolet. Preprint: https://arXiv.org/pdf/0805.2674, version 1, 17 May 2008. zeme2019 Zemel, S.: The Combinatorics of Higher Derivatives of Implicit Functions. Monatsh. Math. 188 no. 4 (2019), 765–784. zeme2022 : On Higher Partial Derivatives of Implicit Functions and their Combinatorics. Preprint: https://arxiv.org/abs/2212.10172, version 1, 20 Dec 2022. *
http://arxiv.org/abs/2307.01067v1
20230703144718
Localized Questions in Medical Visual Question Answering
[ "Sergio Tascon-Morales", "Pablo Márquez-Neila", "Raphael Sznitman" ]
cs.CV
[ "cs.CV" ]
Localized Questions in Medical Visual Question Answering Tascon-Morales et al. University of Bern, Bern, Switzerland {sergio.tasconmorales, pablo.marquez, raphael.sznitman}@unibe.ch Localized Questions in Medical Visual Question Answering Sergio Tascon-Morales, Pablo Márquez-Neila, Raphael Sznitman August 1, 2023 ================================================================ Visual Question Answering (VQA) models aim to answer natural language questions about given images. Due to its ability to ask questions that differ from those used when training the model, medical VQA has received substantial attention in recent years. However, existing medical VQA models typically focus on answering questions that refer to an entire image rather than where the relevant content may be located in the image. Consequently, VQA models are limited in their interpretability power and the possibility to probe the model about specific image regions. This paper proposes a novel approach for medical VQA that addresses this limitation by developing a model that can answer questions about image regions while considering the context necessary to answer the questions. Our experimental results demonstrate the effectiveness of our proposed model, outperforming existing methods on three datasets. § INTRODUCTION Visual Question Answering (VQA) models are neural networks that answer natural language questions about an image <cit.>. The capability of VQA models to interpret natural language questions is of great appeal, as the range of possible questions that can be asked is vast and can differ from those used to train the models. This has led to many proposed VQA models for medical applications in recent years <cit.>. These models can enable clinicians to probe the model with nuanced questions, thus helping to build confidence in its predictions. Recent work on medical VQA has primarily focused on building more effective model architectures <cit.> or developing strategies to overcome limitations in medical VQA datasets <cit.>. Another emerging trend is to enhance VQA performance by addressing the consistency of answers produced <cit.>, particularly when considering entailment questions (, the answer to “Is the image that of a healthy subject?" should be consistent with the answer to “Is there a fracture in the tibia?"). Despite these recent advances, however, most VQA models restrict to questions that consider the entire image at a time. Specifically, VQA typically uses questions that address content within an image without specifying where this content may or may not be in the image. Yet the ability to ask specific questions about regions or locations of the image would be highly beneficial to any user as it would allow fine-grained questions and model probing. For instance, Fig. <ref> illustrates examples of such localized questions that combine content and spatial specifications. To this day, few works have addressed the ability to include location information in VQA models. In <cit.>, localization information is posed in questions by constraining the spatial extent to a point within bounding boxes yielded by an object detector. The model then focuses its attention on objects close to this point. However, the method was developed for natural images and relies heavily on the object detector to limit the attention extent, making it difficult to scale in medical imaging applications. 
Alternatively, the approach from <cit.> answers questions about a pre-defined coarse grid of regions by directly including region information into the question (, “Is grasper in (0,0) to (32,32)?"). This method relies on the ability of the model to learn a spatial mapping of the image and limits the regions to be on a fixed grid. Localized questions were also considered in <cit.>, but the region of interest was cropped before being presented to the model, assuming that the surrounding context is irrelevant for answering this type of question. To overcome these limitations, we propose a novel VQA architecture that alleviates the mentioned issues. At its core, we hypothesize that by allowing the VQA model to access the entire images and properly encoding the region of interest, this model can be more effective at answering questions about regions. To achieve this, we propose using a multi-glimpse attention mechanism <cit.> restricting its focus range to the region in question, but only after the model has considered the entire image. By doing so, we preserve contextual information about the question and its region. We evaluate the effectiveness of our approach by conducting extensive experiments on three datasets and comparing our method to state-of-the-art baselines. Our results demonstrate performance improvements across all datasets. § METHOD Our method extends a VQA model to answer localized questions. We define a localized question for an image  as a tuple (, ), where  is a question, and  is a binary mask of the same size as  that identifies the region to which the question pertains. Our VQA model p_θ, depicted in Fig. <ref>, accepts an image and a localized question as input and produces a probability distribution over a finite set 𝒜 of possible answers. The final answer of the model â is the element with the highest probability, â = _a∈𝒜 p_θ(a|, , ). The model proceeds in three stages to produce its prediction: input embedding, localized attention, and final classification. Input embedding. The question  is first processed by an LSTM <cit.> to produce an embedding ∈^Q. Similarly, the image  is processed by a ResNet-152 <cit.> to produce the feature map ∈^C×H×W. Localized attention. An attention mechanism uses the embedding to determine relevant parts of the image to answer the corresponding question. Unlike previous attention methods, we include the region information that the mask defines. Our localized attention module (Fig. <ref> right) uses both descriptors and the mask to produce multiple weighted versions of the image feature map, '=(, , ). To do so, the module first computes an attention map ∈^G×H×W with G glimpses by applying unmasked attention <cit.> to the image feature map and the text descriptor. The value of the attention map at location (h, w) is computed as, _:hw = softmax(^(g)·ReLU(^(x)_:hw⊙^(q))), where the index :hw indicates the feature vector at location (h, w), ^(x)∈^C'× C, ^(q)∈^C'× Q, and ^(g)∈^G× C' are learnable parameters of linear transformations, and ⊙ is the element-wise product. In practice, the transformations ^(x) and ^(g) are implemented with 1×1 convolutions and all linear transformations include a dropout layer applied to its input. The image feature maps  are then weighted with the attention map and masked with  as, '_cghw = _ghw·_chw·(↓_H×W)_hw, where c and g are the indexes over feature channels and glimpses, respectively, (h, w) is the index over the spatial dimensions, and ↓_H×W denotes a binary downsampled version of  with the spatial size of . 
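A minimal PyTorch sketch of the localized attention module is shown below. The module and tensor names are ours, the softmax is taken over spatial locations separately for each glimpse, the mask is downsampled with nearest-neighbour interpolation, and the weighted maps are summed over spatial locations to a vector of size C·G; these choices are assumptions where the text leaves details open, while the dimensions (C=2048, C'=512, G=2, Q=1024) follow the implementation details given later.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalizedAttention(nn.Module):
    """Multi-glimpse attention over image features, masked by the question region."""
    def __init__(self, c_img=2048, c_q=1024, c_mid=512, glimpses=2, drop=0.25):
        super().__init__()
        self.conv_x = nn.Conv2d(c_img, c_mid, kernel_size=1)    # W^(x) as a 1x1 conv
        self.lin_q = nn.Linear(c_q, c_mid)                       # W^(q)
        self.conv_g = nn.Conv2d(c_mid, glimpses, kernel_size=1)  # W^(g) as a 1x1 conv
        self.drop = nn.Dropout(drop)                             # dropout on linear-map inputs

    def forward(self, feat, q, mask):
        # feat: (B, C, H, W) image features; q: (B, Q) question embedding;
        # mask: (B, 1, H0, W0) binary region mask at input-image resolution.
        B, C, H, W = feat.shape
        xq = self.conv_x(self.drop(feat)) * self.lin_q(self.drop(q))[:, :, None, None]
        att = self.conv_g(self.drop(F.relu(xq)))                    # (B, G, H, W)
        att = F.softmax(att.flatten(2), dim=-1).view(B, -1, H, W)   # normalize over locations
        m = (F.interpolate(mask.float(), size=(H, W)) > 0.5).float()
        # Weighted, masked feature maps: X'_{cghw} = A_{ghw} * X_{chw} * M_{hw}
        weighted = att.unsqueeze(1) * feat.unsqueeze(2) * m.unsqueeze(1)  # (B, C, G, H, W)
        return weighted.sum(dim=(-2, -1)).flatten(1)                # (B, C*G), concatenated with q later

# Shape check with the dimensions quoted in the text (random inputs, illustration only).
module = LocalizedAttention()
v = module(torch.randn(2, 2048, 14, 14), torch.randn(2, 1024),
           torch.randint(0, 2, (2, 1, 448, 448)))
print(v.shape)  # torch.Size([2, 4096])
```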
This design allows the localized attention module to compute the attention maps using the full information available in the image, thereby incorporating context into them before being masked to constrain the answer to the specified region. Classification. The question descriptor  and the weighted feature maps ' from the localized attention are vectorized and concatenated into a single vector of size C·G + Q and then processed by a multi-layer perceptron classifier to produce the final probabilities. Training. The training procedure minimizes the standard cross-entropy loss over the training set updating the parameters of the LSTM encoder, localized attention module, and the final classifier. The training set consists of triplets of images, localized questions, and the corresponding ground-truth answers. As in <cit.>, the ResNet weights are fixed with pre-trained values, and the LSTM weights are updated during training. § EXPERIMENTS AND RESULTS We compare our model to several baselines across three datasets and report quantitative and qualitative results. Additional results are available in the supplementary material. §.§ Datasets We evaluate our method on three datasets containing questions about regions which we detail here. The first dataset consists of an existing retinal fundus VQA dataset with questions about the image's regions and the entire image. The second and third datasets are generated from public segmentation datasets but use the method described in <cit.> to generate a VQA version with region questions. DME-VQA <cit.>. 679 fundus images containing questions about entire images (, “what is the DME risk grade?") and about randomly generated circular regions (, “are there hard exudates in this region?"). The dataset comprises 9'779 question-answer (QA) pairs for training, 2'380 for validation, and 1'311 for testing. RIS-VQA. Images from the 2017 Robotic Instrument Segmentation dataset <cit.>. We automatically generated binary questions with the structure “is there [instrument] in this region?" and corresponding masks as rectangular regions with random locations and sizes. Based on the ground-truth label maps, the binary answers were labeled “yes” if the region contained at least one pixel of the corresponding instrument and “no” otherwise. The questions were balanced to maintain the same amount of “yes” and “no” answers. 15'580 QA pairs from 1'423 images were used for training, 3'930 from 355 images for validation, and 13'052 from 1'200 images for testing. INSEGCAT-VQA. Frames of cataract surgery videos from the InSegCat 2 dataset <cit.>. We followed the same procedure as in RIS-VQA to generate balanced binary questions with masks and answers. The dataset consists of 29'380 QA pairs from 3'519 images for training, 5'306 from 536 images for validation, and 4'322 from 592 images for testing. Fig. <ref> shows the distribution of questions in the three datasets. §.§ Baselines and metrics We compare our method to four different baselines, as shown in Fig. <ref>: No mask: no information is provided about the region in the question. Region in text <cit.>: region information is included as text in the question. Crop region <cit.>: image is masked to show only the queried region, with the area outside the region set to zero. Draw region: region is indicated by drawing its boundary on the input image with a distinctive color. 
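For concreteness, the following sketch illustrates how localized question-answer pairs of the kind used in RIS-VQA and INSEGCAT-VQA can be generated from a segmentation label map. The function names, region-size range, and toy label map are our own; only the question template, the random rectangular regions, and the labelling rule ("yes" if the region contains at least one pixel of the instrument) follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_localized_question(label_map, instrument_id, instrument_name,
                            min_size=32, max_size=256):
    """Build one ('is there [instrument] in this region?', mask, answer) triplet."""
    h, w = label_map.shape
    rh, rw = rng.integers(min_size, max_size + 1, size=2)
    top = rng.integers(0, h - rh + 1)
    left = rng.integers(0, w - rw + 1)
    region = np.zeros((h, w), dtype=bool)
    region[top:top + rh, left:left + rw] = True
    # Answer is "yes" if the region contains at least one pixel of the instrument.
    answer = "yes" if np.any((label_map == instrument_id) & region) else "no"
    return f"is there {instrument_name} in this region?", region, answer

def balance(samples):
    """Keep equal numbers of 'yes' and 'no' samples, as done for both datasets."""
    yes = [s for s in samples if s[2] == "yes"]
    no = [s for s in samples if s[2] == "no"]
    n = min(len(yes), len(no))
    return yes[:n] + no[:n]

# Toy example: a 512x512 label map with instrument id 3 occupying a rectangular patch.
label_map = np.zeros((512, 512), dtype=np.int64)
label_map[100:300, 200:400] = 3
samples = balance([make_localized_question(label_map, 3, "bipolar forceps") for _ in range(200)])
print(len(samples), "balanced QA pairs;", samples[0][0], "->", samples[0][2])
```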
We evaluated the performance of our method using accuracy for the DME-VQA dataset and the area under the receiver operating characteristic (ROC) curve and Average Precision (AP) for the RIS-VQA and INSEGCAT-VQA datasets. §.§.§ Implementation details: Our VQA architecture uses an LSTM <cit.> with an output dimension 1024 to encode the question and a word embedding size of 300. We use the ResNet-152 <cit.> with ImageNet weights to encode images of size 448×448, generating feature maps with 2048 channels. In the localized attention block, the visual and textual features are projected into a 512-dimensional space before being combined by element-wise multiplication. The number of glimpses is set to G=2 for all experiments. The classification block is a multi-layer perceptron with a hidden layer of 1024 dimensions. A dropout rate of 0.25 and ReLU activation are used in the localized attention and classifier blocks. We train our models for 100 epochs using an early stopping condition with patience of 20 epochs. Data augmentation consists of horizontal flips. We use a batch size of 64 samples and the Adam optimizer with a learning rate of 10^-4, which is reduced by a factor of 0.1 when learning stagnates. Models implemented in PyTorch 1.13.1 and trained on an Nvidia RTX 3090 graphics card. §.§ Results Our method outperformed all considered baselines on the DME-VQA (Table <ref>), the RIS-VQA, and the INSEGCAT-VQA datasets (Table <ref>), highlighting the importance of contextual information in answering localized questions. Context proved to be particularly critical in distinguishing between objects of similar appearance, such as the bipolar and prograsp forceps in RIS-VQA, where our method led to an 8 percent point performance improvement (Table <ref>). In contrast, the importance of context was reduced when dealing with visually distinct objects, resulting in smaller performance gains as observed in the INSEGCAT-VQA dataset. For example, despite not incorporating contextual information, the baseline crop region still benefited from correlations between the location of the region and the instrument mentioned in the question (, the eye retractor typically appears at the top or the bottom of the image), enabling it to achieve competitive performance levels that are less than 2 percent points lower than our method (Table <ref>, bottom). Similar to our method, the baseline draw region incorporates contextual information when answering localized questions. However, we observed that drawing regions on the image can interfere with the computation of guided attention maps, leading to incorrect predictions (Fig. <ref>, column 4). In addition, the lack of masking of the attention maps often led the model to wrongly consider areas beyond the region of interest while answering questions (Fig. <ref>, column 1). When analyzing mistakes made by our model, we observe that they tend to occur when objects or background structures in the image look similar to the object mentioned in the question (Fig. <ref>, column 3). Similarly, false predictions were observed when only a few pixels of the object mentioned in the question were present in the region. § CONCLUSIONS In this paper, we proposed a novel VQA architecture to answer questions about regions. We compare the performance of our approach against several baselines and across three different datasets. 
By focusing the model's attention on the region after considering the evidence in the full image, we show how our method brings improvements, especially when the complete image context is required to answer the questions. Future work includes studying the agreement between answers to questions about concentric regions, as well as the agreement between questions about images and regions.
http://arxiv.org/abs/2307.01724v1
20230704135255
Calibration of the in-orbit center-of-mass of TaiJi-1
[ "Xiaotong Wei", "Li Huang", "Tingyang Shen", "Zhiming Cai", "Jibo He" ]
astro-ph.IM
[ "astro-ph.IM" ]
APS/123-QED [email protected] University of Chinese Academy of Sciences (UCAS), Beijing 100049, China International Centre for Theoretical Physics Asia-Pacific, UCAS, Beijing 100190, China Taiji Laboratory for Gravitational Wave Universe (Beijing/Hangzhou), UCAS, Beijing 100190, China University of Chinese Academy of Sciences (UCAS), Beijing 100049, China International Centre for Theoretical Physics Asia-Pacific, UCAS, Beijing 100190, China Taiji Laboratory for Gravitational Wave Universe (Beijing/Hangzhou), UCAS, Beijing 100190, China University of Chinese Academy of Sciences (UCAS), Beijing 100049, China International Centre for Theoretical Physics Asia-Pacific, UCAS, Beijing 100190, China Taiji Laboratory for Gravitational Wave Universe (Beijing/Hangzhou), UCAS, Beijing 100190, China Innovation Academy for Microsatellites, Chinese Academy of Sciences, Shanghai 201304, China [email protected] University of Chinese Academy of Sciences (UCAS), Beijing 100049, China International Centre for Theoretical Physics Asia-Pacific, UCAS, Beijing 100190, China Taiji Laboratory for Gravitational Wave Universe (Beijing/Hangzhou), UCAS, Beijing 100190, China Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China Taiji program is a space mission aiming to detect gravitational waves in the low frequency band. Taiji-1 is the first technology demonstration satellite of the Taiji Program in Space, with the gravitational reference sensor (GRS) serving as one of its key scientific payloads. For accurate accelerometer measurements, the test-mass center of the GRS must be positioned precisely at the center of gravity of the satellite to avoid measurement disturbances caused by angular acceleration and gradient. Due to installation and measurement errors, fuel consumption during in-flight phase, and other factors, the offset between the test-mass center and the center-of-mass (COM) of the satellite can be significant, degrading the measurement accuracy of the accelerometer. Therefore, the offset needs to be estimated and controlled within the required range by the center-of-mass adjustment mechanism during the satellite's lifetime. In this paper, we present a novel method, the Extended Kalman Filter combined with Rauch-Tung-Striebel Smoother, to estimate the offset, while utilizing the chi-square test to eliminate outliers. Additionally, the nonlinear Least Squares estimation algorithm is employed as a crosscheck to estimate the offset of COM. The two methods are shown to give consistent results, with the offset estimated to be dx ≈-0.19 mm, dy ≈ 0.64 mm, and dz ≈-0.82 mm. The results indicate a significant improvement on the noise level of GRS after the COM calibration, which will be of great help for the future Taiji program. Calibration of the in-orbit center-of-mass of TaiJi-1 Jibo He August 1, 2023 ===================================================== § INTRODUCTION In 2016, the LIGO collaboration made a groundbreaking discovery by detecting gravitational waves (GW) <cit.>, thereby providing a direct verification of the predictions made by Albert Einstein in his general theory of relativity a century ago <cit.>. This discovery has had a profound impact on basic scientific research worldwide. 
Space-based gravitational wave detection represents an intriguing frontier for future studies of the gravitational universe, as it can extend the reach of gravitational wave astronomy beyond that of the ground-based detectors, thereby allowing for a wider range of gravitational radiation sources to be observed <cit.>, which will provide invaluable information to deepen our understanding of the evolution of early universe and the nature of gravity. Several space-borne gravitational-wave observatories have been proposed, such as LISA <cit.>, DECIGO <cit.>, ASTROD <cit.>, Taiji <cit.>, and Tianqin <cit.>. The Taiji program, initiated by the Chinese Academy of Sciences (CAS), is a space mission that aims to detect gravitational waves (GW) in the frequency band between 0.1 mHz and 1.0 Hz, which is important in the fields of astronomy and cosmology <cit.>. The Taiji program proposes to detect GW signals using the Michelson laser interferometer principle, where each end of the interferometer contains a test mass (TM) serving as a reference body. This reference body is required to be free from spurious accelerations relative to its local inertial frame, and any spurious accelerations will affect the detection of tidal deformations caused by gravitational waves. To facilitate the development of technology for the Taiji program, a three-step road map has been proposed <cit.>. As the first step, a technology demonstrator satellite, Taiji-1 <cit.>, was launched on August 31, 2019. One of the key technologies validated by Taiji-1 is the Gravity Reference Sensor (GRS), which serves as an accelerometer and consists of sensors and electronic components <cit.>. The sensor comprises an electrode housing and a TM surrounded by the sensing electrode, as shown in Fig. <ref>. GRS has three axes, including one non-sensitive axis and two sensitive axes, the directions of +X, +Z, and +Y in Fig. <ref> correspond to the non-sensitive axis (radial direction), the first sensitive axis (flight direction), and the second sensitive axis, respectively. The sensor utilizes capacitive sensing technology to measure the disturbance acceleration of Taiji-1. The resulting data is sent to the drag-free controller, which instructs the thruster to apply force to compensate for the disturbing force. As the reference body of the future Taiji program interferometer, GRS needs to effectively mitigate the impact of all non-gravitational accelerations so that to reach the desired sensitivity of Taiji. Moreover, the accuracy of GRS is crucial for the drag-free control system, as the GRS readout results are used as inputs for issuing commands to control the spacecraft. GRS is susceptible to a range of noise sources, including Brownian noise arising from the surrounding air near TM, charge-induced noise due to TM charge accumulation, readout noise from voltage signals, temperature gradients, circuit noise, magnetic noise, self-gravity noise. A precise positioning of the test-mass of the accelerometer at the center-of-mass (COM) of the Taiji-1 satellite is crucial to suppress non-gravitational and perturbed accelerations such as angular motion-related accelerations and accelerations due to gravity gradients <cit.>. The COM of the accelerometer is adjusted to be at the COM of the satellite before launching. However, during the satellite's operation, the consumption of propellant causes the COM of the satellite to change relative to the satellite frame, leading to a shift of the accelerometer COM relative to the satellite COM over time. 
Therefore, it is crucial to regularly measure the deviation of the COM position of the two during the entire life cycle of the satellite, and use the COM adjustment mechanism to perform in-orbit adjustments so that the deviation stays within a certain range <cit.>. In Section <ref>, the principle of the COM calibration is presented. The accelerometer measurement model is described in Section <ref>. We discuss the use of the Extended Kalman filter model and Rauch-Tung-Striebel Smoother for COM calibration in Section <ref>, while the principle of outlier detection and removal is explained in Section <ref>. The performance of the COM calibration is evaluated in Section <ref>, and the results and conclusions are summarized in Section <ref>. § PRINCIPLE OF IN-ORBIT COM CALIBRATION As shown in Fig. <ref>, the primary components of the high-precision electrostatic levitation accelerometer include a free-falling TM, a set of capacitive electrode plates that surround the TM (together forming a sensitive probe), and a peripheral capacitive sensing and electrostatic feedback control circuit. This circuit enables the detection of position and attitude changes between the TM and the electrodes, as well as the measurement of acceleration through its feedback voltage. There are six sensing, control, and feedback circuits that use the same principle to measure three translational accelerations and three angular accelerations of the TM concurrently. An offset of the COM of the electrostatic accelerometer from the COM of the satellite can cause measurement disturbances, primarily attributed to the angular motion of the satellite. Therefore, an estimate of the offset can be achieved using appropriate algorithms based on the relationship between the angular motion and the measurement disturbance. In the calibration experiment, a signal of a certain frequency is injected into the attitude control to induce a magnetic torque on the satellite, which causes a disturbance in the accelerometer measurements; its magnitude is chosen to be sufficiently large that other disturbance effects, such as the solar pressure torque and the aerodynamic torque, can be disregarded. The offset can then be modeled from the readouts of the accelerometer and the star tracker. § ACCELEROMETER MEASUREMENT MODEL The accelerometer measurement output model of the relative acceleration of the TM and the electrode cage for Taiji-1 is presented as follows <cit.>, a_out=d̈+ω̇× d+2 ω×ḋ+ω×(ω× d)+a_g+ a_ng where a_out represents the theoretical measurements of the accelerometer, d represents the COM offset between the accelerometer and the satellite, and ḋ and d̈ denote the first and second time derivatives of d. ω and ω̇ are the angular velocity and the angular acceleration, respectively. a_g is the acceleration due to the gravitational gradient. The measurement model ignores the perturbations due to solid Earth tides, ocean tides, rotational deformations, the planets including the Sun and Moon, and general relativity, which can be neglected since the calibration span is short enough. Furthermore, a_ng represents the non-gravitational acceleration on the satellite, such as atmospheric drag, solar radiation pressure, and the Earth radiation pressure. During the offset calibration period, the deviation of the COM of the TM from the COM of the satellite can be approximated as a constant offset, given the short measurement time. Therefore, a_out can be expressed as, a_out=ω̇× d+ω×(ω× d)+a_g+ a_ng.
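To make the disturbance term concrete, the short sketch below evaluates ω̇ × d + ω × (ω × d) for a constant offset d. The angular rates used here are placeholders rather than Taiji-1 telemetry; the offset values are those quoted in the abstract.

```python
import numpy as np

def disturbance(omega, omega_dot, d):
    """Offset-induced part of a_out for a constant COM offset d
    (the gravity-gradient and non-gravitational terms are treated separately)."""
    return np.cross(omega_dot, d) + np.cross(omega, np.cross(omega, d))

# Offsets quoted in the abstract and placeholder angular rates (not flight values).
d = np.array([-0.19e-3, 0.64e-3, -0.82e-3])     # m
omega = np.array([1.0e-3, 0.2e-3, 1.1e-3])      # rad/s
omega_dot = np.array([1.3e-4, -0.5e-4, 0.0])    # rad/s^2
print("disturbance acceleration [m/s^2]:", disturbance(omega, omega_dot, d))
```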
Considering the relatively smooth change of the acceleration caused by non-conservative forces and the gravity gradient, it can be approximated as a linear change within a limited time span. Therefore, the calibration interval is designed to be several minutes. The attitude control system of the Taiji-1 satellite is equipped with a star tracker and a gyroscope, which measure, respectively, the attitude angle of the satellite relative to the inertial coordinate system and the angular velocity ω. Additionally, the angular accelerations ω̇ are obtained by a second-order polynomial fitting method. Assuming negligible scale-factor and misalignment errors, and after substituting ω and ω̇, the accelerometer measurement model can be expressed as follows, A_out=Ã d+α t+β+A_n, where à can be expressed as Ã=[[ -ω_y^2-ω_z^2 ω_xω_y-ω̇_z ω_xω_z+ω̇_y; ω_xω_y+ω̇_z -ω_x^2-ω_z^2 ω_zω_y-ω̇_x; ω_xω_z-ω̇_y ω_zω_y+ω̇_x -ω_y^2-ω_x^2 ]]. Here ω_i and ω̇_i (i=x,y,z) denote the angular velocity and the angular acceleration about the corresponding spacecraft axis, respectively. The linear slope is represented by α and the constant bias by β. In this study, we removed the linear effect by detrending A_out. Therefore, the model of A_out utilized in this study is, A_out=Ã d + A_n where A_n is the measurement noise diagonal matrix. § EXTENDED KALMAN FILTER MODEL AND RAUCH-TUNG-STRIEBEL SMOOTHER FOR COM CALIBRATION The Kalman filter is a highly efficient recursive filter <cit.>. The filtering theory proposed by Kalman is only applicable to linear systems. An Extended Kalman Filter (EKF) was proposed in Ref. <cit.>, and can be applied to nonlinear systems. The accelerometer measurement model used in this study is given by Eq. <ref>, and one can define the state variable as the COM offset and use the Kalman filter to estimate the offset. Here, the state vector is denoted as X = [d_x, d_y, d_z] with ḋ_i = 0 for i=x,y,z. The state equation, as derived in Ref. <cit.>, is, X̂_k=Φ_k,k-1 X_k-1. Here, k signifies the filter step and Φ_k,k-1 represents the state transition matrix from step k-1 to step k. It can be written as Φ_k,k-1 = I+F T, where I and F are the identity matrix and the zero matrix, respectively, and T is one filter step period. The output of the accelerometer can be defined as the observation equation, Z_out,k=H_kX_k+V_k, where H_k=Ã_k, and V_k is the discrete measurement noise that satisfies, E[V_k]=0, E[V_k V_j^T]=R_kδ_kj. Here, R_k denotes the variance matrix of the measurement noise. The predicted observation equation is given by, Ẑ_out,k=H_kX̂_k. The estimated covariance matrix is given by, P̂_k=Φ_k,k-1 P_k-1Φ_k,k-1^T+Q_k-1, where Q denotes the system noise variance matrix, which is assumed to be zero in this study. The Kalman gain is K_k=P̂_k H_k^T(H_kP̂_k H_k^T+R_k)^-1, and the state update after the Kalman filter step is, X_k=X̂_k+K_k(Z_out,k - Ẑ_out,k). Here, X̂_k represents the prior estimate, and X_k is the posterior estimate. The update of the error covariance matrix is, P_k=(I-K_k H_k) P̂_k(I-K_k H_k)^T+K_k R_k K_k^T.
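A minimal implementation of this filter recursion for the static offset state (Φ_k,k-1 = I, Q = 0) is sketched below on simulated data: it builds H_k = Ã_k from assumed angular rates, generates synthetic detrended measurements A_out = Ã d + noise, and runs the predict/update cycle. The modulation parameters, noise level, and initial covariance are illustrative only, not the values used for the flight data.

```python
import numpy as np

def A_tilde(w, wd):
    """H_k = A~_k built from the angular velocity w and angular acceleration wd."""
    wx, wy, wz = w
    ax, ay, az = wd
    return np.array([[-wy**2 - wz**2, wx*wy - az,      wx*wz + ay],
                     [ wx*wy + az,   -wx**2 - wz**2,   wz*wy - ax],
                     [ wx*wz - ay,    wz*wy + ax,     -wy**2 - wx**2]])

# Simulated calibration pass (placeholder modulation; not actual Taiji-1 telemetry).
t = np.arange(0.0, 600.0, 0.25)                           # s
amp, f_mod = 1e-3, 0.02                                   # rad/s, Hz
omega = np.stack([amp * np.sin(2*np.pi*f_mod*t), amp * np.cos(2*np.pi*f_mod*t),
                  np.full_like(t, 1.1e-3)], axis=1)
omega_dot = np.gradient(omega, t, axis=0)                 # in flight: 2nd-order polynomial fit
d_true = np.array([-0.19e-3, 0.64e-3, -0.82e-3])          # m, offsets quoted in the abstract
sigma = 1e-9                                              # m/s^2, placeholder noise level
rng = np.random.default_rng(1)

X, P = np.zeros(3), 1e-3 * np.eye(3)                      # zero initial state, "large" covariance
R, Q = sigma**2 * np.eye(3), np.zeros((3, 3))             # Q assumed zero, as in the text
for w, wd in zip(omega, omega_dot):
    H = A_tilde(w, wd)
    z = H @ d_true + rng.normal(0.0, sigma, 3)            # detrended measurement A_out
    X_pred, P_pred = X, P + Q                             # prediction with Phi = I
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    X = X_pred + K @ (z - H @ X_pred)
    I_KH = np.eye(3) - K @ H
    P = I_KH @ P_pred @ I_KH.T + K @ R @ K.T              # Joseph-form covariance update
print("estimated offset [mm]:", np.round(X * 1e3, 4))     # should approach (-0.19, 0.64, -0.82)
```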
The characteristic property of the Kalman filter is that the filtered residual vectors are uncorrelated, and even independent in the case of Gaussian distribution. After applying the aforementioned Kalman Filter, the state estimator of the dynamic system can be further refined using the Rauch-Tung-Striebel (RTS) smoother, as described in Ref. <cit.>. The smoothing equations are given by the following formulas, X̂_k+1=Φ_k+1,k X_k, P̂_k+1=Φ_k+1,k P_kΦ_k+1,k^T+Q_k, G_k=P_kΦ_k+1,k^TP̂_k+1^-1, X_k^S=X_k+G_k(X_k+1^S-X̂_k+1), P_k^S=P_k+G_k(P_k+1^S-P̂_k+1) G_k^T, r_k^S=Z_out,k-H_k X_k^S, R_k^S=R_k-H_k P_k^S H_k^T. The RTS smoother provides smoothed estimates of the state mean and state covariance at time step k, denoted as X_k^S and P_k^S, respectively. The smoother gain on time step k, denoted as G_k, corrects the RTS smoother estimate. The recursion is initialized at the last time step T of Kalman filter with X_T^S = X_T and P_T^S = P_T. The smoothed residuals are denoted as r_k^S, and the covariance matrix of smoothed residuals is denoted as R_k^S. To obtain a more accurate result, a combined method based on Kalman filter and the RTS Smoother (KF-RTS) and outliers removal is proposed. The data is filtered with the Kalman filter, and smoothed by the RTS smoother, during which a chi-square confidence test is performed, and outliers are removed. This KF-RTS process is iterated until there are no outliers. A possible drawback of the filter algorithm is the fact that one needs an initial value of the state vector together with its covariance matrix. This can be obtained by fitting a small number of measurements at the start of the track by a conventional least-squares fit, but this is not an elegant solution. The other possibility is to start with an arbitrary state vector and an infinite covariance matrix, i.e. a large multiple of the identity matrix. This is completely in the spirit of the filtering approach, but may lead to numerical instabilities in the computation of the gain matrix, since the infinities have to cancel in order to give a finite gain matrix. This may be difficult on a computer with a short word length. In this article the initial value of the state vector is set to zero and its covariance is chosen to be 0.001, which is large enough. The measurement noise is calculated by selecting a segment of stationary data from the corresponding data. Here, we use the nonlinear least squares (NLLS) method <cit.> for parameter estimation, as a crosscheck of the KF-RTS method. As is well-known, the key challenge for NLLS is to find the value of θ̂ that minimizes the function F(θ) F(θ) ≡1/2∑_i=1^m(f_i(θ))^2 where f_i(θ) ≡ y_i-Model(θ,input). The Levenberg-Marquardt (LM) algorithm <cit.>, which is an efficient method to minimize F(θ), is used to find the optimal parameters in this article. § DETECTION AND REMOVAL OF OUTLIERS During the Kalman filtering and RTS smoothing, each data point was utilized to obtain an optimal estimate of the state value, and this process also allows for an assessment of the quality of each data point. The chi-square value <cit.> is commonly used for assessing the quality of data and detecting outliers, which can be caused by spacecraft maneuvering, non-Gaussian noise, electronic noise, or other coupled noise, and deviate significantly from the normal sequence of measurements. The residuals of the global fit can be utilized to identify measurements with large residuals as potential outliers. 
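The smoothed residuals r_k^S and their covariance R_k^S, on which the outlier test below relies, are produced by the backward recursion just described. The following sketch implements the filter and smoother for a toy one-dimensional constant state (an illustrative example of ours; the three-dimensional offset case is identical up to matrix dimensions). The NLLS crosscheck can be performed with scipy.optimize.least_squares using method='lm'.

```python
import numpy as np

def kalman_filter(zs, phi=1.0, h=1.0, q=0.0, r=0.01, x0=0.0, p0=1.0):
    """Forward pass that stores priors and posteriors so the RTS pass can follow."""
    n = len(zs)
    x_pred, p_pred = np.zeros(n), np.zeros(n)
    x_filt, p_filt = np.zeros(n), np.zeros(n)
    x, p = x0, p0
    for k, z in enumerate(zs):
        x_pred[k], p_pred[k] = phi * x, phi * p * phi + q          # prediction
        kgain = p_pred[k] * h / (h * p_pred[k] * h + r)            # Kalman gain
        x = x_pred[k] + kgain * (z - h * x_pred[k])
        p = (1.0 - kgain * h)**2 * p_pred[k] + kgain**2 * r        # Joseph form
        x_filt[k], p_filt[k] = x, p
    return x_pred, p_pred, x_filt, p_filt

def rts_smoother(x_pred, p_pred, x_filt, p_filt, phi=1.0):
    """Backward Rauch-Tung-Striebel pass (scalar version of the equations above)."""
    n = len(x_filt)
    x_s, p_s = x_filt.copy(), p_filt.copy()
    for k in range(n - 2, -1, -1):
        g = p_filt[k] * phi / p_pred[k + 1]                        # smoother gain G_k
        x_s[k] = x_filt[k] + g * (x_s[k + 1] - x_pred[k + 1])
        p_s[k] = p_filt[k] + g * (p_s[k + 1] - p_pred[k + 1]) * g
    return x_s, p_s

# Toy example: a constant state observed in white noise.
rng = np.random.default_rng(2)
truth = 0.64e-3                                    # e.g. a y-offset, in metres
zs = truth + rng.normal(0.0, 1e-4, 400)
out = kalman_filter(zs, q=0.0, r=(1e-4)**2, x0=0.0, p0=1e-3)
x_s, p_s = rts_smoother(*out)
r_smooth = zs - 1.0 * x_s                          # smoothed residuals r_k^S for the outlier test
print("filtered end value :", out[2][-1])
print("smoothed mid value :", x_s[200])            # benefits from the full data record
```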
Kalman filters and smoother enable the exploitation of complete information locally, to determine the validity of a measurement with high probability. The smoothed residual chi-square value can serve as a useful decision criterion for data quality of the measurement, χ^2=(r_k^S)^T (R_k^S)^-1 r_k^S. It is demonstrated that the test on smoothed residual chi-square consistently outperforms the test on filtered residual chi-square <cit.>. Thus, searching for possible outliers should be carried out during smoothing, as it allows the utilization of complete parameter information. During the smoothing process, the measurement point Z_out,k can be removed from the smoothed estimate X_k^S to obtain X_k^S^*, which represents the optimal estimate of the system state at step k using all data information except Z_out,k. This optimal estimate can be utilized for the detection and removal of outliers. To remove Z_out,k from the estimate X_k^S, an inverse Kalman filter can be applied with the covariance matrix of Z_out,k taken as negative. This step of the filter is described in Ref. <cit.>, and the smoothed estimate of X_k without using Z_out,k, X_k^S^*, can be calculated as, X_k^S^*=X_k^S+K_k^S^*(A_out,k-H_kX_k^S), in which K_k^S^*=P_k^SH_k^T(H_kP_k^SH_k^T - R_k)^-1, and P_k^S^*=(I-K_k^S^*H_k) P_k^S. If Z_out,k is a valid measurement and the covariance matrix of its Gaussian readout error is known, the quantity χ^2 follows a chi-square distribution with N_z degrees of freedom, where N_z is the dimension of Z_out,k. The measurement can be identified as an outlier if the value of χ^2 exceeds a certain threshold c. This threshold is chosen as the (1-α) quantile of the corresponding χ^2 distribution, where α represents the probability of rejecting a valid measurement and is chosen to be α=0.001 in this paper. The measurement Z_out,k can be removed permanently from the list as an outlier, and the RTS smoother can continue with X_k^S^* and P_k^S^* instead of X_k^S and X_k^S for updating the estimates X_j^S when j>k. To remove all outliers, this Kalman filter and RTS smoother must be recomputed without the outliers and iterated until convergence is achieved. § RESULTS OF COM CALIBRATION The COM bias estimation in this paper is performed using the two algorithms discussed earlier. Figure <ref> presents a comparison between the linear acceleration results obtained from the NLLS estimation and the original data, while Fig. <ref> illustrates the comparison between the linear acceleration results obtained from the extended Kalman filter algorithm and the original data. Both methods exhibit excellent agreement with the original data. The results of COM calibration using the KF-RTS algorithm before removing outliers are shown in Fig. <ref>, with the shaded area indicating the range of one standard deviation. Figure <ref> illustrates the comparison between the fit results obtained using the NLLS algorithm and the experimental data with outliers removed. The final round results of the offset obtained using the KF-RTS algorithm are presented in Fig. <ref>, with the shaded area indicating the range of one standard deviation. Figure <ref> displays the comparison between the final round results obtained using the KF-RTS algorithm and the experimental data. Table <ref> presents the estimation results for the COM offset obtained using the two methods. Despite the use of different estimation algorithms, the results obtained from both methods are highly consistent, thus increasing the confidence in the accuracy of the results. 
It is worth mentioning that the value of χ^2/nof, which measures the goodness of fit, is 2.33 for the first round and 0.88 for the final round, where nof represents the number of degrees of freedom. Figure <ref> and Figure <ref> depict the comparison of the amplitude spectral density (ASD) between the calibration experimental data and flight data before and after the COM calibration. Figure <ref> shows that the peak observed in the experimental data is caused by a modulation signal, and its influence is highly suppressed after the COM calibration, which is consistent with our expectations. Figure <ref> demonstrates a significant reduction in the acceleration noise level of GRS readings in the frequency range of 0.001–0.1 Hz after the COM calibration. § DISCUSSION AND CONCLUSIONS Detecting and reducing the deviation between the COM of the inspection load and the COM of the satellite is crucial for high-precision accelerometers, as it can improve their accuracy. It is also a significant step in the development of space-based gravitational wave observatories and the achievement of their scientific objectives. Furthermore, the calibration can help obtain a high-precision gravity field, which is valuable for conducting geoscience research with greater accuracy. In this study, the offset of the COM between the inspection load and the satellite is estimated using the KF-RTS smoother. Outliers are detected using the chi-square test, and the inverse Kalman filter is applied to remove them. The LM algorithm, as a crosscheck, is used to find the optimal offset parameters for the NLLS method. The results obtained with both methods are in very good agreement, and the offset of the COM between the inspection load and the satellite is estimated with an accuracy of 𝒪(10 μm). After obtaining the COM offset, one can reduce it using the COM adjustment mechanism, and the effects of the COM offset can also be suppressed in the data processing. The COM calibration is crucial for improving the accuracy of the accelerometer, which will directly impact the detection sensitivity of the final space-based gravitational wave observatory. For space-borne gravitational-wave observatories such as Taiji-3, there are three satellites and each satellite is equipped with two TMs, which means that the COM of the TMs does not coincide with that of the satellite. However, since the TMs serve as reference bodies for the satellite, residual accelerations would arise if they moved away from their nominal positions. Therefore, it is necessary to periodically estimate or monitor the deviation of the COM of each TM from a fixed point and make adjustments when necessary. The same calibration principle and methods as discussed in this paper can be used. The Taiji-3 satellites will be equipped with higher-precision star sensors, which will enable more accurate results. § ACKNOWLEDGEMENTS This work is partially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant Nos. XDA15020700 and XDA15021100, and by the Fundamental Research Funds for the Central Universities. We acknowledge support from the National Space Science Data Center, National Science and Technology Infrastructure of China.
[1] B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116, 061102 (2016).
[2] A. Einstein, Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys.) 1916, 1 (1916).
[3] W.-R. Hu and Y.-L. Wu, Natl. Sci. Rev. 4, 685 (2017).
[4] P. Bender et al., Max-Planck-Institut für Quantenoptik, Garching (1998), https://lisa.nasa.gov/archive2011/Documentation/ppa2.08.pdf.
[5] O. Jennrich et al., ESA/SRE (2011) 19 (2011), https://sci.esa.int/documents/34985/36280/1567258287202-NGO_YB.pdf.
[6] P. Amaro-Seoane et al., arXiv:1702.00786 (2017).
[7] N. Seto, S. Kawamura, and T. Nakamura, Phys. Rev. Lett. 87, 221103 (2001).
[8] W.-T. Ni, Int. J. Mod. Phys. D 22, 1341004 (2013).
[9] D. Cyranoski et al., Nature 531, 150 (2016).
[10] J. Luo et al., Classical and Quantum Gravity 33, 035010 (2016).
[11] B. S. Sathyaprakash and B. F. Schutz, Living Reviews in Relativity 12, 1 (2009).
[12] B. F. Schutz, Classical and Quantum Gravity 16, A131 (1999).
[13] Z. Luo, Z. Guo, G. Jin, Y. Wu, and W. Hu, Results in Physics 16, 102918 (2020).
[14] Z. Luo, Y. Wang, Y. Wu, W. Hu, and G. Jin, Progress of Theoretical and Experimental Physics 2021, 05A108 (2020).
[15] Z. Cai and J. Deng, Int. J. Mod. Phys. A 36, 2140020 (2021).
[16] P. Touboul, E. Willemenot, B. Foulon, and V. Josselin, Boll. Geof. Teor. Appl. 40, 321 (1999).
[17] F. Wang, S. Bettadpur, H. Save, and G. Kruizinga, Journal of Spacecraft and Rockets 47, 371 (2010).
[18] Z. Huang, S. Li, L. Cai, D. Fan, and L. Huang, Remote Sensing 14 (2022).
[19] M. Armano et al., Phys. Rev. D 97, 122002 (2018), arXiv:1806.08581 [astro-ph.IM].
[20] F. Dong, H. Liao, C. Jia, and X. Xia, Science in China Series E: Technological Sciences 52, 1446 (2009).
[21] F. Guzman Cervantes, F. Steier, G. Wanner, G. Heinzel, and K. Danzmann, Applied Physics B 90, 395 (2008).
[22] X. Peng et al., Int. J. Mod. Phys. A 36, 2140026 (2021).
[23] R. E. Kalman, Journal of Basic Engineering 82, 35 (1960).
[24] Y. Sunahara, Journal of Basic Engineering 92, 385 (1970).
[25] R. S. Bucy and K. D. Senne, Automatica 7, 287 (1971).
[26] H. E. Rauch, F. Tung, and C. T. Striebel, AIAA Journal 3, 1445 (1965).
[27] M. Al-Baali and R. Fletcher, Journal of the Operational Research Society 36, 405 (1985).
[28] K. Levenberg, Quarterly of Applied Mathematics 2, 164 (1944).
[29] D. W. Marquardt, Journal of the Society for Industrial and Applied Mathematics 11, 431 (1963).
[30] W. G. Cochran, The Annals of Mathematical Statistics, 315 (1952).
[31] R. Fruhwirth, Nucl. Instrum. Meth. A 262, 444 (1987).
http://arxiv.org/abs/2307.01043v1

20230703142030
Component-separated, CIB-cleaned thermal Sunyaev--Zel'dovich maps from $\textit{Planck}$ PR4 data with a flexible public needlet ILC pipeline
[ "Fiona McCarthy", "J. Colin Hill" ]
astro-ph.CO
[ "astro-ph.CO" ]
http://arxiv.org/abs/2307.03314v1
20230706220138
Photoinduced Anomalous Supercurrent Hall Effect
[ "A. V. Parafilo", "V. M. Kovalev", "I. G. Savenko" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
Center for Theoretical Physics of Complex Systems, Institute for Basic Science (IBS), Daejeon 34126, Korea Rzhanov Institute of Semiconductor Physics, Siberian Branch of Russian Academy of Science, Novosibirsk 630090, Russia Novosibirsk State Technical University, Novosibirsk 630073, Russia Department of Physics, Guangdong Technion – Israel Institute of Technology, 241 Daxue Road, Shantou, Guangdong, China, 515063 Technion – Israel Institute of Technology, 32000 Haifa, Israel Guangdong Provincial Key Laboratory of Materials and Technologies for Energy Conversion, Guangdong Technion–Israel Institute of Technology, Guangdong 515063, China We predict a photoinduced Hall effect in an isotropic conventional two-dimensional superconductor with a built-in supercurrent exposed to a circularly-polarized light. This second-order with respect to the electromagnetic field amplitude effect occurs when the frequency of the field exceeds the double value of the superconducting gap. It reveals itself in the emergence of a Cooper-pair condensate flow in the direction transverse to the initial built-in supercurrent, which arises to compensate for the light-induced electric current of quasiparticles photoexcited across the gap. The initial supercurrent breaks both the time-reversal and inversion symmetries, while the presence of dilute disorder in the sample provides the breaking of the Galilean invariance. We develop a microscopic theory of the supercurrent Hall effect in the case of weak disorder and show, that the Hall supercurrent is directly proportional to the quasiparticle recombination time, which can acquire large values. Photoinduced Anomalous Supercurrent Hall Effect I. G. Savenko August 1, 2023 =============================================== Introduction.— The measurement of optical response in superconductors is a powerful experimental technique to explore their quantum properties, which are in the focus of research for more than fifty years <cit.>. Despite such an extensive period of time, the study of interaction of electromagnetic (EM) fields with superconductors remains a challenging topic since, as it is known, superconducting (SC) samples usually expel external EM fields <cit.>. Beside fundamental importance, the research on light-controlled transport of Cooper pairs is aimed at applications <cit.>. Examples of possible (but not yet implemented) light-matter interaction phenomena in superconductors include various nonlinear and higher-order response effects <cit.>, in particular, electric field-induced enhancement of SC properties <cit.>, giant second-harmonic generation under supercurrent injection <cit.>, and light-mediated superconductivity <cit.>, to name a few. Another route, which we inspect in this Letter, would be a photoinduced anomalous Hall effect – a possibility to manipulate a dc supercurrent flow by utilizing the Hall-like response. Anomalous Hall effect in non-SC samples represents a stationary transport phenomenon, which constitutes the emergence of a transverse component of electric current in the absence of an external magnetic field <cit.>. The examples are the spin Hall effect, where spin-orbit interaction plays the role of the magnetic field, the valley Hall effect <cit.> in two-dimensional (2D) Dirac materials <cit.>, and the photoinduced anomalous Hall effect, actively studied in various systems <cit.>. Is it possible to find a setup for such an effect to involve Cooper pairs? 
Before answering this question, let us briefly review the optical response theory in superconductors. As it is known, in clean single-band Bardeen-Cooper-Schrieffer (BCS) superconductors <cit.>, the presence of particle-hole and inversion symmetries does not allow for momentum-conserving optical transitions <cit.>. The optical excitations under inversion symmetry can occur if account for either impurity scattering <cit.> or multiband structure <cit.>. The first theoretical analysis of the dynamical conductivity of superconductors exposed to EM fields with the frequency exceeding the SC gap belongs to Mattis and Bardeen <cit.>, who considered the `dirty case'. They have shown, that in the absence of electron scattering on impurities (`clean case'), the optical transitions across the SC gap exerted by a uniform light are forbidden. The reason is that the hole-like and electron-like states are orthogonal to each other, and thus, they give vanishing matrix elements describing the optical transitions across the SC gap. Soon, the Mattis–Bardeen theory was tested in a number of experiments <cit.>; it was also generalized to the case of strong electron-phonon interaction <cit.> and superconductors with an arbitrary electron mean free path <cit.>. Nevertheless, the optical transition can still take place in clean superconductors with broken inversion symmetry or in the presence of spin-orbit interaction <cit.>. Inversion symmetry here can be broken in the presence of a built-in supercurrent <cit.>. However, the presence of a supercurrent is not a sufficient condition for the optical transitions to occur since the Galilean invariance in parabolic single-band superconductors suppresses the transitions <cit.>. Two ways to break the Galilean symmetry are known: (i) accounting for the non-parabolicity of electronic bands <cit.> and (ii) accounting for the electron-impurity scattering. Both these scenarios have been realized for the frequencies exceeding the double value of the SC gap. Recently, it became clear <cit.> that various relaxation time parameters may play an importantant role depending on the ratio between the EM field frequency and SC gap. It was shown, that at low frequencies, the optical conductivity in the presence of a supercurrent is proportional to the inelastic electron relaxation time, and not to the elastic one. At low frequencies and temperatures close to the SC critical temperature T_c, the inelastic time is determined by the energy relaxation processes of quasiparticles. The energy relaxation time being much larger than the elastic one may result in giant optical conductivity and power absorption in the presence of a supercurrent-carrying state <cit.>. In this Letter, we show that even in a single-band BCS superconductor <cit.>, the breaking of both inversion and time-reversal symmetries by means of a built-in supercurrent, and the breaking of Galilean invariance by (weak) electron-impurity scattering results in photoinduced transport of Cooper-pair condensate in the direction transverse to the built-in supercurrent. Hereby we define the photoinduced anomalous suppercurrent Hall effect. At the temperatures T ≪Δ with Δ the SC order parameter, the equilibrium density of quasiparticiples above the gap is negligibly small in the absence of external radiation. Therefore, we expect that the photoinduced Hall response should be determined by the inelastic quasiparticle relaxation time τ_R associated with the recombination processes of quasiparticles across the gap. 
It is important to note, that large τ_R at sufficiently low temperatures provides large values of the supercurrent, opening a way for the experimental verification of its existence. The idea behind the supercurrent Hall effect can be roughly explained using phenomenological arguments. Let us consider a 2D layer with a built-in stationary supercurrent generated either by, e.g., a transport current or an external applied magnetic field. The supercurrent is the consequence of nonzero supermomentum p_s of the Cooper pairs, associated with the phase difference of the condensate at the edges of the sample. Furthermore, if an isotropic 2D superconductor in supercurrent-carrying regime is normally illuminated by an external EM radiation characterized by the in-plain vector potential 𝒜(t)=𝒜exp(-iω t)+𝒜^*exp(iω t), the photoinduced stationary current of quasiparticles excited across the SC gap in the most general form reads as j=a_ω|𝒜|^2 p_s+b_ω[𝒜(𝒜^*· p_s)+(𝒜· p_s)𝒜^*] +ic_ω[ p_s×[𝒜×𝒜^*]]. The first term in Eq. (<ref>) gives the longitudinal (aligned along the supercurrent flow) photoexcited current density; the second term contains both the longitudinal and transverse quasiparticle current density responses; the third term gives only the transverse response. Anisotropic contributions characterized by the terms proportional to coefficients b_ω and c_ω are induced by linearly and circularly polarized radiation, respectively. Note, that Eq. (<ref>) is valid only for relatively small values of the supercurrent density, |p_s|v_F≪Δ, where v_F is electron Fermi velocity. Interested in the Hall transport, we focus on the transverse component of the current (<ref>), j_y=b_ω(𝒜_x𝒜_y^*+𝒜^*_x𝒜_y)p_s+ic_ω(𝒜_x𝒜_y^*-𝒜^*_x𝒜_y)p_s by choosing the direction of condensate flow along the x-axis, p_s=(p_s,0), as in Fig. <ref>. Using the terminology of the two-fluid model, we call j_y the photoexcited current of quasiparticles contributing to the normal component of electron fluid. It provides an accumulation of carriers of charge at the transverse boundaries of the sample. In the case of a non-SC material, such an accumulation results in the emergence of the Hall electric field. In the case of a SC material, instead, the electric field cannot penetrate the SC sample. Therefore, the transverse quasiparticle current j_y should be accompanied by an induced transverse condensate flow j_s in such a way, that the Hall electric field is compensated, thus j_s+j_y=0, and the net transverse electric current vanishes. Furthermore, even though the net current vanishes, the emergence of j_s produces the condensate phase difference on the transverse boundaries of the sample, Δϕ_H∝-j_yw, where w is the width of the 2D SC sample across p_s in y direction. The Hall-like condensate phase difference Δϕ_H directly relates to the coefficients b_ω and c_ω, which determine the quasiparticle optical response across the SC gap. In what follows, let us build a microscopic theory to find c_ω, thus considering circularly polarized EM field. Photoinduced current density.— In the absence of relaxation processes, the Hamiltonian of a 2D superconductor with an isotropic s-type BCS pairing exposed to an external EM field reads (in ħ=k_B=c=1 units) Ĥ= ([ ξ( p- p_s-e𝒜(t)) Δ; Δ -ξ( p+ p_s+e𝒜(t)) ]). Here, ξ(p)≡ξ_p=p^2/2m-E_F is the electron kinetic energy measured from the Fermi energy E_F, and Δ we assume real-valued. 
The current density operator and the current density obey the standard relations, ĵ=-δĤ/δ𝒜,        j(t)=-i {ĵ 𝒢̂^<(t,t)}, where 𝒢̂^<(t,t) is a lesser component of the Green's function defined by the matrix equation (i∂_t-Ĥ)𝒢̂(t-t')=δ(t-t') in the Nambu and Keldysh representation. Expanding Eq. (<ref>) up to the first-order with respect to p_s and to the second order with respect to 𝒜(t) yields a set of Feynman diagrams for the stationary current shown in Fig. <ref>. Clean case.— In the absence of impurities, a single-band superconductor with parabolic electron dispersion possesses the Galilean invariance with or without a built-in supercurrent. Consequently, optical absorption vanishes in both cases. Meanwhile, the second-order stationary response is a consequence of photoabsorption across the gap. Thus, it vanishes in clean case both in the absence and presence of the supercurrent. The inspection of all the diagrams in Fig. <ref> confirms this statement (See Supplemental Material <cit.>), except one diagram shown in Fig. <ref> (l). The calculation of this diagram gives a nonzero current density, which seemingly violates the Galilean invariance of the theory. To restore the Galilean invariance, one has to account for additional terms reflecting the BCS electron-electron interaction-induced vertex corrections <cit.>. The optical absorption and the photoinduced electric current (<ref>) acquire finite values when the Galilean invariance is violated <cit.>. We consider the case when it happens due to the presence of electron-impurity scattering in the sample. Another important ingredient in this problem is the relaxation processes of the photoexcited quasiparticles. These processes restrict the infinite accumulation of the photoexcited quasiparticles above the SC gap leading to a stationary regime with stationary but nonequilibrium distribution function of photoexcited quasiparticles. Impure case.— To find the photocurrent in the presence of impurities, we should account for electron scattering and relaxation processes associated with the transition of photoexcited quasiparticles to the SC condensate in the Green's function in Eq. (<ref>). The quasiparticle Green's function in the Born approximation averaged over the disorder but in the absence of the EM field and supercurrent reads ĝ^R_ϵ(p)=1/η_ϵϵ-ξ_pτ̂_z-Δη_ϵτ̂_x, η_ϵ=1+Θ[Δ-|ϵ|]/2τ_i√(Δ^2-ϵ^2)+isign(ϵ)Θ[|ϵ|-Δ]/2τ_i√(ϵ^2-Δ^2). Here, index R stands for `retarded' (advanced Green's function can be found as ĝ^A_ϵ(p)=(ĝ^R_ϵ(p))^∗), Θ(x) is a Heaviside theta-function, and τ_i is impurity scattering time. The Green's function (<ref>) can be written as a sum of two projections to the electron- and hole-like states as follows, ĝ^R_ϵ=Â_p/ϵ-ϵ_p+i/2τ_p+ B̂_p/ϵ+ϵ_p+i/2τ_p, Â_p=([ u^2 uv; uv v^2 ])+iγ([ 1/2 uv; uv 1/2 ])≡Â_0+iγΓ̂_A, B̂_p=([ v^2 -uv; -uv u^2 ])-iγ([ 1/2 -uv; -uv 1/2 ])≡B̂_0-iγΓ̂_B, u^2=1/2(1+ξ_p/ϵ_p), v^2=1/2(1-ξ_p/ϵ_p), where ϵ_p=√(ξ_p^2+Δ^2) is a quasiparticles dispersion, γ^-1=2τ_i|ξ_p| is impirity-related factor renormalizing the Bogoliubov coefficients u and v. All relaxation processes are characterized by the parameter 1/τ_p=1/τ_i|ξ_p|/ϵ_p+1/τ_R, where the first term in r.h.s. of Eq. (<ref>) describes the quasiparticle relaxation due to the scattering off impurities, while the second term accounts for the recombination back to the condensate, characterized by the parameter τ_R. 
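As a quick numerical check of the relaxation structure just introduced, the sketch below evaluates the quasiparticle dispersion, the (unrenormalized) Bogoliubov coefficients, and the total rate 1/τ_p, showing that the impurity channel switches off at the gap edge so that τ_p → τ_R there. All numerical values (Δ, τ_i, τ_R) are placeholders chosen only to exhibit this limit.

```python
import numpy as np

# Dimensionless illustration: energies in units of Delta, times in units of tau_i,
# with tau_R >> tau_i as argued in the text (all values are placeholders).
Delta, tau_i, tau_R = 1.0, 1.0, 1.0e3

xi = np.linspace(-3.0, 3.0, 601)               # xi_p measured from the Fermi energy
eps = np.sqrt(xi**2 + Delta**2)                # quasiparticle dispersion epsilon_p
u2 = 0.5 * (1.0 + xi / eps)                    # Bogoliubov coefficient u^2
v2 = 0.5 * (1.0 - xi / eps)                    # Bogoliubov coefficient v^2 (u2 + v2 = 1)
inv_tau_p = np.abs(xi) / (tau_i * eps) + 1.0 / tau_R

# At the gap edge (xi_p -> 0) the impurity term drops out and tau_p -> tau_R
i0 = np.argmin(np.abs(xi))
print(1.0 / inv_tau_p[i0])                     # ~ tau_R
print(u2[i0] * v2[i0])                         # coherence-factor product -> 1/4 at xi_p = 0
```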
In what follows, we assume that under external EM radiation, photoexcited quasiparticles are generated at the gap edge, 2Δ≫ω-2Δ>0 and possess the momentum p≈ p_F (|ξ_p|→ 0). In this case, the dominant inelastic process is the recombination across the SC gap is characterized by τ_R since τ_p=τ_iτ_Rϵ_p/(τ_R|ξ_p|+τ_iϵ_p)≈τ_R at |ξ_p|→ 0. At temperatures T≪Δ and at the bottom of the quasiparticle branch, the recombination time may acquire large values since <cit.> 1/τ_R∝T/E_Fτ_ie^-2Δ/T. Indeed, any recombination process represents a transition across the gap after forming the Cooper pair back to the SC condensate. The intensity of this process is proportional to the thermal density of quasipaticles above the gap. This density is small at low temperatures, which is reflected by the exponential factor in Eq. (<ref>). The influence of two types of relaxation processes on the Green's function (<ref>) is twofold. On one hand, both of them shift the Green's function pole in the complex plane (see the imaginary part in the denominator of Eq. (<ref>)). On the other hand, the impurity scattering processes give main contribution to renormalization of the Bogoliubov coefficients u and v since the coefficient γ is large at p≈ p_F (ξ_p→0). Therefore we replace the matrices Â_0 and B̂_0 by Â_p and B̂_p in the numerator of Eq. (<ref>). It should be noted, that although τ_R≫τ_i, taking into account τ_R in Eq. (<ref>) is still crucial. As we will see below, the photoexcited electric current is determined by a large but finite value of τ_R, while the photocurrent diverges in the limit τ_R→∞. To analyse the current in a superconductor with impurities, we again address the Feynman diagrams in Fig. <ref>. The diagrams (f)-(i) give only longitudinal contribution, while the diagrams (a)-(c) result in both longitudinal and transverse response if exposed to linearly polarized EM field. The remaining diagrams (d),(e),(j)-(l) in Fig. <ref> describe the longitudinal and transverse response in both the cases of linear and circular light polarization. For circularly-polarized light, diagrams (d) and (e) reveal the major contribution in the vicinity of the resonance ω≈2Δ  <cit.>. Let us summarize here the calculation of diagrams (d) and (e) <cit.>. An analytical expression for the current density manifested by panel (d) [diagram (e) gives the same contribution] reads j=ie^3/m∑_ϵ,pv(v𝒜^∗)(p_s𝒜) f̃_ϵ,ω^- {ĝ^R_ϵ[ĝ^R_ϵ-ω-ĝ^A_ϵ-ω]τ̂_zĝ^A_ϵ} +ie^3/m∑_ϵ,pv(v𝒜)(p_s𝒜^∗) f̃_ϵ,ω^+ {ĝ^R_ϵ[ĝ^R_ϵ+ω-ĝ^A_ϵ+ω]τ̂_zĝ^A_ϵ}, where f̃^±_ϵ,ω=(f_ϵ-f_ϵ±ω) with f_ϵ the Fermi-Dirac distribution function, and we omit the momentum dependence of the Green's functions for brevity. Performing the integration over ϵ in the limit of zero temperature gives j=-2e^3/m∑_pv(v𝒜^∗)(p_s𝒜)Sp[Â_pB̂_0τ̂_zÂ^∗_p]/(2ϵ_p-ω)^2+(1/τ_p)^2 +2e^3/m∑_pv(v𝒜)(p_s𝒜^∗)Sp[B̂_pÂ_0τ̂_zB̂^∗_p]/(2ϵ_p-ω)^2+(1/τ_p)^2. Deriving above expression, we used the resonant approximation, considering ω≈2ϵ_p≈2Δ and keeping only resonant terms in Eq. (<ref>) [see the denominators (2ϵ_p-ω)^2+(1/τ_p)^2 with ω>0]. In the framework of this approximation and at zero temperature, f_ϵ=ϵ_p→ 0 and f_ϵ-ω≈-ϵ_p→ 1. Using 𝒜=𝒜_0(1, iσ), where σ=± 1 indicates left/right polarization of EM field, and taking the traces in Eq. (<ref>) yields the transverse current density, j_y=8e^3/mσ p_s 𝒜_0^2 ∑_pγ v_y^2u^2v^2(u^2-v^2)/(2ϵ_p-ω)^2+(1/τ_p^2). The integration over momentum p here can be performed in general form <cit.>. 
However, the formula can be additionally simplified using the substitution [(2ϵ_p-ω)^2+1/τ_p^2]^-1→πτ_pδ(ω-2ϵ_p), where τ_p≈τ_R at p≈ p_F (|ξ_p|→0). Restoring the dimensionality, we find the transverse photocurrent, j_y=σe^3/2mħ^2p_s 𝒜_0^2 Δ^2/ħ^2ω^2τ_R/τ_iΘ(ħω-2Δ), which is determined by large parameter τ_R/τ_i. Formula (<ref>) is illustrated in Fig. <ref> (see thin black line) together with the transverse photocurrent described by Eq. (<ref>) for various τ_R. Discussion.— The theory is fully gauge-invariant in xy plane. Developing the formalism, we used the gauge φ=0 for the EM field with a normal incidence to the 2D plane. Thus, we considered the second-order response, which is transverse to the direction of EM wave propagation, and none of the collective modes in the SC sample is excited. In the case of oblique incidence of the external EM field, the long-range Coulomb forces appear in the sample. As a consequence, the gauge invariance requires to take into account collective excitations of the order parameter <cit.>. Furthermore, we assume that the Hall-like condensate phase difference is a consequence of the relation j_s+j_y=0 between the photoexcited quasiparticle current j_y and the induced Hall supercurrent j_s. At the same time, this relation is based on the statement that there is no electric field inside the SC sample. A more precise microscopic study shows, that the electric field penetrates into the SC sample up to the distance λ, which is of the same order as the SC coherence length <cit.>. Therefore, our theory is valid for the sample widths w≫λ. In the opposite case w≃λ, the photoinduced Hall supercurrent becomes spatially-dependent, j_s=j_s(y), across the built-in supercurrent direction in full analogy with the anomalous Hall transport in p_x+ip_y superconductors <cit.>. Such a regime requires a separate theoretical investigation. Also we assume that the given built-in supercurrent does not affect the characteristics of the relaxation processes, which play essential role in the theory. This assumption is valid since we only consider linear in p_s photocurrent Eq. (<ref>). In principal, the electron-impurity scattering time and the quasiparticle recombination time being scalars may only acquire corrections proportional to p^2_s and higher orders, which is beyond our consideration. Conclusion.— We developed a theory of a photoresponse in a single-band 2D isotropic superconductor with a built-in supercurrent, accounting for a random impurity potential, which destroys the Galilean invariance. We predicted a photoinduced second-order transport phenomenon – the emergence of a transverse photoinduced supercurrent, and demonstrated, that its magnitude is primarily determined by the quasiparticle recombination time. The supercurrent Hall effect opens a way to manipulate the direction of superconducting condensate flow via optical tools without external magnetic fields. A resent active study of the SC diode effect shows the importance of this phenomenon both from fundamental physics and from the industrial applications perspectives. Since the diode effect is fundamental for quantum logic, our work opens perspectives for modeling mesoscopic superconductors-based logical elements. Acknowledgements.— We were supported by the Institute for Basic Science in Korea (Project No. IBS-R024-D1), Ministry of Science and Higher Education of the Russian Federation (Project FSUN-2023-0006), and the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”. V.M.K. 
is grateful to O.V. Kibis for valuable discussions.
http://arxiv.org/abs/2307.00886v1
20230703093332
Detection of persistent current correlation in cavity-QED
[ "Bogdan R. Bułka" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
Institute of Molecular Physics, Polish Academy of Sciences, ul. M. Smoluchowskiego 17, 60-179 Poznań, Poland We simulated the radiative response of the cavity quantum electrodynamics (QED) inductively coupled to the ring pierced by magnetic flux, and analyzed its spectral dependence to get insight into persistent current dynamics. Current fluctuations in the ring induce changes in the microwave resonator: shifting the resonant frequency and changing its damping. We use the linear response theory and calculate the current response function by means of the Green function technique. Our model contains two quantum dots which divide the ring into two arms with different electron transfers. There are two opposite (symmetric and asymmetric) components of the persistent current, which interplay can be observed in the response functions. The resonator reflectance shows characteristic shifts in the dispersive regime and avoided crossings at the resonance points. The magnitude of the resonator frequency shift is greater for coupling to the arm with higher transparency. Fluctuations of the symmetric component of the persistent current are relevant for a wide range of the Aharovov-Bohm phase ϕ, while the asymmetric component becomes dominant close to ϕ≈π (when the total persistent current changes its orientation). Detection of persistent current correlation in cavity-QED Bogdan R. Bułka August 1, 2023 ========================================================= § INTRODUCTION Recent technological developments in circuit quantum electrodynamics (cQED) offers efficient microwave resonators formed by superconducting Josephson junctions. This technique has been successfully applied to studies charge, spin and current dynamics in various nanodevices: single spins in doped crystals, quantum dot systems, superconducting qubits, nanomechanical oscillators or magnonic nanostructures <cit.>. One could get insight into exotic condensed matter states, such as the Kondo resonance and Majorana bound states <cit.>, or to perform coherent manipulation of the Andreev states in superconducting qubits <cit.>. Here, we want to show how cQED can be applied to measure correlations of the persistent current in a metallic ring pierced by magnetic flux. If the ring size is small, smaller than the phase coherence length of electron waves (L ≪ L_ϕ), then quantum interference play a crucial role and manifest itself in electron transport. The persistent current was studied in many papers, both theoretically and experimentally (see the review <cit.> and references therein). However, an issue of fluctuations of the persistent current was undertaken only in serval theoretical papers <cit.>. In this paper we want to simulate the radiative response of the cavity QED inductively coupled to the ring with two quantum dots (2QD) connected by two different junctions, to get insight into quantum interference effects and persistent current dynamics. § 2QD RING AND PERSISTENT CURRENT Fig. <ref> presents our model of the 2QD ring pierced by magnetic flux. Electrons in the ring are described by the Hamiltonian Ĥ = ∑_i=1,2 σ=↑,↓ε_ic^†_iσ c_iσ + ∑_σ=↑,↓(t_12 c^†_2σ c_1σ+t_21 c^†_1σ c_2σ) , where the first term corresponds the quantum dots with the single-level energy, ε_i. The second term is related with electron hopping between the dots: t_12=t_L e^i ϕ/2+t_R e^-i ϕ/2, t_21=t_L e^-i ϕ/2+t_R e^i ϕ/2, t_L and t_R is the hopping through the left (L) and right (R) arm of the ring. 
The hopping parameters include the phase shift ϕ =2πΦ/Φ_0, due to presence of the magnetic flux Φ, where Φ_0=ħ/e denotes the one-electron flux quantum. The persistent current operator is given by Î_p =e/ħ∂Ĥ/∂ϕ=(Î_L+Î_R)/2, where Î_L=ie/ħ t_L ∑_σ(e^iϕ/2 c^†_2σ c_1σ-e^-iϕ/2c^†_1σ c_2σ), Î_R=ie/ħ t_R ∑_σ(e^iϕ/2 c^†_1σ c_2σ-e^-iϕ/2c^†_2σ c_1σ) are the current operators in the left and the right arm of the ring. We use the Green function technique (details are in Appendix) to caclulate the average currents, Eq.(<ref>)-(<ref>), I_L=I_R=2e/ħ t_L t_R sin(ϕ)/2Δ[f(E_+)-f(E_-)], where E_±=ϵ±Δ denote the energy levels, Δ=[δ^2+t_L^2+t_R^2+2t_L t_R cos(ϕ)]^1/2, ϵ=(ε_1+ε_2)/2, δ=(ε_1-ε_2)/2 and f(ω) is the Fermi factor. The average currents are equal in both arms, but their transparencies are different, there are various competing local currents. To have insight into these processes we rewrite Δ=[δ^2+(t_L+t_R)^2cos^2(ϕ/2)+(t_L- t_R)^2 sin^2(ϕ/2)]^1/2 and the persistent current I_p= 2e/ħ∑_ν=±f(E_ν)∂ E_ν/∂ϕ = 2e/ħ[ (t_L+t_R)^2 sin(ϕ)/8Δ-(t_L-t_R)^2 sin(ϕ)/8Δ] 10em ×[f(E_+)-f(E_-)]. There are two opposite persistent currents related with the symmetric and asymmetric coupling. In the following sections, we will show that these competing processes can play an important role in current fluctuations. § CURRENT CORRELATIONS The current-current correlation function (sometimes referred to as noise power) is defined as <cit.> S_α,α'(ω_p) ≡∫ dτ e^iω_pτ⟨{δÎ_α(τ),δÎ_α'(0)}⟩, where δÎ_α(τ)=Î_α(τ)- I_α describes current fluctuation from its average value in the arm α=L,R. The Green functions are used to determine S_α,α'(ω_p) – see <ref>. To get insight into electron dynamics in the considered circuit we perform the spectral decomposition of the spectral density of the power noise [the integrant in (<ref>)]. One finds various relaxation processes but we take the dominant ones in the limit of the weak coupling γ to the thermal environment. The zero-frequency correlator is given by S_LL(0)=2e^2/ħ2 t_L^2 t_R^2 sin^2(ϕ)/γΔ^2F_0, where S_LL(0)=S_RR(0)=S_LR(0)=S_RL(0) – compare with <cit.>. Here, we denoted the function F_0= f(E_+)[1- f(E_+)]+f(E_-)[1- f(E_-)]. Persistent current fluctuations were considered by Cedraschi and Buttiker <cit.> in a similar system, in a metallic ring with a single quantum dot. They showed that the dynamics of the ring can be considered as a two-level system and its spectral densities exhibit two peaks. In our 2QD ring the spectral densities show similar features, with two delta peaks at ω=± 2Δ related with interlevel transitions. Therefore, for ω_p=2Δ one can calculate the integral in (<ref>) and express the correlation functions as S_LL(2Δ)= 2e^2/ħ t_L^2 [Δ^2 - t_R^2 sin^2(ϕ)]/γΔ^2 F_2Δ, S_LR(2Δ)= 2e^2/ħt_L t_R [Δ^2 cos(ϕ) + t_L t_R sin^2(ϕ)]/γΔ^2 F_2Δ, S_RR(2Δ)= 2e^2/ħ t_R^2 [Δ^2 - t_L^2 sin^2(ϕ)]/γΔ^2 F_2Δ, S_tot(2Δ)= 2e^2/ħ|t_12|^2/γF_2Δ, where F_2Δ= f(E_+)[1- f(E_-)]+f(E_-)[1- f(E_+)]. Fig.<ref> presents these functions. The parameters of the 2QD ring are taken: ϵ=0, δ=0, t_L/ħ =2200 MHz, t_R/ħ =1100 MHz and γ/ħ=125 MHz (close to the parameters of a double quantum dot capacitively coupled to a transmission line resonator <cit.>). Further in the paper, all quantities will be expressed in units of MHz to get a relation with an experiment. The total current correlation (the black curve) monotonically decreases and reaches its minimum at ϕ=π, when the persistent current changes its sign. 
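The φ-dependence plotted in the figure discussed above follows directly from the closed-form expressions for I_p and the correlators at ω_p = 2Δ. The sketch below is a minimal reproduction using the quoted parameters (t_L/ħ = 2200 MHz, t_R/ħ = 1100 MHz, γ/ħ = 125 MHz, ϵ = δ = 0); the temperature entering the Fermi factors is not quoted in this excerpt, so the value used here is an assumption for illustration only.

```python
import numpy as np

# Parameters quoted in the text (in MHz, hbar = 1); T is an assumed placeholder.
tL, tR, gamma, eps0, delta = 2200.0, 1100.0, 125.0, 0.0, 0.0
T = 2000.0

def fermi(E, T):
    return 1.0 / (1.0 + np.exp(E / T))

phi = np.linspace(0.0, 2.0 * np.pi, 2001)
Delta = np.sqrt(delta**2 + tL**2 + tR**2 + 2.0 * tL * tR * np.cos(phi))
Ep, Em = eps0 + Delta, eps0 - Delta
F2D = fermi(Ep, T) * (1 - fermi(Em, T)) + fermi(Em, T) * (1 - fermi(Ep, T))

# Persistent current (units of 2e/hbar): changes sign at phi = pi
Ip = tL * tR * np.sin(phi) / (2.0 * Delta) * (fermi(Ep, T) - fermi(Em, T))

# Noise correlators at omega_p = 2*Delta (units of 2e^2/hbar, overall scale arbitrary)
S_LL = tL**2 * (Delta**2 - tR**2 * np.sin(phi)**2) / (gamma * Delta**2) * F2D
S_RR = tR**2 * (Delta**2 - tL**2 * np.sin(phi)**2) / (gamma * Delta**2) * F2D
S_LR = tL * tR * (Delta**2 * np.cos(phi) + tL * tR * np.sin(phi)**2) / (gamma * Delta**2) * F2D

# Sign change of the cross-correlator: numerically the crossings sit at cos(phi) = -tR/tL
crossings = phi[:-1][np.diff(np.sign(S_LR)) != 0]
print(np.round(crossings / np.pi, 3))   # ~ [0.667, 1.333] for tL = 2*tR
```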
The auto-correlation functions, S_LL(2Δ) and S_RR(2Δ), are non-monotonic functions with two minima at ϕ_±=π±ϕ_0, where ϕ_0=2 arctan(√(|t_L+t_R|/|t_L-t_R|) for the considered case with δ=0. The cross correlation function S_LR(2Δ) changes its sign exactly at the same values ϕ_±. It means that two opposite components of the persistent current compete with each other and their dynamics can be seen in the current correlation functions. The asymmetric part of the persistent current dominates in the noise power in the range ϕ_-<ϕ<ϕ_+, while the symmetric part is relevant outside this range. The next section is devoted to the detection of these processes. § RESPONSE FUNCTION §.§ Dervitation of the response function We assume that our 2QD ring is inductively coupled to a superconducting microwave resonator (Fig.<ref>) and their interaction is described within a semiclassical the input-output theory. <cit.> For the single sided resonator one can calculate the reflection coefficient <cit.> S_11≡a_out/a_in =-ω_p - ω_r + i (κ_i - κ_e)/2- g^2χ^r(ω_p)/ω_p - ω_r + i(κ_i +κ_e)/2-g^2χ^r(ω_p), where ω_r-ω_p is the detuning of the resonator frequency from the probe frequency ω_p (the cavity drive frequency), κ_i and κ_e denote internal and external resonator dissipation rates. χ^r denotes the response function and g is the coupling of the nanosystem with the resonator. In an experiment the complex reflection coefficient S_11=|S_11|e^iφ=I +i Q is determined, with its amplitude |S_11| as well as the phase φ or equivalently the field quadratures I and Q. For the weak coupling one can assume that the microwave cavity causes small fluctuations of the magnetic flux δΦ_α(t), which are described by the perturbation Hamiltonian Ĥ'(t)=∑_αÎ_α δΦ_α(t). Using the linear response theory one gets the current-current response function (the current susceptibility) as χ^r_αα'(ω_p)≡ -i/ħ∫ dτ e^iω_pτθ(τ) ⟨[Î_α(τ),Î_α'(0)]⟩. This is the key quantity of interest in measurement of the reflection coefficient. §.§ Response of the 2QD ring The response function χ^r_α,α'(ω_p) for our 2QD ring is calculated by means of the Green function technique – see <ref>. Next, we calculate the resonator reflectance |S_11| as a map with respect of (ϕ, ω_p) – the result is exhibited in Fig. <ref> for the left and the right arm of the ring (the upper and the bottom plot, respectively). We take the resonator parameters κ_i/2π= 1 MHz, and κ_e/2π= 4 MHz (as in the experiment <cit.>) and the coupling g^2=λ^2Φ_0^2 with λ=0.05 <cit.>. For the considered 2QD ring the excitation energy is in the range: 2200 MHz < 2Δ/ħ < 6600 MHz. We assume the resonator frequency ω_r/2π= 5000 MHZ, which crosses the excitation spectrum of the 2QD ring (shown as the green dashed curve in the figure). Fig. <ref> shows characteristic shifts in the dispersive regime and avoided crossings at the resonance points. Notice that response of the left and the right arm are different, due to their different transparencies (large and small in the L and R arm, respectively) – compare with the current correlations S_LL and S_RR in Fig. <ref>. From the reflection spectra, Eq.(<ref>), one can also extract the frequency shift δω_r^αα' and the damping ratio δκ^αα' of the resonator. They are related with the real and the imaginary part of the response function δω_r^αα' =(g^2/ħ)[χ^r_αα'(ω_r)], δκ^αα' =-(g^2/ħ)[χ^r_αα'(ω_r)]. Fig. <ref> shows these quantities calculated for various components of the response function. 
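Since the microscopic current susceptibility requires the Green-function calculation of the Appendix, the sketch below only illustrates how the reflection coefficient and the shift/damping relations above are evaluated once χ^r is known. The single-Lorentzian form of χ^r, its resonance frequency and width, and the numerical value of g² are assumptions made purely for illustration; κ_i, κ_e and ω_r are the values quoted above.

```python
import numpy as np

# Resonator parameters quoted in the text (angular frequencies, in 2*pi*MHz)
omega_r = 2 * np.pi * 5000.0
kappa_i = 2 * np.pi * 1.0
kappa_e = 2 * np.pi * 4.0
g2 = (2 * np.pi * 50.0) ** 2                 # assumed coupling g^2 (placeholder value)

def chi_r(omega_p, omega_q=2 * np.pi * 4500.0, Gamma=2 * np.pi * 125.0, A=1.0):
    """Assumed single-Lorentzian stand-in for the ring's current susceptibility."""
    return A / (omega_p - omega_q + 1j * Gamma)

def S11(omega_p):
    chi = chi_r(omega_p)
    num = -((omega_p - omega_r) + 1j * (kappa_i - kappa_e) / 2 - g2 * chi)
    den = (omega_p - omega_r) + 1j * (kappa_i + kappa_e) / 2 - g2 * chi
    return num / den

# Reflectance versus probe frequency, and dispersive shift / extra damping at omega_r
omega_p = 2 * np.pi * np.linspace(4990.0, 5010.0, 2001)
reflectance = np.abs(S11(omega_p))
d_omega_r = g2 * chi_r(omega_r).real         # delta omega_r = (g^2/hbar) Re chi^r(omega_r)
d_kappa = -g2 * chi_r(omega_r).imag          # delta kappa  = -(g^2/hbar) Im chi^r(omega_r)
print(d_omega_r / (2 * np.pi), d_kappa / (2 * np.pi))   # both in MHz (hbar = 1)
```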
Close to the resonance one can see a characteristic Lorentzian shape of the response function, with δω_r^αα'=0 and a maximum for δκ^αα'. The amount of the frequency shift δω_r^LL and δω_r^RR is different (an order of magnitude smaller in the R arm). It is seen that δω_r^RR is a non-monotonic function and reaches its minimum at ϕ_±=π±ϕ_0. This is manifestation of interplay of two opposite persistent currents in the 2QD system. We also present the plot for δω_r^LR, corresponding to the cross current correlations. This is an interesting result, showing a negative frequency shift in the region ϕ_-<ϕ<ϕ_+ where the asymmetric component of the persistent current dominates. Unfortunately, a direct measurement of the cross current response is impossible. One can measure the response for the total current in a circuit with a double coupling to both arms of the ring. Using δω_r^LL and δω_r^RR (determined in earlier measurements) one can extract the cross correlation component. § SUMMARY We have shown that two opposite persistent currents are present in the asymmetric ring and their interplay can be detected by measurement the radiative response of the inductively coupled microwave resonator in cQED. The symmetric component dominates in a wide range of ϕ, while the asymmetric component is relevant close to ϕ≈π and its range increases with the asymmetry factor. Their fluctuations play different roles, which can be seen in the response functions, in the frequency shift of the resonator δω_r^αα' and its damping factor δκ^αα'. It is worth to mention that the considered model in many aspects is similar to the model of the Josephson junction with a quantum dot inside <cit.>. There, one can expect also interplay of two opposite Josephson currents which can be detected by cQED. § GREEN FUNCTION METHOD To calculate the average current and the current-current correlation functions we use the lesser and the greater Green function: Ĝ^<(ω)= f(ω)[Ĝ^a(ω)-Ĝ^r(ω)] and Ĝ^>(ω)= [f(ω)-1][Ĝ^a(ω)-Ĝ^r(ω)] , where f(ω) denotes is the Fermi factor, while the retarded and advanced Green functions are expressed as Ĝ^r,a(ω)= [[ ħω±iγ -ε_1 t_21; t_12 ħω±iγ -ε_2 ]]^-1. Here, a small parameter γ is introduced to take into account thermal dissipation processes in the system. §.§ Calulation of power noise The current-current correlation function S_αα', Eq.(<ref>), is a sum of two-particle averages. These averages are decoupled by means of Wick’s theorem to products of single particle averages, and the result is S_αα'= (4e^2/ħ)[t_12^αt_12^α' a_12,12- t_12^αt_21^α' a_12,21 -t_21^αt_12^α' a_21,12 +t_21^αt_21^α' a_21,21], where t_12^L=t_L e^i ϕ/2, t_12^R=t_R e^-i ϕ/2, t_21^L=t_L e^-i ϕ/2, t_21^R=t_R e^-i ϕ/2 and t_12=t_12^L+t_12^R, t_21=t_21^L+t_21^R. Using the Green functions we express the coefficients as a_12,12= ∫dω/2π[G^<_2,1(ω) G^>_2,1(ω_+) +G^<_2,1(ω_+) G^>_2,1(ω)], a_21,21= ∫dω/2π [G^<_1,2(ω) G^>_1,2(ω_+) +G^<_1,2(ω_+) G^>_1,2(ω)], a_12,21= ∫dω/2π [G^<_1,1(ω) G^>_2,2(ω_+) +G^<_1,1(ω_+) G^>_2,2(ω)], a_21,12= ∫dω/2π [G^<_2,2(ω) G^>_1,1(ω_+) +G^<_2,2(ω_+) G^>_1,1(ω)], where G^<(>)_i,j denote the elements of the lesser (greeter) Green function and ω_+=ω+ω_p. §.§ Calulation of response function The current susceptibility χ^r_αα', Eq.(<ref>), is also determined by means of the Green function technique, following Ref. <cit.>. In principle, this function can be calculated by other techniques (e.g. see <cit.>). 
Its Fourier transform can be expressed as χ^r*_α,α'= (4e^2/ħ^2)[t_12^αt_12^α' b_12,12- t_12^αt_21^α' b_12,21 -t_21^αt_12^α' b_21,12 +t_21^αt_21^α' b_21,21], where b_12,12= -i∫dω/2π G^<_12(ω)[ G^r_12(ω_+) + G^a_12(ω_-)], b_12,21= -i∫dω/2π[G^<_11(ω) G^r_22(ω_+) +G^<_22(ω) G^a_11(ω_-)], b_21,12= -i∫dω/2π[G^<_22(ω) G^r_11(ω_+) +G^<_11(ω) G^a_22(ω_-)], b_21,21= - i∫dω/2πG^<_21(ω)[ G^r_21(ω_+) + G^a_21(ω_-)] and ω_±=ω±ω_p. 99 Cottet2017 A. Cottet, M. C. Dartiailh, M. M. Desjardins, T. Cubaynes, L. C. Contamin, M. Delbecq, J. J. Viennot, L. E. Bruhat, B. Doucot, and T. Kontos, https://doi.org/10.1088/1361-648x/aa7b4d J. Phys.: Condens. Matter 29 (2017) 433002. Burkard2020 G. Burkard, M. J. Gullans, X. Mi, and J. R. Petta, https://doi.org/10.1038/s42254-019-0135-2Nat. Rev. Phys. 2 (2020) 129-140. Clerk2020 A. A. Clerk, K. W. Lehnert, P. Bertet, J. R. Petta, and Y. Nakamura, https://doi.org/10.1038/s41567-020-0797-9 Nat. Phys. 16, 257 (2020). Blais2021 A. Blais, A. L. Grimsmo, S. M. Girvin, and A. Wallraff, https://link.aps.org/doi/10.1103/RevModPhys.93.025005Rev. Mod. Phys. 93 (2021) 025005. Janvier2015 C. Janvier, L. Tosi, L. Bretheau, Ç. Ö. Girit, M. Stern, P. Bertet, P. Joyez, D. Vion, D. Esteve, M. F. Goffman, H. Pothier, and C. Urbina, https://www.science.org/doi/abs/10.1126/science.aab2179 Science 349 (2015) 1199–1202. Bleszynski2009 A. C. Bleszynski-Jayich, W. E. Shanks, B. Peaudecerf, E. Ginossar, F. von Oppen, L. Glazman, and J. G. E. Harris,https://www.science.org/doi/abs/10.1126/science.1178139 Science 326 (2009) 272–275. Cedraschi2000 P. Cedraschi, V. V. Ponomarenko, and M. Büttiker, https://link.aps.org/doi/10.1103/PhysRevLett.84.346Phys. Rev. Lett. 84 (2000) 346–349. Cedraschi2001 P. Cedraschi, and M. Büttiker, https://link.aps.org/doi/10.1103/PhysRevB.63.165312Phys. Rev. B 63 (2001) 165312. Moskalets2010 M. Moskalets, https://doi.org/10.1063/1.3521568Low Temp. Phys. 36 (2010) 982–989. Semenov2010 A. G. Semenov, and A. D. Zaikin, http://dx.doi.org/10.1088/0953-8984/22/48/485302J. Phys.: Condens. Matter 22 (2010) 485302. Semenov2011 A. G. Semenov, and A. D. Zaikin, https://link.aps.org/doi/10.1103/PhysRevB.84.045416Phys. Rev. B 84 (4) (2011) 045416. Komnik2014 A. Komnik, and G. W. Langhanke, http://link.aps.org/doi/10.1103/PhysRevB.90.165107Phys. Rev. B 90 (2014) 165107. BlanterButtiker Y. Blanter, and M. Büttiker, https://www.sciencedirect.com/science/article/pii/ S0370157399001234Phys. Rep. 336 (2000) 1–166. Stockklauser2015 A. Stockklauser, V. F. Maisi, J. Basset, K. Cujia, C. Reichl, W. Wegscheider, T. Ihn, A. Wallraff, and K. Ensslin, https://link.aps.org/doi/10.1103/PhysRevLett.115.046802Phys. Rev. Lett. 115 (2015) 046802. Walls2008 D. Walls and G. J. Milburn, Input–Output Formulation of Optical Cavities https://doi.org/10.1007/978-3-540-28574-8_7(Springer, Berlin, 2008), pp.127-141. Kratochwil2021 B. Kratochwil, J. V. Koski, A. J. Landig, P. Scarlino, J. C. Abadillo-Uriel, C. Reichl, S. N. Coppersmith, W. Wegscheider, M. Friesen, A. Wallraff, T. Ihn, and K. Ensslin, https://link.aps.org/doi/10.1103/PhysRevResearch.3.013171 Phys. Rev. Research 3 (2021) 013171. Hays2021 M. Hays, Realizing an Andreev Spin Qubit: Exploring Sub-gap Structure in Josephson Nanowires Using Circuit QED https://books.google.pl/books?id=DiSMzgEACAAJ(Springer Theses, Springer International Publishing, 2021). Metzger2021 C. Metzger, S. Park, L. Tosi, C. Janvier, A. A. Reynoso, M. F. Goffman, C. Urbina, A. Levy Yeyati, and H. Pothier, https://link.aps.org/doi/10.1103/PhysRevResearch.3.013036Phys. Rev. Res. 
3 (2021) 013036. Hermansen2022 C. Hermansen, A. Levy Yeyati, and J. Paaske, https://link.aps.org/doi/10.1103/PhysRevB.105.054503Phys. Rev. B 105 (2022) 054503. Martin-Rodero2011 A. Martín-Rodero, and A. L. Yeyati, https://doi.org/10.1080/00018732.2011.624266Adv. Phys. 60 (2011) 899–958. Cottet2020 A. Cottet, Z. Leghtas, and T. Kontos, https://link.aps.org/doi/10.1103/PhysRevB.102.155105 Phys. Rev. B 102 (2020) 155105.
http://arxiv.org/abs/2307.02705v1
20230706004752
Integral fluctuation theorems and trace-preserving map
[ "Zhiqiang Huang" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "quant-ph" ]
[email protected] State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China The detailed fluctuation theorem implies the symmetry on the generating function of the entropy production probability. The integral fluctuation theorem follows immediately from this symmetry and normalization of the probability. In this paper, we rewrite the generating function with complete positive maps and show that the integral FT is determined by the trace-preserving property of constructed maps. We demonstrate the convenience of this framework by discussing the eigenstate fluctuation theorem and the heat exchange between two systems. This set of methods is also applicable to the generating function of quasi-probability, and we find that the Petz recovery map can arise naturally from this framework. We also briefly discuss generating functions for multitime processes, which may be helpful in studying the generalization of the fluctuation-dissipation theorem. Integral fluctuation theorems and trace-preserving map Zhiqiang Huang August 1, 2023 ====================================================== § INTRODUCTION As the generalized second law of thermodynamics from a microscopic perspective, the fluctuation theorem (FT) is of fundamental importance in non-equilibrium statistical mechanics. The FT focuses mainly on the irreversibility of entropy production, which implies the fundamental symmetry G(λ)=G^tr(i-λ) on the generating function <cit.>. Taking λ=i, this symmetry gives the integral FT, which implies the generalized second law. Furthermore, this symmetry can also be used to derive the generalized fluctuation-dissipation relation. Therefore, the calculation of the generator function can yield many important results. The integral FT is directly related to the normalization of the backward processes, which in turn depends on the trace preservation (TP) property of the backward map. In this sense, the generating function should be determined by the properties of some mapping. The modified propagator approach is one such method <cit.>. It incorporates measurements and Fourier transforms into the modified propagator. Here we try to use the process tensors and their equivalents <cit.> to help reconstruct the form of the generating function. In these approaches, the evolution and the measurements are quite different parts. Therefore, we will incorporate measurements and Fourier transforms into quantum operations. These quantum operations together with the evolution form a complete positive (CP) map. If this map is trace preserving when λ=i, then G(i)=1 and the integral FT holds. Since this map is completely determined by the forward evolution and the measurements, and is related to an inverse map, it can be used to construct the backward processes. To illustrate the convenience of this framework, we will discuss the integral FT for a small system coupled to a substantially larger but finite bath. As already proved by <cit.>, the integral FT holds in both of the long- and short-time regimes, even when the initial state of the bath is a single energy eigenstate of a many-body system. Its proof is quite complex and uses various forms of eigenstate thermalization hypothesis (ETH). This prevents further development of the approaches used therein and also makes it difficult to tighten the errors. We will use the framework here to give a clearer and simpler proof. 
The gist of the proof is as follows: The error of the integral FT is also determined by the interaction between the system and the environment. According to the Lieb-Robinson bound <cit.> or the operator growth hypothesis <cit.>, the influence range of the interaction is limited under the (imaginary) time evolution. For the short-time regime, the expanded operator is still unable to distinguish the pure state of the many-body system from the canonical ensemble. For the long-time regime, the expanded energy width is limited, and the expanded operator is unable to distinguish these states. Another important class of fluctuation theorems concerns the heat exchange between two baths <cit.>. As can be expected, the integral FT should hold in the short-time regime, even if the initial states of the two baths are single energy eigenstate of the many-body systems. As for the long-time regime, the temperature difference between the two baths made it impossible to determine the steady state of the whole system. The influence of the interaction is therefore considerable. We don't think that the fluctuation theorems hold in the long-time regime. The FTs for quantum channel <cit.> has established another general framework of quantum FTs. Using the Petz recovery map, it gives the detailed FT for two-point measurement quasiprobabilities. Since the quantum channel can emerge naturally from the open system formalism, there must be some relationship between the quantum channel FTs and the unitary evolution FTs. If the global unitary operation U has a global fixed point, <cit.> shows that both methods can give the same entropy production. Using the framework of this paper, we find that the Petz recovery map can be obtained naturally by considering the generating function of quasi-probabilities. This provides a different perspective on the relationship between the two methods. The advantage of the process tensor is that it is convenient to study multitime processes and multipoint measurements. The FTs beyond two-point measurements has received extensive attention <cit.>. Since the memory effect can lead to a negative entropy production rate <cit.>, research in this area helps to deepen the understanding of the second law of thermodynamics. The integral FT under multitime processes may also be helpful to study the generalization of the fluctuation-dissipation theorem for the n-point <cit.>. The paper is organised as follows. In <ref>, we first briefly introduce the operator-state formalism. Then, we rewrite the generating function with complete positive maps and briefly discuss the property of a key mapping in the thermodynamic fluctuation theorem for open systems. In <ref> we use the above mapping to discuss the eigenstate fluctuation theorem. In <ref> we discuss the heat exchange between two systems. In <ref>, we discuss the integral FT for the quantum channel and the multitime process. The section <ref> concludes. § THE INTEGRAL FLUCTUATION THEOREM AND TRACE-PRESERVING MAP §.§ Preliminaries Let us introduce the operator state |O) for an operator O on the Hilbert space ℋ_S of quantum states of a system S, which belongs to a new Hilbert space 𝖧 with an inner product defined by (O_1|O_2):=Tr(O_1^† O_2). An orthonormal basis for 𝖧 is |Π_ij) where Π_ij=|i⟩⟨j|, as one can check that (Π_kl|Π_ij)=δ_ikδ_jl. The completeness of this basis {|Π_ij)} is 1_𝖧=∑_ij|Π_ij)(Π_ij|. The conjugate relation reads (O_1|O_2)^†:=Tr(O_1^† O_2)^†=Tr(O_2^† O_1) =(O_2|O_1)=(O_1^†|O_2^†). 
The unitary superoperator acting on the operator state gives 𝒰|O):=|U O U^†). The conjugate relation shows that (O_1|𝒰|O_2)^†=Tr(O_1 U O_2^† U^† ) =(O_1^†|𝒰|O_2^†) =Tr(O^†_2 U^† O_1 U )= (O_2 |𝒰^† |O_1). §.§ The generating function and completely positive map In Crooks FT, the forward entropy production is exponentially more likely than the reverse P_F(Δσ)/P_B(-Δσ)=e^Δσ, where P_F is the forward probability distribution of entropy production and P_B is the backward one. They are given by the joint probability of two-point measurement P_F(Δσ)=∑_σ_t,σ_0δ(Δσ -(σ_t-σ_0))P_F(σ_t,σ_0). Its generating function G_F(λ):=∫_-∞^∞d Δσ e^iλΔσP_F(Δσ) is often used. The Crooks FT implies the fundamental symmetry on the generating function G_F(λ)=∫_-∞^∞d Δσ e^iλΔσ+ΔσP_B(-Δσ)=G_B(i-λ). An integral FT immediately follows from the normalization of P_B ⟨e^-Δσ|=⟩ G_F(i)= G_B(0)=1. Combining it with Jensen's inequality ⟨e^-X|≥⟩e^-⟨X|⟩, we can get a generalization of the second law of thermodynamics ⟨Δσ|≥⟩ln⟨e^-X|=⟩0. The joint probability of two-point measurement can be obtained from the process state <cit.> P_F(Δσ)=∑_σ_t,σ_0δ(Δσ -(σ_t-σ_0))(O^(σ_t,σ_0)|𝒮) where |𝒮)=𝒰 |ρ_0⊗Φ). The corresponding generating function can also be rewritten as G_F(λ):=∑_σ_0,σ_t (e^iλ(σ̂_t-σ̂_0)|Π_σ_t⊗Π_σ_0)(O^(σ_t,σ_0)|𝒮). The factor e^-iλσ̂ rescaling the measurement results indeed. So, as we'll point out next, it's related to the rescaling map. Here we only consider the cases where the real part of λ is zero, it is easy to verify that ∑_σ_0 (e^-iλσ̂_0| Π_σ_0)| Π_σ_0)( Π_σ_0| =𝒥_e^- σ̂_0^iλ/2∘ℳ_σ_0, where ℳ_σ_0=∑_σ_0 | Π_σ_0)( Π_σ_0| is a dephasing map and 𝒥^α_O(·):=O^α (·)O^α† is the rescaling map. The λ with a non-trivial real part can also induce a superoperator, but the corresponding superoperator will not be CP, which is not the focus of this paper. It is worth pointing out that the rescaling map here is very similar to the re-weighting approach used in <cit.>, where this approach is used to prepare the thermal state. With <ref>, we can rewrite <ref> as G_F(λ)=(I|𝒥_e^- σ̂_t^-iλ/2∘ℳ_σ_t∘𝒰(t)∘𝒥_e^- σ̂_0^iλ/2∘ℳ_σ_0|ρ(0)) or the conjugated form G_F(λ)=(I|𝒥_ρ(0)^1/2∘ℳ^†_σ_0∘𝒥_e^- σ̂_0^iλ/2†∘𝒰^†(t)|e^i λσ̂_t)^† where ℳ_σ_t is omitted since it shares the same basis with e^i λσ̂_t. If {Π_σ_0} is the diagonal basis of ρ(0), then ℳ_σ_0 can also be omitted. In <ref>, the generating function is completely rewritten with operator state and completely positive maps. As will be shown later, the integral FT will be determined by the TP property of these maps. Consider an isolated, possible driven, quantum system evolves according to the unitary evolution. Take σ̂(t)=-lnρ̂_S(t), then we have e^-iλσ̂_0=ρ̂^iλ_S(0). In such cases, the ℳ_σ_0 can be omitted 𝒥_e^- σ̂_0^iλ/2∘ℳ_σ_0|ρ_0)=𝒥_e^- σ̂_0^iλ/2|ρ_0). Combining <ref>, we get G_F(i)=(I|𝒥_e^- σ̂_t^1/2∘𝒰∘𝒥_e^- σ̂_0^-1/2 | ρ_0) =(ρ̂_S(t)|𝒰(t)|I)=1. Therefore, for a closed system, the entropy production ln (ρ̂_S(0)/ρ̂_S(t)) satisfies integral FT. Since ⟨-lnρ̂_S(t)|=⟩S(ρ_S(t)), the average of this entropy production is the same as the change of the Gibbs-von Neumann entropy of the system. In addition, it is easy to find that (I|𝒥_ρ(0)^1/2∘ℳ^†_σ_0∘𝒥_e^- σ̂_0^-1/2†=(I|. So according to <ref>, we have G_F(i)=(I|𝒰^†(t)|ρ̂_S(t))^†=1, where 𝒰^†(t) is exactly the time-reversed evolution and integral FT are guaranteed by the TP property of the backward evolution, or it can also be attributed to the TP property of constructed map 𝒥_ρ(0)^1/2∘ℳ^†_σ_0∘𝒥_e^- σ̂_0^-1/2†∘𝒰^†(t). 
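Because the closed-system integral FT above rests on trace preservation alone, it can be verified numerically for an arbitrary, randomly drawn unitary and initial state. The following is a minimal NumPy sketch of the two-point-measurement construction of G_F(i) = ⟨e^{-Δσ}⟩ with σ̂(t) = -ln ρ̂_S(t); the dimension and random seeds are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random initial state rho_S(0) and a random "driving" unitary U(t)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0).real
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
rhot = U @ rho0 @ U.conj().T

# Two-point measurement of sigma_hat = -ln rho_S in the eigenbases of rho_S(0), rho_S(t)
p0, V0 = np.linalg.eigh(rho0)
pt, Vt = np.linalg.eigh(rhot)

G = 0.0
for i in range(d):
    for j in range(d):
        P_joint = p0[i] * abs(Vt[:, j].conj() @ U @ V0[:, i]) ** 2   # P_F(sigma_t, sigma_0)
        dsigma = -np.log(pt[j]) + np.log(p0[i])                      # sigma_t - sigma_0
        G += P_joint * np.exp(-dsigma)

print(G)   # = 1 up to rounding: G_F(i) = <exp(-Delta sigma)> = 1 for a closed system
```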
Now consider an open quantum system S interacting with an environment E; S and E form a closed system with unitary evolution. Supposing there is no correlation between the initial system and the environment, we can set σ̂(t)=-lnρ̂_S(t)-lnρ̂_E, and then we have e^-iλσ̂_0=ρ̂^iλ_S(0)⊗ρ̂^iλ_E. Similar to the circumstances of closed systems, it is easy to prove that open systems also satisfy integral FT G_F(i)=(I|𝒰^†_SE(t)|ρ̂_S(t)⊗ρ̂_E)^†=1. This integral FT is completely general, but only contains quantities like entropy, which require a lot of information. Thermodynamic quantities such as work, free energy, and heat did not appear, so the connection to thermodynamics could not be established. Only by assuming that the initial state of the heat bath is the canonical ensemble ρ̂_E=e^-βĤ_E/Z_E can we set σ̂(t)=β (Ĥ_E- F_E)-lnρ̂_S(t). If we do not make any assumptions about the initial environment state, but still set σ̂(t) as <ref>, then we have G_F(λ)={𝒥_ρ_S(t)⊗ρ^can_E^-iλ/2∘𝒰_SE(t) ∘𝒥_ρ_S(0)⊗ρ^can_E^iλ/2∘ℳ_σ_0^E[ρ_S(0)⊗ρ_E(0)]}. Since the environment ρ_E(0) can deviate from the canonical ensemble ρ^can_E, the integral FT will have errors and G_F(i) can deviate from 1. The deviation of the environment state is not a sufficient condition for the violation of integral FT, it is also related to the system-environment interaction. For nondriven systems, the Hamiltonian of the total system can be written in general as H=H_S+H_E+H_I. If there is no system-environment interaction H_I=0, then the overall generating function can be separated into a system part and an environment part G_F(λ)= G^S_F(λ) G^E_F(λ). Since the system state does not deviate, we have G^S_F(i)=1. And the environment part gives G^E_F(λ)={𝒥_ρ^can_E^-iλ/2∘𝒰_E(t) ∘𝒥_ρ^can_E^iλ/2∘ℳ_σ_0^E[ρ_E(0)]}. Since [U_E(t),ρ^can_E]=0, there is always G^E_F(i)=1, no matter whether ρ_E(0) deviates from ρ^can_E or not. This inspires us to decompose the rescaling map in <ref> as follows 𝒥_ρ_S(t)⊗ρ^can_E=𝒥_ρ^can_S⊗ρ^can_E∘𝒥^-1_ρ^can_S∘𝒥_ρ_S(t). 𝒥^-1_ρ^can_S∘𝒥_ρ_S(t) is a local superoperator of the system. Using decomposition (<ref>), we can rewrite the generating function as G_F(λ)=[𝒥_ρ_S(t)^-iλ/2∘𝒥_ρ_S^can^iλ/2∘𝒩_β(t) ∘𝒥_ρ_S^can^-iλ/2∘𝒥_ρ_S(0)^iλ/2(ρ_S(0)⊗ρ'_E(0))], where ρ'_E(0)=ℳ_σ_0^E[ρ_E(0)] is the environment density matrix resulting from the decoherence of the environment due to the measurement. If ρ_E(0) is the energy eigenstate of Ĥ_E, then follows from <ref>, we have ρ'_E(0)=ρ_E(0). The key part of the formula (<ref>) is the following map 𝒩_β(t)(·):= 𝒥^-iλ /2_ρ^can_S⊗ρ^can_E∘𝒰_SE(t)∘𝒥^iλ /2_ρ^can_S⊗ρ^can_E(·) =𝒰_SE^0(-βλ/2)∘𝒰_SE(t)∘𝒰_SE^0(βλ/2)(·) =ℳ_λ(t)(·)ℳ_λ^†(t), where ℳ_λ(t)=U^0_SE(-βλ/2)[U_SE(t)]U^0_SE(βλ/2) and U^0_SE(τ)=exp(-i(H_S+H_E)τ). When λ is a pure imaginary number, the mapping 𝒰_SE^0(-βλ/2) is just the imaginary time evolution <cit.>. Unlike the real time evolution, imaginary time evolution is CP but not TP. From the definition (<ref>), we have 𝒰^0†_SE(τ)= 𝒰^0_SE(-τ^*). Therefore, the imaginary time evolution is self-conjugate. Since U_SE(t)=e^-i(H_0+H_I)t and [U^0_SE,H_0]=0, we can rewrite <ref> as ℳ_λ(t)=e^-i[H_0+H_I(βλ/2)]t, where H_I(τ):=U^0_SE(-τ) H_I U^0_SE(τ). Notice that it is different from the Hermitian operator 𝒰^0_SE(-τ) H_I=U^0_SE(-τ) H_I U^0†_SE(-τ). H_I(βλ/2) is a pseudo-Hermitian operator <cit.>. It is easy to verify that with ρ =e^2 H_0 Im(τ), we have ρ H_I(τ) ρ^-1=H^†_I(τ). 
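The deviation described here is easy to see numerically. The sketch below (Python; one system qubit weakly coupled to a four-spin transverse-field Ising bath, all couplings being pure placeholders) evaluates G_F(i) with σ̂(t)=β(Ĥ_E-F_E)-lnρ̂_S(t) directly as Tr[e^{-σ̂_t} U e^{σ̂_0}ρ(0) U^†]; this coincides with the two-point-measurement generating function here because the chosen initial states are diagonal in the eigenbasis of σ̂_0. With a canonical bath the result is exactly 1; replacing the bath by one of its energy eigenstates gives a value close to, but not exactly, 1.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

nE, beta, g, t = 4, 0.5, 0.15, 2.0        # bath spins, inverse temperature, coupling, time
dE = 2 ** nE

# bath Hamiltonian (transverse-field Ising chain; a pure placeholder model)
HE = sum(embed(sz, i, nE) @ embed(sz, i + 1, nE) for i in range(nE - 1)) \
   + 0.9 * sum(embed(sx, i, nE) for i in range(nE))
EE, VE = np.linalg.eigh(HE)
rhoE_can = expm(-beta * HE); ZE = np.trace(rhoE_can).real; rhoE_can /= ZE
FE = -np.log(ZE) / beta

# full Hamiltonian H = H_S + H_E + H_I and the unitary U = exp(-iHt)
H = embed(sz, 0, 1 + nE) + np.kron(I2, HE) + g * embed(sx, 0, 1 + nE) @ embed(sx, 1, 1 + nE)
U = expm(-1j * H * t)

def G_F_at_i(rhoE):
    rhoS0 = np.diag([0.7, 0.3]).astype(complex)            # system state, diagonal by choice
    rho0 = np.kron(rhoS0, rhoE)
    rhoSt = np.einsum('iaja->ij', (U @ rho0 @ U.conj().T).reshape(2, dE, 2, dE))
    p, V = np.linalg.eigh(rhoSt)
    ln_rhoSt = V @ np.diag(np.log(p)) @ V.conj().T
    sigE = beta * np.kron(I2, HE - FE * np.eye(dE))        # beta (H_E - F_E), embedded
    sig0 = np.kron(-np.diag(np.log([0.7, 0.3])), np.eye(dE)) + sigE
    sigt = np.kron(-ln_rhoSt, np.eye(dE)) + sigE
    # TPM generating function at lambda = i; the dephasing acts trivially on rho0 here
    return np.trace(expm(-sigt) @ U @ expm(sig0) @ rho0 @ U.conj().T).real

print(G_F_at_i(rhoE_can))                                  # = 1 for a canonical bath
j = np.argmin(np.abs(EE - np.trace(HE @ rhoE_can).real))   # bath eigenstate nearest the thermal energy
print(G_F_at_i(np.outer(VE[:, j], VE[:, j].conj())))       # close to, but not exactly, 1
```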
According to <ref>, if there is a strict energy conservation condition [H_0,H_I]=0 <cit.>, or the temperature tends to infinity, there will be ℳ_λ(t)=U_SE(t). And then we have 𝒩_β(t)=𝒰_SE(t). Now let's return to the discussion of integral FT violations. Using <ref>, we can estimate the error of the integral FT as follows δ G_F(i)=(ρ_S(t)⊗ I_E|𝒥_ρ_S^can^-1/2∘𝒩_β(t) ∘𝒥_ρ_S^can^1/2|I_S⊗δρ'_E(0)). When the evolution time tends to zero, it is obvious that 𝒥_ρ_S^can^-1/2∘𝒩_β(t) ∘𝒥_ρ_S^can^1/2=I, and then we have δ G_F(i)=(ρ_S(t)|I_S)× (I_E|δρ'_E(0))=0. If the evolution time is short, it is foreseeable that the error can be bounded, as we will discuss in the next section. Our proof borrows from ideas in <cit.>. It is worth pointing out that if we replace the 𝒩_β(t) in <ref> with 𝒰_SE(t), then we will get the δ G_S(i) in <cit.>. Therefore, 𝒩_β(t)-𝒰_SE(t) is directly related to the interaction-induced error δ G_I in <cit.>. § EIGENSATE FLUCTUATION THEOREM §.§ Short-time regime When the evolution time is short, the two measurements about the system can only be effected by a small piece of environment due to the limitation of information flow speed. Considering the ETH, it is difficult to distinguish the overall pure state environment from the canonical bath in this small area. These make the integral FT approximately valid. Below we prove this statement in detail by calculating the bound of δ G_F(i). We first consider the properties of superoperator 𝒩^†_β(t). The evolution map can be written in the integral form 𝒰_SE(t)=𝒰_SE(t)∘𝒰^0_SE(0)=𝒰^0_SE(t)+∫_0^t dτ∂/∂_τ𝒰_SE(τ )∘𝒰^0_SE(t-τ)=𝒰^0_SE(t)+∫_0^t dτ𝒰_SE(τ )∘ℒ_I∘𝒰^0_SE(t-τ) where ℒ_I (·):=-i[H_I,·]. Using rotating wave approximation and off-diagonal ETH, <cit.> show that the condition [H_0,H_I]=o(1) holds. Therefore, consistent with them, here we only take the first-order perturbation approximation 𝒰_SE(t)≃𝒰^0_SE(t)+ ∫_0^t dτ𝒰^0_SE(τ)∘ℒ_I∘𝒰^0_SE(t-τ). Using this approximation, it is easy to show that 𝒩^†_β(t)(O_S⊗ I_E)= 𝒥^iλ /2_ρ^can_S⊗ρ^can_E∘𝒰^†_SE(t)∘𝒥^-iλ /2_ρ^can_S⊗ρ^can_E(O_S⊗ I_E) ≃𝒰^0†_SE(t)(O_S⊗ I_E)+∫_0^t dτ𝒰^0†_SE(t-τ)∘𝒥^iλ /2_ρ^can_S⊗ρ^can_E∘ℒ^†_I∘𝒥^-iλ /2_ρ^can_S⊗ρ^can_E (𝒰^0†_SE(τ)O_S⊗ I_E) =𝒰^0†_S(t)(O_S⊗ I_E)+∫_0^t dτ𝒰^0†_SE(t-τ)∘𝒰_SE^0†(-βλ/2)∘ℒ^†_I∘𝒰_SE^0†(βλ/2)∘𝒰^0†_SE(τ-t) (𝒰^0†_S(t)O_S⊗ I_E) =𝒰^0†_S(t)(O_S⊗ I_E)+∫_0^t dτℒ^†_I(t -τ-βλ/2) (𝒰^0†_S(t)O_S⊗ I_E) where ℒ^†_I(τ) O=i [ H_I(τ)O-O H^†_I(τ)]. The first term on the right side of the last equation of (<ref>) only contains the local evolution of the system, and its contribution to G_F(λ) is [𝒥_ρ_S(t)^-iλ/2∘𝒥_ρ_S^can^iλ/2∘𝒰^0_S(t)∘𝒥_ρ_S^can^-iλ/2∘𝒥_ρ_S(0)^iλ/2(ρ_S(0)⊗ρ'_E(0))]=[𝒥_ρ_S(t)^-iλ/2∘𝒰^0_S(t)∘𝒥_ρ_S(0)^iλ/2(ρ_S(0))]λ=i=1. Now we analyze the contribution of the second term. Divide the environment into two parts, B_0 is the part close to the system, and B_1 is the part far away from the system. When the temperature is high and the evolution time is short, the scale of B_0 can be larger than the propagation range of Lieb-Robinson velocity v_LR(t+βλ/2), but much smaller than the overall environment Scale L_E. The Hamiltonian of the environment can also be divided as H_E=H_B_0+H_B_1+H_B_0 B_1. We define the truncated Hamiltonian H^T_0:=H_S+H_B_0, the corresponding time evolution operator U^T_SE=e^-i(H_0,T+H_I)t and the corresponding superoperator 𝒩^T_β(t):=𝒥^-iλ /2_ρ^can_S⊗ρ^can_B_0∘𝒰^T_SE(t)∘𝒥^iλ /2_ρ^can_S⊗ρ^can_B_0. 
Following the similar procedure in <ref>, we have 𝒩^T†_β(t)(O_S⊗ I_E)≃𝒰^0†_S(t)(O_S⊗ I_E)+∫_0^t dτℒ^T†_I(t -τ-βλ/2) (𝒰^0†_S(t)O_S⊗ I_E), where ℒ^T†_I(τ )O=i [ H^T_I(τ )O-O H^T†_I(τ )]. H^T_I(τ):=e^iH^T_0τ H_I e^-iH^T_0τ is also a pseudo-Hermitian operator. With 𝒩^T_β(t), we can define a corresponding generating function G^T_F(λ). The difference between it and G_F(λ) is δ G_LR(λ):= G_F(λ)- G^T_F(λ)=∫_0^t dτ(ρ^1+iλ_S(0)⊗ρ'_E(0)|𝒥_ρ_S^can^-iλ/2 ∘(ℒ^†_I(t-τ -βλ/2)-ℒ^T†_I(t -τ-βλ/2)) ∘𝒰^0†_S(t)∘𝒥_ρ_S^can^iλ/2|ρ^-iλ_S(t)⊗ I_E)^†. Using this expression, we can bound the difference δ G_LR(i) =i∫_0^t dτ[ρ_S^can⊗ρ'_E(0)δ H_I(t-τ -iβ /2)𝒥_ρ_S^can^-1/2∘𝒰^0†_S(t)ρ_S(t)]+h.c. ≤ 2∫_0^t dτe^-i H_S(t-τ -iβ /2 )δ H_I(t-τ -iβ /2)e^i H_S(t-τ -iβ /2 )×𝒰^0†_S(τ)ρ_S(t)⊗ρ'_E(0)≤ 2∫_0^t dτδ H'_I(t-τ -iβ /2), where δ H'_I(τ)=e^iH_Eτ H_I e^-iH_Eτ-e^iH_B_0τ H_I e^-iH_B_0τ. Here, in the second line, we have used the Cauchy-Schwartz inequality. The Lieb-Robinson bound shows that δ H'_I(τ)≤ Cτ^ℓ/ℓ! <cit.>, where C depends on the supports and the norm of H_I. The ℓ is the distance between H_I and ∂ B_0. Therefore, we have δ G_LR(i) =o(1) if t+iβ/2≤ℓ/v_LR. The generating function G^T_F(λ) gives G^T_F(λ):=(ρ^-iλ_S(t)⊗ I_E|𝒥_ρ_S^can^iλ/2∘𝒩^T_β(t)∘𝒥_ρ_S^can^-iλ/2∘𝒥_ρ_S(0)^(1+iλ)/2|I_S⊗ρ'_E(0))=(O_B_0⊗ I_B_1|ρ'_E(0)), where (O_B_0|=([ρ_S(t)⊗ρ^can_B_0]^-iλ|𝒰^T_SE(t)∘𝒥^iλ /2_ρ^can_B_0∘𝒥_ρ_S(0)^(1+iλ)/2|I_S) is a B_0 local operator. For λ=i, using a similar approximation to <ref> for 𝒰^T_SE(t), we have O_B_0≃I_B_0+ ∫_0^t dτ (ρ_S(t)| 𝒰^0_S(τ) ∘𝒥^1 /2_ρ^can_B_0∘ℒ_I∘𝒥^-1 /2_ρ^can_B_0|I_S)≤I_B_0+2 ∫_0^t dτe^-β H_B_0/2H_I e^β H_B_0/2×ρ_S(t). According to lemma 3.1 in <cit.>, we can continue to bound the above formula and prove that O_B_0=Θ(1). Combining it with the canonical typicality <cit.> or weak ETH <cit.>, we have (O_B_0⊗ I_B_1|ρ'_E(0))=(O_B_0⊗ I_B_1|ρ^can_E)+O(N^-γ), where N is the size of the environment. Finally, in the large-environment limit, by combining <ref>, we obtain δ G_F(i)≤ G_F(i)-G^T_F(i)+G^T_F(i)-G^can,T_F(i) +G^can,T_F(i)-G^can_F(i) =o(1), where can refers to replacing ρ'_E(0) in the corresponding generating function with the canonical ensemble ρ^can_E. Another interesting approach is replacing ρ'_E(0) with ρ^can_B_0⊗ρ_B_1. Then the corresponding generating function gives G^SC,T_F(λ):=(O_B_0⊗ I_B_1|ρ^can_B_0⊗ρ_B_1)λ=i=1. Now the error of integral FT can be bounded as δ G_F(i)≤ G_F(i)-G^T_F(i)+G^T_F(i)-G^SC,T_F(i), where the second term is related to (O_B_0⊗ I_B_1|ρ'_E(0)-ρ^can_B_0⊗ρ_B_1). The density matrix ρ_B_1 can be chosen freely, which can be used to tighten the bound. To conclude, the first term of <ref> depends on the locality of the map 𝒩_β(t) and can be restricted by Lieb-Robinson bound. The second term of <ref> depends on the distinguishability of the initial environment states and can be restricted by ETH. §.§ Long-time regime When the evolution time is long, the system can generally obtain all the information of the environment, so that it can distinguish the pure state environment from the thermal state environment. Therefore, under the long-term evolution, the integral fluctuation theorem may have a large deviation. However, although large deviations can occur, it can only be maintained for a short period of time. This is because the total system will approach equilibrium when the evolution time is long. For most times, the state of the total system is close to some fixed steady state <cit.>. Considering the ETH, it is difficult to distinguish these steady states with local observations. 
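The locality mechanism behind the bound on δ H'_I(τ) can also be visualized directly. In the minimal sketch below (Python; an eight-spin transverse-field Ising bath used purely as a placeholder), the bath factor of H_I is Heisenberg-evolved under the full bath Hamiltonian and under its truncation to the ℓ sites nearest the interaction; the spectral norm of the difference shrinks rapidly with ℓ once ℓ exceeds roughly v_LR τ, which is what makes the truncation error negligible in the short-time regime.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising(n, sites):
    # transverse-field Ising Hamiltonian supported on the listed sites (placeholder bath)
    H = sum(embed(sz, i, n) @ embed(sz, i + 1, n) for i in sites[:-1] if i + 1 in sites)
    return H + 0.9 * sum(embed(sx, i, n) for i in sites)

nB, tau = 8, 0.5                       # bath spins; the bath factor of H_I sits on site 0
hI = embed(sx, 0, nB)                  # bath part of H_I (the system factor only rescales the norm)
HE = ising(nB, list(range(nB)))
full = expm(1j * HE * tau) @ hI @ expm(-1j * HE * tau)

for ell in [1, 2, 3, 4]:               # size of the truncated region B_0
    HB0 = ising(nB, list(range(ell)))
    trunc = expm(1j * HB0 * tau) @ hI @ expm(-1j * HB0 * tau)
    print(ell, np.linalg.norm(full - trunc, 2))   # shrinks rapidly once ell exceeds ~v_LR * tau
```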
These make the integral FT approximately valid in the long-time average. Now let's get down to the specifics. For any quantity O(t), its long-time average is defined as follows O(t):=lim_T→∞1/T∫_0^T O(t)dt. When the evolution time is short, we use perturbation theory for the time evolution 𝒰_SE(t). This is not appropriate when the evolution time is long. One way is to use the perturbation theory for the eigenstates of H as in <cit.>. Here we try another approach. We notice that when the Hamiltonian is the same, the imaginary-time evolution and the real-time evolution are commutative. So there will be 𝒩_β(t)= 𝒩_I(-βλ/2)∘𝒰_SE(t)∘𝒩^†_I(βλ/2), where 𝒩_I(-βλ/2):=𝒰_SE^0(-βλ/2)∘𝒰_SE(βλ/2). The imaginary time evolution can also be written in the integral form 𝒰_SE(it)=𝒰^0_SE(0)∘𝒰_SE(it) =𝒰^0_SE(it)+∫_0^t dτ∂/∂_τ𝒰^0_SE(it-iτ)∘𝒰_SE(iτ ) =𝒰^0_SE(it)[1+∫_0^t dτ𝒰^0_SE(-iτ)∘ℒ^A_I∘𝒰_SE(iτ )], where ℒ^A_I (·):={H_I,·}. Take the first-order perturbation approximation to the imaginary time evolution, we obtain 𝒩_I(-it)≃ 1+∫_0^t dτ𝒰^0_SE(-iτ)∘ℒ^A_I∘𝒰^0_SE(iτ ). Similar to <ref>, we can define the truncated superoperator 𝒰_SE^0,T and 𝒩^T_I(-βλ/2). Similar to the approximation (<ref>), we have 𝒩^T_I(-it) :=𝒰_SE^0,T(-it)∘𝒰^T_SE(it) ≃ 1+∫_0^t dτ𝒰^0_SE(-iτ)∘ℒ^A_I∘𝒰^0_SE(iτ ). Again we define the truncated map 𝒩^T'_β(t):= 𝒩^T_I(-βλ/2)∘𝒰_SE(t)∘𝒩^T†_I(βλ/2). Lieb-Robinson bound can still be used to estimate the error of truncation G_F(i)- G^T'_F(i). The long-time average can be evaluated as G_F(i)- G^T'_F(i)≤G_F(i)- G^T'_F(i)=δ G'_LR(i). The detailed calculation of the error of truncation is omitted here since it is very similar to that considered in <ref>. Here, the main difference is that the propagation range of Lieb-Robinson velocity becomes v_LRβ/2. Under the long-time average, the unitary evolution 𝒰^†_SE(t) gives the dephasing map <cit.>. However, the measurement of the final state here is evolution dependent. So the long-time average of the truncated generating function needs to be divided into two parts G^T'_F(i)=G^T'_F_1(i)+G^T'_F_2(i), where G^T'_F_1(i)=∑_a(ρ_S^can⊗ρ_E(0)| 𝒩^T_I(iβ/2)|Π_a)^† ×(Π_a|𝒩^T†_I(-iβ /2)∘𝒥_ρ_S^can^-1/2|ρ_S(t)⊗ I_E)^† and G^T'_F_2(i)=∑_a,b a≠ b(ρ_S^can⊗ρ_E(0)|𝒩^T_I(iβ/2)|Π_ab)^†× (Π_ab| 𝒩^T†_I(-iβ /2)∘𝒥_ρ_S^can^-1/2|_E Π_ab⊗ I_E)^† × (Π_ab|ρ_S(0)⊗ρ_E(0))^†. The energy width of ρ_S^can⊗ρ_E(0) is narrower than Θ(N_S^a), where N is the number of sites and 1/2≤ a≤ 1. According to the theorem 2.1 in <cit.>, the superoperator 𝒩^T_I(-β/2) at most increase the energy width by Θ(N_B_0). According to the theorems 2.2 and 2.3 in <cit.>, the difference between the eigenstates of H_0 and H causes the energy width to increase by Θ(H_I) at most. For the above reasons, the a,b in <ref> can be limited to the energy shell [E-Δ E/2,E+Δ E/2], where Δ E=Θ(N_B_0)+Θ(N_S^a)+Θ(H_I) and E=(ρ_S^can⊗ρ_E(0)|H_0). The weights of eigenstates with energies above this range are suppressed exponentially. Moreover, since 𝒩^T†_I(-iβ /2)∘𝒥_ρ_S^can^-1/2|ρ_S(t)⊗ I_E)=|O_SB_0⊗ I_B_1) gives an SB_0 local operator. ETH implies that it is difficult to distinguish states in the energy shell with this local operator. Therefore, we can replace (Π_a| with (ρ^can_SB_0⊗ρ_B_1| and introduce only a small error. Now ∑_a |Π_a)(Π_a| can be approximately replaced by |I_SE)(ρ^can_SB_0⊗ρ_B_1|, and then we need to consider (ρ_S^can⊗ρ_E(0)|𝒩^T_I(iβ/2)|I_SE). The operator state 𝒩^T_I(iβ/2)|I_SE) also gives a SB_0 local operator. So we can replace (ρ_S^can⊗ρ_E(0)| with (ρ_S^can⊗ρ^can_B_0⊗ρ'_B_1| and introduce only a small error. 
Finally, we have G^T'_F_1(i)≈ (ρ_S^can⊗ρ^can_B_0⊗ρ'_B_1| 𝒩^T_I(iβ/2)|I)^† ×(ρ^can_SB_0⊗ρ_B_1|𝒩^T†_I(-iβ /2)∘𝒥_ρ_S^can^-1/2|ρ_S(t)⊗ I_E)^† =(ρ^can_SB_0⊗ρ'_B_1| I)^†×(I_S⊗ρ^can_B_0⊗ρ_B_1|ρ_S(t)⊗ I_E)^†=1. Now consider G_F2(i). The energy width of ρ_S(0)⊗ρ_E(0) depends on the initial state, but it can be safely assumed to be narrower than Θ(N_S). After considering the difference between the eigenstates of H_0 and H, the a,b in <ref> should also be in the energy shell [E'-Δ E'/2,E'+Δ E'/2], where Δ E'=Θ(N_B_0)+Θ(H_I) and E'=(H_0|ρ_S(0)⊗ρ_E(0)). The a,b should be in the intersection of the two energy shells. The weights of eigenstates with energies above this range are suppressed exponentially. Since 𝒩^T†_I(-iβ /2)∘𝒥_ρ_S^can^-1/2|_E Π_ab⊗ I_E) also gives an SB_0 local operator. The off-diagonal ETH tells (Π_ab| O_SB_0⊗ I_E)=O(D'^-1/2). The remaining part of <ref> can be evaluated as follows ∑_a,b a≠ b| (ρ_S^can⊗ρ_E(0)|𝒩^T_I(iβ/2)|Π_ab) (Π_ab|ρ_S(0)⊗ρ_E(0)) | ≤Re [(ρ_S^can⊗ρ_E(0)|𝒩^T_I(iβ/2)|ρ_S(0)⊗ρ_E(0))] +Im [(ρ_S^can⊗ρ_E(0)|𝒩^T_I(iβ/2)|ρ_S(0)⊗ρ_E(0))] ≤ 2ρ_S(0)⊗ρ_E(0)×𝒩^T_I(iβ/2) (ρ_S^can⊗ρ_E(0)) ≤ 2 (1+2∫_0^β/2 dτH_I(iτ))=Θ(1). In the second line, we have inserted the nonnegative part of a=b and used the triangle inequality. In the fourth line have used Cauchy-Schwartz inequality. And we have used the approximation (<ref>) and the lemma 3.1 of <cit.> in the fifth line. Combining it with <ref>, we obtain G^T'_F'_2(i)≤Θ(1) ×D'^-1/2 Finally, in the large-environment limit, by combining <ref>, we have δ G_F(i)≤G_F(i)- G^T'_F(i) +G^T'_F_1(i)-1+G^T'_F_2(i) =o(1). To conclude, the first term of <ref> depends on the locality of the imaginary time evolution part of the map 𝒩_β(t) and can be restricted by Lieb-Robinson bound. The second and third terms of <ref> depend on the distinguishability of the states in a typicality energy shell and can be restricted by ETH. § FLUCTUATION THEOREM FOR DIRECT HEAT EXCHANGE BETWEEN TWO SYSTEMS We consider the interaction of two systems A and B. Suppose that the initial state of two systems is the canonical ensemble with different temperatures ρ̂^can_X=e^-β_X Ĥ_X/Z_X and X=A,B, then we should choose σ̂(t)=β_A (Ĥ_A- F_A)+β_B (Ĥ_B- F_B). If we do not make any assumptions about the initial environment state, but still set σ̂(t) as <ref>, then we have G_F(λ)={𝒥_ρ^can_A⊗ρ^can_B^-iλ/2∘𝒰_AB(t) ∘𝒥_ρ^can_A⊗ρ^can_B^iλ/2∘ℳ_i[ρ_A(0)⊗ρ_B(0)]}. Since the two systems can deviate from the canonical ensemble ρ^can, the integral FT will have deviations. The key part of the formula (<ref>) is the following map 𝒩_β_A,β_B(t)(·):= 𝒥^-iλ /2_ρ^can_A⊗ρ^can_B∘𝒰_AB(t)∘𝒥^iλ /2_ρ^can_A⊗ρ^can_B(·) =𝒰_AB^0(β_A,β_B,-λ/2)∘𝒰_AB(t)∘𝒰_AB^0(β_A,β_B,λ/2) where U^0_AB(β_A,β_B,τ):=exp(-i(H_A β_Aτ+H_Eβ_Bτ)). Since the temperature of the two systems is different, the duration of imaginary time evolution time is also different. We define H_I(τ_A,τ_B):=U^0_A(-τ_A) U^0_B(-τ_B)H_I U^0_A(τ_A) U^0_B(τ_B). Similar to <ref>, it is easy to prove 𝒩^†_β_A,β_B(t)I_AB≃ I_AB +∫_0^t dτℒ^†_I(t -τ-β_A λ/2,t -τ-β_B λ/2) I_AB, where ℒ^†_I(τ_A,τ_B) O=i [ H_I(τ_A,τ_B)O-O H^†_I(τ_A,τ_B)]. Divide the X into two parts, X_0 is the part close to the sites of H_I, and X_1 is the part far away from H_I. When the temperature is high and the evolution time is short, the scale of X_0 can be larger than the propagation range of Lieb-Robinson velocity v_LR(t+β_X λ/2), but much smaller than the overall Scale L_X. The Hamiltonian of the environment can also be divided as H_X=H_X_0+H_X_1+H_X_0 X_1. 
We define the truncated Hamiltonian H^T_0:=H_A_0+H_B_0, the corresponding time evolution operator U^T_AB=e^-i(H_0,T+H_I)t and the corresponding superoperator 𝒩^T_β(t):=𝒥^-iλ /2_ρ^can_A_0⊗ρ^can_B_0∘𝒰^T_AB(t)∘𝒥^iλ /2_ρ^can_A_0⊗ρ^can_B_0. Similar to <ref>, here we have 𝒩^T†_β_A,β_B(t)I_AB≃ I_AB +∫_0^t dτℒ^T†_I(t -τ-β_A λ/2,t -τ-β_B λ/2) I_AB. The generating function G^T_F(λ) gives G^T_F(λ):=( I_AB|𝒩^T_AB(t)|ρ_A(0)⊗ρ_B(0)) =(O_A_0B_0⊗ I_A_1B_1|ρ_A(0)⊗ρ_B(0)), where (O_A_0B_0|=( I_A_0B_0|𝒩^T_AB(t). Combining it with the canonical typicality <cit.> or weak ETH <cit.>, we have (O_A_0B_0⊗ I_A_1B_1|ρ_A(0)⊗ρ_B(0)) =O(N_A^-γ)+O(N_B^-γ)+G^SC,T_F(λ), where the generating function G^SC,T_F(λ):=(O_A_0B_0⊗ I_A_1B_1|ρ^can_A_0⊗ρ^can_B_0⊗ρ_A_1⊗ρ_B_1). It is easy to verify G^SC,T_F(i)=1. Now the error of integral FT can be bounded as δ G_F(i)≤ G_F(i)-G^T_F(i)+G^T_F(i)-G^SC,T_F(i). Therefore, the error should be vanished in the thermodynamic limit. Since the temperature of the two systems is different, the corresponding term ρ^can_A⊗ρ^can_B is quite different from the imaginary time evolution U_SE(it), so the perturbation expansion in <ref> does not apply here. On the other hand, from the perspective of evolution, since A and B are both large systems, their equilibrium time may be very long, and they cannot reach a steady state quickly. Therefore, the conclusion that integral FT holds for long time average is unlikely here. § THE INTEGRAL FLUCTUATION THEOREM BEYOND TWO-POINT MEASUREMENTS §.§ Integral fluctuation theorem from quasi-measurements From <ref> we can see that, generally speaking, the entropy production in the fluctuation theorems of open systems are also closely related to the state of the environment. However, when the evolution map has a global fixed point, the entropy flux can be written solely in terms of system-related quantities <cit.>. Now let us briefly consider these cases within the framework here. For systems with global fixed point 𝒰_SE(ρ_S^*⊗ρ_E)=ρ^*_S⊗ρ_E, we have 𝒥^1 /2_ρ^*_S⊗ρ_E∘𝒰_SE(t)∘𝒥^-1 /2_ρ^*_S⊗ρ_E =𝒰_SE(t). Supposing there is no correlation between the initial system and environment, if the environment is not measured, substituting <ref> into <ref>, we can get G_F(λ)=(I_S⊗ρ_E|𝒥_e^- σ̂^S_t^-iλ/2∘𝒥_ρ^*_S^1/2∘𝒰(t) ∘𝒥_ρ^*_S^-1/2∘𝒥_e^- σ̂^S_0^iλ/2∘ℳ^S_i|ρ_S(0)⊗ I_E). However, it is not possible to choose a suitable σ̂_S(t) such that G_F(i)=1. The two-point measurement of the local system cannot give the integral fluctuation theorem. The previous work <cit.> proved a general Crooks FT for open quantum process, where only local measurements of the system are required, but this measurement is no longer the usual two-point measurement. In order to obtain the so-called quasiprobability, quasi-measurements need to be introduced <cit.>. It is a measure of the density matrix and can be reconstructed from positive-operator valued measurements. After adding quasi-measurements, the generating function can be written as G_F(λ):=∑_σ_0 ,σ_t, ij, k'l' (e^iλ(σ̂_t+τ̂'-σ̂_0-τ̂)| Π_σ_t⊗Π_σ_0⊗Π_ij⊗Π_k'l')(O^(σ_t,σ_0,ij,k'l')|𝒮), where {|i⟩} is the diagonal basis of τ̂ and {|k'⟩} is the diagonal basis of τ̂'. Unlike <ref>, according to the completeness relation, we have ∑_ij (e^-iλτ̂| Π_ij)| Π_ij)( Π_ij| =𝒥_e^- τ̂^iλ/2. Similar to the procedure used in <ref>, we can rewrite the generating function as G_F(λ)=(I_S|𝒥_e^- σ̂^S_t^-iλ/2∘𝒥_e^-τ'^-iλ/2∘𝒩(t) ∘𝒥_e^-τ^iλ/2∘𝒥_e^- σ̂^S_0^iλ/2∘ℳ^S_i|ρ_S(0)), where 𝒩(t)=(I^E|𝒰(t)|ρ_E). 
We can set σ̂(t)=-lnρ̂_S(t) and τ=τ'=lnρ^*_S, then the generating function gives G_F(i-λ)=(ρ_S(t)|𝒥_e^- σ̂^S_t^iλ/2∘𝒥_e^-τ'^iλ/2∘𝒥_ρ^*_S^-1/2∘𝒩(t) ∘𝒥_ρ^*_S^1/2∘𝒥_e^-τ^-iλ/2∘𝒥_e^- σ̂^S_0^-iλ/2 |I_S). From this formula, we can naturally obtain a backward process map ℛ(t)=𝒥_ρ^*_S^1/2∘𝒩^†(t) ∘𝒥_𝒩(ρ^*_S)^-1/2, which is just the Petz recovery map used in <cit.>. From <ref>, we know that the integral FT is guaranteed by the TP property of the Petz recovery map G_F(i)=(I|ℛ(t)|ρ_S(t))^†=1. Different from <ref>, the entropy production here depends only on the local state of the system and does not need to know the state of the environment. §.§ Generating function for multitime quantum process Let's briefly discuss the generating function for multitime quantum process. Since there can be multiple entropy production for multipoint measurements, the corresponding generating function will also have multiple parameters. Referring to <ref> and <cit.>, we can define the generating function for the multitime quantum process as follows G_F(λ_n-1:1):=∑_σ_n:1 (e^i∑_kλ_k(σ̂_k+1-σ̂_k)|⊗_m=1^nΠ_σ_m)(O^(σ_n:1)|𝒮). Applying a similar procedure as <ref>, the generating function can be rewritten as G_F(λ_n-1:1)=(I|𝒥_e^- σ̂_n^-iλ_n-1/2∘ℳ_σ_n∘𝒰_n-1∘… ∘𝒥_e^- σ̂_2^i(λ_2-λ_1)/2∘ℳ_σ_2∘𝒰_1∘𝒥_e^- σ̂_1^iλ_1/2∘ℳ_σ_1|ρ(0)). As before, ℳ_σ_n can be omitted. If {Π_σ_0} is the diagonal basis of ρ(0), then ℳ_σ_0 can be omitted too. As for intermediate measurements, whether they can be omitted depends on whether the measurement is invasive or not. For detailed discussion, see <cit.>. The existence of multiple entropy production does not imply that they all have fluctuation relations <cit.>. But, as long as there is a fluctuation theorem, there will be a corresponding generating function symmetry. For example, the forward probability of entropy production Δσ=σ_n-σ_1 is P_F(Δσ)=∑_σ_n:1δ(Δσ -(σ_n-σ_1))(O^(σ_n:1)|𝒮). If it satisfies a Crooks FT, then the generating function has the following symmetry G_F(λ_n-1:1=λ)= ∫_-∞^∞d Δσ e^iλΔσP_F(Δσ) =∫_-∞^∞d Δσ e^iλΔσ+ΔσP_B(-Δσ) = G_B(λ_n-1:1=i-λ). To give another example, the forward probability of entropy production Δσ'=σ_n-σ_n-1 is P'_F(Δσ')=∑_σ_n:1δ(Δσ' -(σ_n-σ_n-1))(O^(σ_n:1)|𝒮). If it satisfies a Crooks FT, then the generating function has G_F(λ_n-2:1=0,λ_n-1=λ)= ∫_-∞^∞d Δ'σ e^iλΔσ'P'_F(Δσ') = G_B(λ_n-2:1=0,λ_n-1=i-λ). § CONCLUSION AND DISCUSSION In this paper, we have reconstructed the generating function with complete positive maps. In this framework, the integral FT is determined by the trace preservation property of the constructed map. We have discussed the eigenstate fluctuation theorem through this framework. The error of the fluctuation theorem can be decomposed into two parts. One part comes from the non-locality of the map 𝒩_β(t) and can be bounded by the Lieb-Robinson bound. Another part comes from distinguishability of the states and can be restricted by ETH. Both errors vanish in the thermodynamic limit, so that the fluctuation theorem holds even when the initial state of the bath is a single energy eigenstate of a many-body system. We also discussed the heat exchange of two systems, and found that the integral FT is still approximately valid in the short-time regime. However, in the long-time regime, the long time average of the integral FT may not hold. Referring to the fluctuation theorems for the quantum channel, we have discussed the generating function of the quasi-probability. The Petz recovery map can be obtained naturally from the generating function. 
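The trace-preservation of the Petz recovery map invoked above is easy to verify for a generic channel. The sketch below (Python; the dimension, the Kraus rank, and the reference state ρ_* are arbitrary placeholders) builds a random CPTP map 𝒩, forms ℛ(X)=ρ_*^{1/2}𝒩^†(𝒩(ρ_*)^{-1/2}Xρ... )ρ_*^{1/2} exactly as 𝒥_{ρ_*}^{1/2}∘𝒩^†∘𝒥_{𝒩(ρ_*)}^{-1/2}, and checks Tr ℛ(X)=Tr X on random inputs, which is the property that enforces G_F(i)=1.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 3, 4                                    # system dimension and number of Kraus operators

# a random CPTP channel N: Kraus operators cut from a random isometry (V^dagger V = I_d)
A = rng.normal(size=(d * r, d)) + 1j * rng.normal(size=(d * r, d))
V, _ = np.linalg.qr(A)
Ks = [V[i * d:(i + 1) * d, :] for i in range(r)]
N  = lambda X: sum(k @ X @ k.conj().T for k in Ks)       # the channel
Nd = lambda Y: sum(k.conj().T @ Y @ k for k in Ks)       # its adjoint

def mpow(X, p):                                # Hermitian matrix power via eigendecomposition
    w, Q = np.linalg.eigh(X)
    return Q @ np.diag(w ** p) @ Q.conj().T

# full-rank reference state rho_* and the Petz recovery map R with respect to (N, rho_*)
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho_star = B @ B.conj().T + 0.1 * np.eye(d)
rho_star /= np.trace(rho_star).real
S, T = mpow(rho_star, 0.5), mpow(N(rho_star), -0.5)
R = lambda X: S @ Nd(T @ X @ T) @ S            # R = J_{rho*}^{1/2} o N^dagger o J_{N(rho*)}^{-1/2}

# trace preservation of R is what guarantees the integral FT G_F(i) = (I|R|rho_S(t))^dagger = 1
for _ in range(5):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    assert np.isclose(np.trace(R(X)), np.trace(X))
```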
The integral FT is guaranteed by the TP property of the Petz recovery map. We briefly discussed the generating function for the multitime process, which is a multiparameter function. Its symmetry does not necessarily increase as the the parameter increases, but depends on whether the corresponding entropy production satisfies the fluctuation theorem. Evolution and measurement are the two cores of the fluctuation theorem. In this paper they are unified into a complete positive map. This pure form helps to simplify the calculation of generating functions. In particular, for the measurement of thermodynamic quantities, the corresponding mapping is the imaginary time evolution. The commutativity between the imaginary time evolution and the real time evolution can effectively simplify the calculations. In addition, the generation function under multitime processes may also be helpful to study the generalization of the fluctuation-dissipation theorem. Another advantage of the multitime process fluctuation theorem is that it can be used to study the entropy production rate. The second law of thermodynamics implies complete irreversibility, and the Poincaré recurrence theorem corresponds to complete reversibility. The fluctuation theorem can help us to obtain irreversibility from reversible evolution, while the memory effect can weaken irreversibility and give negative entropy production rates. Studying the influence of ETH on the fluctuation theorem under multitime evolution can allow us to further understand the influence and limits of the memory effect, so as to understand the role of the thermodynamic limit in the second law of thermodynamics and the Poincaré recurrence theorem. Z.H. is supported by the National Natural Science Foundation of China under Grants No. 12047556. 99 EHM09 M. Esposito, U. Harbola, S. Mukamel, Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems, Reviews of Modern Physics, 81 (2009) 1665-1702. PT F. A. Pollock, C. Rodríguez-Rosario, T. Frauenheim, M. Paternostro, and K. Modi, Non-Markovian quantum processes: Complete framework and efficient characterization, Phys. Rev. A 97, 012127 (2018). PTR2 S. Milz and K. Modi, Quantum stochastic processes and quantum non-Markovian phenomena, PRX Quantum 2, 030201 (2021). H23 Z. Huang, Multiple entropy production for multitime quantum processes, arXiv preprint arXiv:2305.03965 (2023). IKS17 E. Iyoda, K. Kaneko, T. Sagawa, Fluctuation Theorem for Many-Body Pure Quantum States, Physical Review Letters, 119 (2017) 100601. IKS22 E. Iyoda, K. Kaneko, T. Sagawa, Eigenstate fluctuation theorem in the short- and long-time regimes, Physical Review E, 105 (2022) 044106. HHKL21 J. Haah, M.B. Hastings, R. Kothari, G.H. Low, Quantum algorithm for simulating real time evolution of lattice Hamiltonians, SIAM Journal on Computing, DOI (2021) FOCS18-250-FOCS218-284. PCASA19 D. E. Parker, X.-y. Cao, A. Avdoshkin, T. Scaffidi, and E. Altman, A universal operator growth hypothesis, Phys. Rev. X 9, 041017 (2019). JW04 C. Jarzynski, D.K. Wójcik, Classical and Quantum Fluctuation Theorems for Heat Exchange, Physical Review Letters, 92 (2004) 230602. KK19 H. Kwon, M.S. Kim, Fluctuation Theorems for a Quantum Channel, Physical Review X, 9 (2019) 031029. LP21 G.T. Landi, M. Paternostro, Irreversible entropy production: From classical to quantum, Reviews of Modern Physics, 93 (2021) 035008. MLL20 K. Micadei, G.T. Landi, E. 
Lutz, Quantum Fluctuation Theorems beyond Two-Point Measurements, Physical Review Letters, 124 (2020) 090602. HV10 J.M. Horowitz, S. Vaikuntanathan, Nonequilibrium detailed fluctuation theorem for repeated discrete feedback, Physical Review E, 82 (2010) 061120. LRJ12 S. Lahiri, S. Rana, A.M. Jayannavar, Fluctuation theorems in the presence of information gain and feedback, Journal of Physics A: Mathematical and Theoretical, 45 (2012) 065002. CS18 P.A. Camati, R.M. Serra, Verifying detailed fluctuation relations for discrete feedback-controlled quantum dynamics, Physical Review A, 97 (2018) 042127. JPW14 V. Jakšić, C.A. Pillet, M. Westrich, Entropic Fluctuations of Quantum Dynamical Semigroups, Journal of Statistical Physics, 154 (2014) 153-187. BB20 J.-F. Bougron, L. Bruneau, Linear Response Theory and Entropic Fluctuations in Repeated Interaction Quantum Systems, Journal of Statistical Physics, 181 (2020) 1636-1677. WH02 E. Wang, U. Heinz, Generalized fluctuation-dissipation theorem for nonlinear response functions, Physical Review D, 66 (2002) 025008. HMSSS22 Z. Holmes, G. Muraleedharan, R.D. Somma, Y. Subasi, B. Şahinoǧlu, Quantum algorithms from fluctuation theorems: Thermal-state preparation, Quantum, 6 (2022) 825. VGC04 F. Verstraete, J.J. García-Ripoll, J.I. Cirac, Matrix Product Density Operators: Simulation of Finite-Temperature and Dissipative Systems, Physical Review Letters, 93 (2004) 207204. BF09 B. Bagchi, A. Fring, Minimal length in quantum mechanics and non-Hermitian Hamiltonian systems, Physics Letters A, 373 (2009) 4307-4310. DMK22 R. Dann, N. Megier, R. Kosloff, Non-Markovian dynamics under time-translation symmetry, Physical Review Research, 4 (2022) 043075. AKL16 I. Arad, T. Kuwahara, Z. Landau, Connecting global and local energy distributions in quantum spin models on a lattice, Journal of Statistical Mechanics: Theory and Experiment, 2016 (2016) 033301. DLL18 A. Dymarsky, N. Lashkari, H. Liu, Subsystem eigenstate thermalization hypothesis, Physical Review E, 97 (2018) 012140. GLMSW17 L.P. García-Pintos, N. Linden, A.S.L. Malabarba, A.J. Short, A. Winter, Equilibration Time Scales of Physically Relevant Observables, Physical Review X, 7 (2017) 031027. FMP20 P. Figueroa-Romero, K. Modi, F.A. Pollock, Equilibration on average in quantum processes with finite temporal resolution, Physical Review E, 102 (2020) 032144. METTPSH20 S. Milz, D. Egloff, P. Taranto, T. Theurer, M.B. Plenio, A. Smirne, S.F. Huelga, When Is a Non-Markovian Quantum Process Classical?, Physical Review X, 10 (2020) 041049. H22b Z. Huang, X.-K. Guo, Leggett-Garg inequalities for multitime processe, arXiv preprint arXiv:2211.13396 (2022).
http://arxiv.org/abs/2307.02242v1
20230705123514
Multi-IRS-Enabled Integrated Sensing and Communications
[ "Yuan Fang", "Siyao Zhang", "Xinmin Li", "Jie Xu", "Shuguang Cui" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Multi-IRS-Enabled Integrated Sensing and Communications Yuan Fang, Siyao Zhang, Xinmin Li, Jie Xu, and Shuguang Cui Y. Fang is with the Future Network of Intelligence Institute (FNii) and the School of Science and Engineering (SSE), The Chinese University of Hong Kong (Shenzhen), Shenzhen, 518172, China (e-mail: [email protected]). S. Zhang is with the School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang 621000, China, and the FNii, The Chinese University of Hong Kong (Shenzhen), Shenzhen, 518172, China (e-mail: [email protected]). X. Li is with the School of Information Engineering, Southwest University of Science and Technology, Mianyang 621000, China, and the FNii, The Chinese University of Hong Kong (Shenzhen), Shenzhen, 518172, China (e-mail: [email protected]). J. Xu and S. Cui are with SSE and FNii, The Chinese University of Hong Kong (Shenzhen), Shenzhen, 518172, China (e-mail: [email protected], [email protected]). J. Xu is the corresponding author. August 1, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This paper studies a multi-intelligent-reflecting-surface-(IRS)-enabled integrated sensing and communications (ISAC) system, in which multiple IRSs are installed to help the base station (BS) provide ISAC services at separate line-of-sight (LoS) blocked areas. We focus on the scenario with semi-passive uniform linear array (ULA) IRSsfor sensing, in which each IRS is integrated with dedicated sensors for processing echo signals, and each IRS simultaneously serves one sensing target and one communication user (CU) in its coverage area. In particular, we suppose that the BS sends combined information and dedicated sensing signals for ISAC, and we consider two cases with point and extended targets, in which each IRS aims to estimate the direction-of-arrival (DoA) of the corresponding target and the complete target response matrix, respectively. Under this setup, we first derive the closed-form Cramér-Rao bounds (CRBs) for parameters estimation under the two target models. For the point target case, the CRB for AoA estimation is shown to be inversely proportional to the cubic of the number of sensors at each IRS, while for the extended target case, the CRB for target response matrix estimation is proportional to the number of IRS sensors. Next, we consider two different types of CU receivers that can and cannot cancel the interference from dedicated sensing signals prior to information decoding. 
To achieve fair and optimized sensing performance, we minimize the maximum CRB at all IRSs for the two target cases, via jointly optimizing the transmit beamformers at the BS and the reflective beamformers at the multiple IRSs, subject to the minimum signal-to-interference-plus-noise ratio (SINR) constraints at individual CUs, the maximum transmit power constraint at the BS, and the unit-modulus constraints at the multiple IRSs. To tackle the highly non-convex SINR-constrained max-CRB minimization problems, we propose efficient algorithms based on alternating optimization and semi-definite relaxation, to obtain converged solutions. Finally, numerical results are provided to verify the benefits of our proposed designs over various benchmark schemes based on separate or heuristic beamforming designs. Integrated sensing and communications (ISAC), semi-passive intelligent reflecting surfaces (IRS), Cramér-Rao bound (CRB), joint transmit and reflective beamforming. § INTRODUCTION Integrated sensing and communications (ISAC) has been recognized as one of the usage scenarios for future sixth-generation (6G) wireless networks <cit.>, in which the spectrum resources, wireless infrastructures, and radio signals are reused for the dual role of radar sensing and wireless communications. On the one hand, ISAC allows the cellular base stations (BSs) and mobile devices to jointly design the transmit signals and waveforms to perform both sensing and communications, thus properly managing the co-channel interference between them and accordingly enhancing the resource utilization efficiency. On the other hand, ISAC can also enable the coordination between sensing and communications, such that the sensing information can be exploited to facilitate communications by, e.g., reducing the conventional training overhead, and the communications among different nodes can be utilized to support networked sensing from different views in a large scale <cit.>. As such, ISAC is expected to significantly improve the sensing and communication performances for supporting various new applications such as auto-driving, extended reality, and unmanned aerial vehicles (UAVs) <cit.>. To make ISAC a reality, extensive research efforts have been devoted to studying innovative ISAC system designs such as the waveform and beamforming optimization (see <cit.> and the references therein). However, due to the complicated radio propagation environments, the practical ISAC performance may degrade seriously when the transmission links are blocked by obstacles such as trees and buildings. Intelligent reflecting surface (IRS) provides a viable solution to enhance both sensing and communication performances by reconfiguring the radio environments via reflecting the incident signals with properly controlled phases and/or amplitudes <cit.>. In particular, IRSs can be used in wireless communications networks to help enhance the signal coverage, increase the desired signal strength, mitigate the undesired interference, reshape the communication channel rank, and accordingly boost the communication performance <cit.>. Furthermore, IRSs can also be employed in wireless sensing systems to create virtual line-of-sight (LoS) links to see targets in the non-line-of-sight (NLoS) regions, provide multi-view sensing for facilitating the target detection and estimation, and enable networked target localization <cit.>. 
While there have been extensive works investigating IRS-enabled wireless communications <cit.>, there have been only a handful of prior works studying IRS-enabled sensing <cit.> and IRS-enabled ISAC <cit.>. First, the IRS-enabled sensing can be generally implemented in two different ways, namely the passive IRS sensing <cit.> and semi-passive IRS sensing <cit.>, in which the IRS is not and is installed with the dedicated sensors for signal reception, such that the sensing signal processing is implemented at the BS remotely and at the IRS directly, respectively. In particular, the authors in <cit.> studied the passive IRS sensing for estimating the target angle with respect to the IRS based on the echo signals over the BS-IRS-target-IRS-BS link. In <cit.>, the authors first analyzed the Cramér-Rao bounds (CRBs) for the BS to estimate the angle of point target or the target response matrix of extended target, and then proposed to jointly optimize the active transmit beamforming and the passive reflective beamforming to minimize the CRBs. By contrast, the authors in <cit.> studied the semi-passive IRS sensing for localizing the targets close to the IRS by using the echo signals over the BS-IRS-target-IRS link, in which the IRS passive reflection matrix was optimized to maximize the received echo signal power at the IRS. Next, the works <cit.> considered the IRS-enabled ISAC by considering passive IRS sensing, and <cit.> studied the ISAC scenario with an IRS assisting the communications only. In particular, by considering the passive IRS sensing, the work <cit.> maximized the minimum reflected beampattern gains towards multiple sensing angles with respect to the IRS by jointly optimizing the transmit and reflective beamforming. In <cit.>, the authors studied a scenario where the target sensing is corrupted by multiple clutters with signal-dependent interference, in which the radar signal-to-interference-plus-noise ratio (SINR) was maximized. The work <cit.> proposed to jointly design the communication and sensing waveform to minimize the total transmit power at BS while ensuring the constraints on both communication and radar SINR as well as the cross-correlation pattern design requirements. However, the above prior works on IRS-enabled sensing and IRS-enabled ISAC mainly focused on the case with one single IRS, which has a limited coverage area due to the round-trip pathloss. In practice, each BS may have multiple sensing coverage holes due to the distributed obstructions. Therefore, it is important to exploit multiple IRSs to cooperate in helping BS cover different LoS blocked areas. In the literature, the multi-IRS-enabled wireless communications have been extensively studied, in which multiple IRSs are deployed for enhancing the coverage areas via either independent single-hop reflections <cit.> or cooperative multi-hop reflections <cit.>. For instance, the authors in <cit.> studied a downlink multi-IRS multi-user communication system with each IRS serving one user, in which the joint beamforming was optimized to maximize the minimum SINR among all users. The work <cit.> studied a downlink wireless communication system with multiple distributed IRSs, in which the on-off control at the IRSs were optimized jointly with the resource allocation to maximize the system energy efficiency while ensuring the minimum achievable rate requirements at the users. 
The authors in <cit.> further investigated the impact of secondary reflections in an uplink multi-IRS-aided multi-user communication system, in which the passive beamforming at these IRSs were optimized to maximize the weighted-sum rate or the minimum rate at multiple users. Furthermore, the authors in <cit.> studied the IRS-enabled multi-refection between a BS and multiple users, in which the active beamforming at the BS and the passive beamforming at multiple IRSs were jointly designed based on deep reinforcement learning. In addition, <cit.> investigated a multi-route multi-hop cascaded IRS communication system, with the objective of maximizing achievable sum rate at the multiple users via joint design of active and cascaded passive beamforming. In <cit.> and <cit.>, the authors studied the optimal multi-IRS-reflection path selection for multi-IRS beam routing problem. On the other hand, multiple IRSs can be employed to enhance the signal coverage of BS via single reflection of each IRS. Despite the research progress, the above prior works only studied the multi-IRS-enabled wireless communications. To our best knowledge, how to efficiently implement multi-IRS-enabled sensing and multi-IRS-enabled ISAC has not been well investigated in the literature yet. This thus motivates our investigation in this work. This paper studies the multi-IRS-enabled ISAC system, in which multiple IRSs are deployed at different locations to assist the BS to provide seamless ISAC services via their single reflections.[How to use multi-reflection or multi-hop links of multiple IRSs for sensing and ISAC is also an interesting direction, which, however, is beyond the scope of this paper and is left for future work.] The multi-IRS-enabled ISAC is expected to enhance the coverage areas of both sensing and communications with reduced hardware and operational costs <cit.>. However, as compared to the single-IRS-enabled counterpart, the multi-IRS-enabled ISAC introduces new technical challenges due to the involvement of multiple IRSs. In particular, as the same transmit signals from the BS need to be reflected by different IRSs to serve their respective areas, it is difficult to quantify the fundamental target sensing performances (e.g., for target estimation or target detection) at different IRSs. Furthermore, as there are multiple IRSs associated with one BS, how to coordinate the reflective beamforming at all the IRSs together with the transmit optimization at the BS is also an important but difficult task. More specifically, this paper investigates the multi-IRS-enabled ISAC system consisting of one BS and multiple semi-passive IRSs each equipped with a uniform linear array (ULA), in which each IRS employs reflective beamforming to serve one communication user (CU) and sense one target at the same time. To facilitate ISAC, the BS sends dedicated sensing signals combined with information signals. As the dedicated sensing signals can be known a priori, we consider two different types of CU receivers that are and are not able to cancel the interference caused by the sensing signals, respectively. Besides, we also consider two cases with point and extended targets, for which each IRS aims to estimate the direction-of-arrival (DoA) of its corresponding target, and the complete response matrix between each IRS and its corresponding target, respectively. The main results of this paper are summarized as follows. 
* First, different from most prior works on IRS-enabled ISAC that used the transmit beampattern and radar SINR as the sensing performance measures, we consider the CRB for target estimation as the metric to characterize the fundamental sensing performance, which serves as the variance lower bound of any practical biased estimators <cit.>. In particular, for the point and extended targets, we derive the closed-form CRB for estimating the target DoA and the complete target response matrix, respectively. The derived CRB for DoA estimation is shown to be inversely proportional to the cubic of the number of sensors at each IRS for the point target case, and the CRB for target response matrix estimation is proportional to the number of sensors at each IRS for the extended target case. * Then, under the two target models and by considering two different CU types, we minimize the maximum CRB for targets estimation at all IRSs to achieve fair and optimized sensing performance, via jointly optimizing the transmit beamformers at the multiple BS and the reflective beamformers at the IRSs, subject to the minimum SINR constraints at individual CUs, the maximum transmit power constraint at the BS for transmit beamforming, and the unit-modulus constraints at the IRSs for reflective beamforming. The four SINR-constrained max-CRB minimization problems are highly non-convex due to the coupling of transmit beamforming at BS and passive beamforming at IRSs as well as the unit-modulus constraints. To tackle these problems, we propose efficient algorithms to obtain converged solutions based on alternating optimization and semi-definite relaxation (SDR). * Finally, numerical results show that our proposed designs perform close to the sensing performance upper bound (or CRB lower bound) by the sensing only design, and outperform various benchmark schemes based on transmit beamforming only or with zero-forcing (ZF) beamforming. It is also shown that the sensing signal interference cancellation is essential in further enhancing the ISAC performance. The rest of the paper is organized as follows. Section II presents the multi-semi-passive-IRS-enabled ISAC system model. Section III derives the closed-form estimation CRBs for both point and extended target cases. Section IV and Section V present the joint transmit and reflective beamforming solutions to the SINR constrained max-CRB minimization problems for the cases with point and extended targets, respectively. Section VI presents numerical results to validate the performance of our proposed joint beamforming designs as compared to other benchmarks. Finally, Section VII concludes the paper. Notations: The circularly symmetric complex Gaussian (CSCG) distribution with mean μ and covariance A is denoted as 𝒞𝒩(μ,A). The notations (·)^T, (·)^*, (·)^H, and tr(·) denote the transpose, conjugate, conjugate-transpose, and trace operators, respectively. I_L stands for the identity matrix of size L × L. Re(·) and Im(·) denote the real and imaginary parts of the argument, respectively. |·| and arg{·} denote the absolute value and angle of a complex element, respectively. vec(·) denotes the vectorization operator, 𝔼(·) denotes the expectation operation, and diag(x) denotes a diagonal matrix with the diagonal entries specified by vector x. rank(X) denotes the rank value of matrix X and [·]_l,p denotes the (l,p)-th element of a matrix. j denotes the imaginary unit. 𝒜\ a denotes the set after removing the element a in 𝒜. 
§ SYSTEM MODEL We consider a multi-IRS-enabled ISAC system as shown in Fig. <ref>, which consists of an ISAC BS with M antennas and K semi-passive IRSs. Each IRS is equipped with a ULA of N reflecting elements and N_s receive antenna elements for sensing. Let 𝒦 = {1,…,K} denote the set of IRSs and 𝒩 = {1,…,N} denote the set of elements at each IRS. In practice, the coverage of each IRS is limited, and thus we assume that each IRS is deployed to provide ISAC coverage at one separate area by ignoring the interference across different areas <cit.>. Furthermore, it is assumed that there are one sensing target and one signal-antenna CU at the coverage area of each IRS, and the direct links between the BS and the targets/CUs are blocked as assumed in prior work <cit.>. We consider the quasi-static channel model, in which the wireless channels remain unchanged over the transmission block of interest. Let T denote the block duration or the radar dwell time. Let G_k∈ℂ^N × M denote the channel matrix between the BS and IRS k, and h_k∈ℂ^N× 1 denote the channel vector between IRS k and the CU, which are assumed to be known by the system via proper channel estimation (see, e.g., <cit.>). We consider the transmit beamforming at the BS and the reflective beamforming at the IRSs. In particular, the BS sends combined information and sensing signals to facilitate ISAC <cit.>. Let s_k[t] denote the information signal for CU k at time symbol t, and w_k∈ℂ^M× 1 denote the corresponding transmit beamforming vector. Then, the information signal vector for all CUs is denoted as s[t] = [s_1[t],…,s_K[t]]^T with s[t] ∼𝒞𝒩(0,I_K). Let s_0[t] ∈ℂ^M× 1 denote the dedicated sensing signal, which is randomly generated independent of s[t], with zero mean and covariance matrix R_0=𝔼(s_0[t]s_0^H[t])≈1/T∑_t=1^Ts_0[t]s_0^H[t]≽ 0.[The statistical and sample covariance matrices are assumed to be approximately the same by considering T to be sufficiently large.] By combining the information and sensing signals, the transmit signal at the BS is x[t] = ∑_k ∈𝒦w_ks_k[t] + s_0[t], for which the covariance matrix is given by R_x = ∑_k ∈𝒦W_k+ R_0, where W_k = w_kw_k^H with rank(W_k) = 1 and W_k≽ 0. By letting P_sum denote the maximum transmit power at the BS, then we have the maximum transmit power constraint at the BS as 𝔼(x[t]^2) =tr(R_x) = tr(∑_k ∈𝒦W_k+ R_0) ≤ P_sum. Furthermore, we consider that each IRS k can adaptively adjust the reflection coefficients of its elements based on the channel conditions. Let ϕ_k = [ϕ_k,1,…,ϕ_k,n,…, ϕ_k,N]^T denote the complex reflection coefficients imposed by IRS k, where |ϕ_k,n|=1 and arg{ϕ_k,n}∈ (0,2π], ∀ n ∈{1,…,N}. First, we consider the wireless communication from the BS to the CUs assisted by the IRSs. By omitting the direct link from the BS to each CU k, the received signal by CU k through the BS-IRS k-CU k link at symbol t is y_k[t] = h_k^HΦ_kG_kx[t]+z_k[t] =h_k^HΦ_kG_kw_ks_k[t]_User's desired signal+∑_k' ∈𝒦\ kh_k^HΦ_kG_kw_k's_k'[t]_Inter-user interference + h_k^HΦ_kG_ks_0[t]_Dedicated sensing signal interference +z_k[t], where Φ_k = diag(ϕ_k), and z_k[t]∼𝒞𝒩(0,σ^2) denotes the additive white Gaussian noise (AWGN) at CU k that may include the background interference among different IRSs. As the sensing signal s_0[t] can be generated offline, it can be known prior to transmission. As such, we consider two different types of CU receivers depending on whether they have the capability of cancelling the interference by sensing signals s_0[t]. 
* Type-I CU receiver: Such CU receivers do not have the capability of canceling the interference by s_0[t] and thus treat such interference as noise. In this case, the received SINR of CU k is γ_k^(I) ({W_k},{Φ_k},R_0) = h_k^HΦ_kG_kW_kG_k^HΦ_k^Hh_k/∑_k' ∈𝒦\ kh_k^HΦ_kG_kW_k'G_k^HΦ_k^Hh_k+h_k^HΦ_kG_kR_0G_k^HΦ_k^Hh_k+σ _k^2. * Type-II CU receiver: Such CU receivers are dedicated designed for ISAC, which can cancel the interference from s_0[t] before decoding information signal s_k[t]. In this case, the received SINR of CU k is γ_k^(II) ({W_k},{Φ_k}) = h_k^HΦ_kG_kW_kG_k^HΦ_k^Hh_k/∑_k' ∈𝒦\ kh_k^HΦ_kG_kW_k'G_k^HΦ_k^Hh_k + σ _k^2. Next, we consider the target estimation at the IRSs. In particular, we consider two cases with point and extended targets, which are introduced in detail as follows. §.§.§ Point Target Case Each point target is assumed to have a small spatial range and can be viewed as a far apart unstructured point for the IRS. In this case, the round-trip target response matrix from IRS k to its corresponding target to IRS k is given by Ê_k = β_kã(θ_k)a_k^T(θ_k), where β_k denotes the complex coefficient accounting for the radar cross-section of the target and the round-trip path-loss, a_k(θ_k) denotes the array steering vector of reflecting elements at IRS k with DoA θ_k and ã_k(θ_k) denotes the steering vector at the sensors of IRS k. Here, we have a_k(θ_k) = [1,e^j 2π/λd sin(θ_k),…,e^j2π/λ (N-1)dsin(θ_k)]^T, ã_k(θ_k) = [1,e^j 2π/λd_s sin(θ_k),…,e^j2π/λ (N_s-1)d_ssin(θ_k)]^T, where λ denotes the carrier wavelength, and d and d_s denote the spacing of consecutive reflection elements and sensor elements at IRS k, respectively. In this case, the complex coefficient β_k and the target DoA θ_k are unknown parameters to be estimated at each IRS k. §.§.§ Extended Target Case Extended target refers to an object or an entity that occupies an area rather than being represented as a single point <cit.>. Unlike a point target that is typically modeled as a single point in space, an extended target can be an object with non-negligible size, such as a vehicle or a group of targets <cit.>. The echo signal from an extended target consists of multiple scatterers within the target volume and varies depending on its size, shape, orientation, and material properties. As a result, the point target model in (<ref>) only reflects a single point scatterer behavior, but is not suitable to characterize extended targets. For tractable theoretical analysis, we use Ê_k∈ℂ^N_s × N to denote the complete target response matrix from IRS k to extended target to IRS k, which reflects the comprehensive effect of all scatterers within the target volume. In this case, the complete target response matrix Ê_k corresponds to the parameters to be estimated at each IRS k. Based on the two target models, we consider the target sensing at each semi-passive IRS k by using its dedicated sensors for receiving the echo signals reflected by the targets <cit.>. In particular, the received echo signal at the sensors of each IRS k from target k at time symbol t is y̅_k[t] =Ê_kΦ_kG_kx[t] + z̅_k[t], where z̅_k[t] ∼𝒞𝒩(0,σ_s^2I_N_s) denotes the AWGN at the sensor receiver of IRS k. By defining X = [x[1],…,x[T]], Y̅_k = [y̅_k[1],…,y̅_k[T]], and Z̅ = [z̅_k[1],…,z̅_k[t]], we have Y̅_k = Ê_kΦ_kG_kX+ Z̅_k. We suppose that each IRS k is aware of the channel matrix G_k and the BS's transmitted signal X via proper channel estimation and signalling from the BS. 
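To make the notation concrete, the following sketch (Python/NumPy) assembles the ULA steering vectors a_k(θ_k) and ã_k(θ_k), the point-target response matrix E_k, one snapshot of the reflected echo, and the Type-I/Type-II SINRs. All channels, beamformers, reflection phases, and target parameters are randomly drawn placeholders, and half-wavelength spacing d=d_s=λ/2 is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, Ns, K = 8, 16, 6, 2            # BS antennas, IRS elements, IRS sensors, IRSs (placeholders)
lam = 1.0; d = ds = lam / 2          # half-wavelength element/sensor spacing (assumption)
sigma2, sigma_s2 = 1.0, 1.0

def steer(theta, n, spacing):        # ULA steering vector
    return np.exp(1j * 2 * np.pi / lam * spacing * np.arange(n) * np.sin(theta))

# randomly drawn placeholder channels, reflection patterns, and transmit beamformers
G   = [rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M)) for _ in range(K)]
h   = [rng.normal(size=N) + 1j * rng.normal(size=N) for _ in range(K)]
Phi = [np.diag(np.exp(1j * 2 * np.pi * rng.random(N))) for _ in range(K)]
w   = [(rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2 * M) for _ in range(K)]
R0  = 0.1 * np.eye(M, dtype=complex)                 # dedicated sensing covariance (placeholder)

def sinr(k, type_II=False):
    v = G[k].conj().T @ Phi[k].conj().T @ h[k]       # v_k = G_k^H Phi_k^H h_k
    sig = np.abs(v.conj() @ w[k]) ** 2
    iui = sum(np.abs(v.conj() @ w[j]) ** 2 for j in range(K) if j != k)
    sens = 0.0 if type_II else np.real(v.conj() @ R0 @ v)   # cancelled by a Type-II receiver
    return sig / (iui + sens + sigma2)

print(sinr(0), sinr(0, type_II=True))                # gamma_0^(I) <= gamma_0^(II)

# one snapshot of the point-target echo at the sensors of IRS 0
theta0, beta0 = 0.3, 0.5 + 0.2j
E0 = np.outer(steer(theta0, Ns, ds), steer(theta0, N, d))    # E_k = a_tilde(theta) a(theta)^T
x  = sum(w[j] * (rng.normal() + 1j * rng.normal()) / np.sqrt(2) for j in range(K)) \
   + np.sqrt(0.05) * (rng.normal(size=M) + 1j * rng.normal(size=M))   # info + sensing sample s_0
y_echo = beta0 * E0 @ Phi[0] @ G[0] @ x \
       + np.sqrt(sigma_s2 / 2) * (rng.normal(size=Ns) + 1j * rng.normal(size=Ns))
```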
Accordingly, based on the received echo signal Y̅_k in (<ref>), each IRS k needs to estimate the DoA θ_k and the complex coefficient β_k as unknown parameters for the point target case, and needs to estimate the complete target response matrix Ê_k for the extended target case. § ESTIMATION CRB DERIVATION In this section, we derive the estimation CRB as the sensing performance metric for each IRS k to estimate the unknown parameters based on (<ref>), which provides the variance lower bound for any unbiased estimators. §.§ Point Target Case For the point target case, the target response matrix Ê_k is particularly specified by (<ref>). Thus, the echo signal received at IRS k in (<ref>) is re-expressed as Y̅_k = β_kE_kΦ_kG_kX+ Z̅_k, where E_k = ã_k(θ_k)a_k^T(θ_k). Let ξ_k = [θ_k,β_k]^T denote the three real parameters to be estimated, where β_k = [Re{β_k},Im{β_k}]. By vectorizing Y̅_k in (<ref>), we have ỹ_k = vec(Y̅_k) = η̃_k + z̃_k, where η̃_k = vec(β_kE_kΦ_kG_kX) and z̃_k = vec (Z_k) ∼𝒞𝒩(0,σ_s^2I_N_s T). First, we derive the Fisher information matrix (FIM) J_k∈ℂ^3 × 3 for estimating the vector μ from (<ref>), which is given by <cit.> [J_k]_l,p = 2/σ_s^2Re{∂η̃_k^H/∂ [ξ_k]_l∂η̃_̃k̃/∂ [ξ_k]_p}, l,p ∈{1,2,3}. Recall that the statistic covariance matrix R_x = ∑_k ∈𝒦W_k + R_0 is also approximated as the sample covariance matrix at the BS. Based on (<ref>), the FIM is derived in Lemma <ref> as follows. The FIM J_k for estimating ξ_k is given by J_k = [[ J_θ_k,θ_k J_θ_k,β_k; J_θ_k,β_k^T J_β_k,β_k ]], where J_θ_k,θ_k= 2T|β_k|^2/σ_s^2tr(Ė_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HĖ_k^H(θ_k)) , J_θ_k,β_k = 2T/σ_s^2Re{β_k^*tr(E_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HĖ_k^H(θ_k))[1,j]}, J_β_k,β_k = 2T/σ_s^2tr(E_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HE_k^H(θ_k))I_2, with Ė_k(θ_k) = ∂E_k/∂θ_k denoting the partial derivative of E_k with respect to θ_k. See Appendix <ref>. Next, based on the FIM J_k, we are particularly interested in the CRB for each IRS k to estimate θ_k,[As the complex coefficient β_k is affected by both the radar cross section and the round-trip pathloss, it is difficult for IRS k to extract target information from β_k. Therefore, only the CRB for estimating θ_k is considered.] which is obtained as CRB_k(θ_k) = [J_k^-1]_1,1= [J_θ_k,θ_k - J_θ_k,β̃_kJ_β̃_k,β̃_k^-1J_θ_k,β̃_k^T]^-1. For E_k = ã_k(θ_k)a_k^T(θ_k), it follows that Ė_k(θ_k) = ã̇_k(θ_k)a_k^T(θ_k)+ã_k(θ_k)ȧ_k^T(θ_k) =j 2π/λd_s cos (θ_k) (D_N_sã_k(θ_k)a_k^T(θ_k)+ã_k(θ_k)a_k^T(θ_k)D_N), where D_N = diag(0,1,…,N-1). Based on (<ref>) and (<ref>), we have the following proposition. The CRB for each IRS to estimate θ_k is given by CRB_k(θ_k)=σ_s^2/2 T|β_k|^2(tr(Ė_k(θ_k)Φ_kG_kR_x G_k^HΦ_k^HĖ_k^H(θ_k))-|tr(E_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HĖ_k^H(θ_k))|^2/tr(E_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HE_k^H(θ_k))) =σ_s^2 λ^2/8 T|β_k|^2π^2 d_s^2 cos^2(θ_k)(( N_s - 1)N_s( N_s + 1)/12 tr( ϕ_k^TU_kϕ_k^*) + N_str( ϕ_k^TD_NU_kD_Nϕ_k^*) - N_s| tr( ϕ_k^TU_kD_Nϕ_k^*)|^2/ tr( ϕ_k^TU_kϕ_k^*)), where U_k = A_kG_kR_xG_k^HA_k^H and A_k = diag (a_k(θ_k)). See Appendix <ref>. Based on the closed-form CRB expression in (<ref>), we have the following proposition. When the number of sensing elements N_s at each IRS becomes sufficiently large, the CRB of estimating θ_k for the point target case decreases inversely proportional to the cubic of N_s, i.e., CRB_k(θ_k) ∝1/N_s^3. See Appendix <ref>. §.§ Extended Target Case Next, we consider the extended target case, in which the objective of each IRS k is to estimate the complete target response matrix Ê_k∈ℂ^N_s × N. 
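(The 1/N_s^3 behaviour of the point-target CRB in the preceding subsection can be checked directly. The sketch below (Python/NumPy) evaluates the trace form of CRB_k(θ_k) for growing numbers of sensors, using a randomly drawn placeholder channel, a random reflection pattern, an isotropic transmit covariance, and half-wavelength spacing; the product CRB·N_s^3 is seen to approach a constant.)

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, T = 8, 16, 64
lam = 1.0; d = ds = lam / 2
sigma_s2, beta, theta = 1.0, 0.6 + 0.3j, 0.4

# placeholder channel, reflection pattern, and transmit covariance
G = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
Phi = np.diag(np.exp(1j * 2 * np.pi * rng.random(N)))
Rx = np.eye(M, dtype=complex)                 # isotropic transmission, tr(Rx) = M

def steer(n, spacing):
    return np.exp(1j * 2 * np.pi / lam * spacing * np.arange(n) * np.sin(theta))

def crb_point(Ns):
    a, at = steer(N, d), steer(Ns, ds)
    da  = 1j * 2 * np.pi / lam * d  * np.cos(theta) * np.arange(N)  * a    # da/dtheta
    dat = 1j * 2 * np.pi / lam * ds * np.cos(theta) * np.arange(Ns) * at   # da_tilde/dtheta
    E  = np.outer(at, a)
    dE = np.outer(dat, a) + np.outer(at, da)
    Q  = Phi @ G @ Rx @ G.conj().T @ Phi.conj().T     # Phi_k G_k Rx G_k^H Phi_k^H
    t1 = np.trace(dE @ Q @ dE.conj().T).real
    t2 = np.trace(E  @ Q @ dE.conj().T)
    t3 = np.trace(E  @ Q @ E.conj().T).real
    return sigma_s2 / (2 * T * abs(beta) ** 2 * (t1 - abs(t2) ** 2 / t3))

for Ns in [8, 16, 32, 64]:
    print(Ns, crb_point(Ns), crb_point(Ns) * Ns ** 3)  # CRB * Ns^3 approaches a constant
```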
By vectorizing the received echo signal in (<ref>) at IRS k, we have ŷ_k = vec(Y̅_k) = η̂_k + z̃_k, where η̂_k = vec (Ê_kΦ_kG_kX) = ((Φ_kG_kX)^T⊗I_N_s)ĥ_k, and ĥ_k = vec(Ê_k). Based on the definition of FIM in (<ref>) and the received echo signal model in (<ref>), the FIM for estimating ĥ_k is given in the following lemma. The FIM F_k for estimating ĥ_k from (<ref>) is given by F_k = [[ F_Re{ĥ_k},Re{ĥ_k} F_Re{ĥ_k},Im{ĥ_k}; F_Im{ĥ_k},Re{ĥ_k} F_Im{ĥ_k},Im{ĥ_k} ]], where F_Re{ĥ_k},Re{ĥ_k} =F_Im{ĥ_k},Im{ĥ_k} =2/σ_s^2Re{(Φ_kG_kX)^* (Φ_kG_kX)^T⊗I_N_s}, F_Re{ĥ_k},Im{ĥ_k} =-F_Im{ĥ_k},Re{ĥ_k} =2/σ_s^2Re{(Φ_kG_kX)^* (Φ_kG_kX)^T⊗I_N_s}. See Appendix <ref>. Based on (<ref>), we have the following proposition. The CRB for estimation the target response matrix Ê_k by IRS k is given by CRB_k(Ê_k) =tr(F_k^-1) =N_sσ_s^2/Ttr((Φ_k^*G_k^*R_x^*G_k^TΦ_k^T)^-1) =N_sσ_s^2/Ttr((G_kR_x^HG_k^H)^-1). This proposition can be verified following a similar procedure as in <cit.>. Therefore, the detailed proof is omitted for brevity. It is observed from (<ref>) that the CRB_k(Ê_k) only depends on the transmit covariance R_x or the transmit beamformers. Besides, the condition M ≥rank(R_x)≥ N ≥rank(G_k) must hold to ensure CRB_k(Ê_k) to be bounded, such that the target response matrix Ê_k is estimable. Moreover, we have the following proposition to reveal the relationship between CRB and the number of sensors N_s at each IRS. When the number of sensing elements N_s at each IRS becomes sufficiently large, the CRB of estimating Ê_k for the extended target case increases proportional to N_s, i.e., CRB_k(Ê_k) ∝N_s. See Appendix <ref>. By comparing Propositions <ref> versus <ref>, it is observed that deploying more reflecting elements at IRS is beneficial to enhance the performance for target AoA estimation (in terms of lower CRB) for the point target case, but leads to higher CRB for the extended target case. This is due to the fact that higher array gain can be exploited for sensing in the former case, but more target parameters need to be estimated in the latter case. § JOINT BEAMFORMING FOR CRB MINIMIZATION WITH POINT TARGETS In this section, we jointly optimize the transmit beamforming at the BS and the reflective beamforming at the multiple IRSs to minimize the estimation CRB CRB_k(θ_k) in (<ref>) for the point target case, subject to the maximum transmit power constraint at the BS and the SINR requirements at individual CUs. To ensure the fair optimization of sensing performance, we particularly minimize the maximum CRB among the K IRSs. As such, for Type-I and Type-II CU receivers, the SINR-constrained max-CRB minimization problems are formulated as (P1-I) and (P1-II), respectively. (P1-I): min_{W_k},{Φ_k},R_0 max_k ∈𝒦 CRB_k(θ_k) s.t. γ_k^(I) ({W_k},{Φ_k},R_0) ≥Γ_k, ∀ k ∈𝒦 tr(∑_k ∈𝒦W_k + R_0) ≤ P_sum R_0≽0 W_k≽0, ∀ k ∈𝒦 rank(W_k) = 1, ∀ k ∈𝒦 |ϕ_k,n| = 1, ∀ n ∈𝒩, ∀ k ∈𝒦. (P1-II): min_{W_k},{Φ_k},R_0 max_k ∈𝒦 CRB_k(θ_k) s.t. γ_k^(II) ({W_k},{Φ_k},R_0) ≥Γ_k, ∀ k ∈𝒦 (<ref>), (<ref>), (<ref>), (<ref>),and (<ref>). In problems (P1-I) and (P1-II), (<ref>), (<ref>) and (<ref>) denote the SINR constraints at the CUs, the transmit power constraint at the BS, and the unit-modular constraints at the IRSs, respectively. Notice that problems (P1-I) and (P1-II) are both non-convex. This is due to the fact that their objective functions are non-convex and the SINR constraints in (<ref>) and (<ref>), the unit-modulus constraints in (<ref>), and the rank-one constraints in (<ref>) are all non-convex. 
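Before turning to the beamforming designs, we note that the extended-target CRB in (<ref>) and the linear growth in N_s stated in Proposition <ref> are equally easy to evaluate; the snippet below uses illustrative dimensions with M ≥ N, so that G_k R_x G_k^H is invertible and the CRB is bounded.

    import numpy as np

    rng = np.random.default_rng(2)
    M, N, T, sigma_s2 = 24, 16, 100, 1.0          # M >= N keeps the extended-target CRB bounded (illustrative sizes)
    G = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)
    B = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
    R_x = B @ B.conj().T / M                      # full-rank transmit covariance

    core = np.real(np.trace(np.linalg.inv(G @ R_x @ G.conj().T)))
    for Ns in (16, 32, 64, 128):
        crb_ext = Ns * sigma_s2 / T * core        # CRB_k(E_k) = Ns * sigma^2 / T * tr((G R_x G^H)^{-1})
        print(Ns, crb_ext, crb_ext / Ns)          # the last column is constant: the CRB grows linearly in Ns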
To tackle the non-convexity, we propose alternating-optimization-based algorithms to solve them, in which the transmit beamforming {W_k}/R_0 at the BS and the reflective beamforming {Φ_k} at IRSs are optimized in an alternating manner. Note that problems (P1-I) and (P1-II) are similar, with the only difference lying on the SINR γ_k^(I) in (<ref>) for Type-I receivers versus γ_k^(II) in (<ref>) for Type-II receivers. As a result, the proposed algorithm for solving problem (P1-I) can be directly generalized to solve (P1-II) by replacing γ_k^(II) with γ_k^(I). In the following, we focus on solving problem (P1-I), and omit the details for solving (P1-II). In the following, we obtain the optimal transmit beamformers {W_k} and R_0 for problem (P1-I) under given fixed reflective beamformers {Φ_k} in Section IV-A, and then optimize {Φ_k} for (P1-I) under given {W_k} and {R_0} in Section IV-B. §.§ Optimal Transmit Beamforming to (P1-I) with Given Reflective Beamforming First, we aim to optimize the transmit beamforming {W_k} and R_0 at the BS with given {Φ_k}, for which the optimization problem is expressed as (P2): min_{W_k},R_0 max_k ∈𝒦 CRB_k(θ_k) s.t. (<ref>),(<ref>),(<ref>),(<ref>)and (<ref>). In (P2), we use the CRB formula in (<ref>) for transmit beamforming optimization. Note that problem (P2) is still non-convex due to the non-convexity of the objective function, the SINR constraints in (<ref>), and the rank constraints in (<ref>). In the following, we use the SDR technique to find the optimal solution. First, we define G̃_k = G_k^HΦ_k^Hh_kh_k^HΦ_kG_k, and equivalently express the SINR constraints in (<ref>) as 1/Γ_ktr(G̃_kW_k) ≥∑_k' ∈𝒦\ ktr(G̃_kW_k') + tr(G̃_kR_0)+ σ _k^2, ∀ k∈𝒦. Furthermore, it is observed from (<ref>) that CRB_k(θ_k)= κ/f_2,k ({W_k},R_0) - f_3,k ({W_k},R_0), where κ = σ_s^2 λ^2/8 T|β_k|^2π^2 d_s^2 cos^2(θ_k) is a constant, and f_2,k ({W_k},R_0) = ( N_s - 1)N_s( N_s + 1)/12 tr( ϕ_k^TB_k(∑_k ∈𝒦W_k + R_0)B_k^Hϕ_k^*) + N_str( ϕ_k^TD_NB_k(∑_k ∈𝒦W_k + R_0)B_k^HD_Nϕ_k^*), f_3,k ({W_k},R_0) = N_s| tr( ϕ_k^TB_k(∑_k ∈𝒦W_k + R_0)B_k^HD_Nϕ_k^*)|^2/ tr( ϕ_k^TB_k(∑_k ∈𝒦W_k + R_0)B_k^Hϕ_k^*), with B_k = A_kG_k. As such, problem (P2) is can be equivalently solved via solving the following problem: (P2.1): max_{W_k}, R_0 min_k ∈𝒦 f_2,k ({W_k},R_0) - f_3,k ({W_k},R_0) s.t. (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>). To solve problem (P2.1), we introduce the auxiliary variables ν̃_1 and {ν̃_2,k}. Accordingly, problem (P2.1) is equivalently reformulated as (P2.2): max_{W_k}, R_0, ν̃_1, {ν̃_2,k} ν̃_1 s.t. f_2,k ({W_k},R_0)- ν̃_2,k≥ν̃_1, ∀ k ∈𝒦 ν̃_1≥ 0 ν̃_2,k≥ f_3,k ({W_k},R_0), ∀ k ∈𝒦 (<ref>), (<ref>), (<ref>), (<ref>),and (<ref>). To deal with the non-convex constraints in (<ref>), we transform them into a set of linear matrix inequality (LMI) constraints in (<ref>) base on the Schur’s complement. [[ tr( ϕ_k^TB_k(∑_k ∈𝒦W_k + R_0)B_k^Hϕ_k^*) √(N_s) tr( ϕ_k^TB_k(∑_k ∈𝒦W_k + R_0)B_k^HD_Nϕ_k^*); √(N_s) tr( ϕ_k^TD_NB_k(∑_k ∈𝒦W_k^H + R_0^H)B_k^Hϕ_k^*) ν̃_2,k ]] ≽ 0, ∀ k ∈𝒦. As a result, problem (P2.2) is equivalent to the following convex problem: (P2.3): max_{W_k}, R_0, ν̃_1, {ν̃_2,k} ν̃_1 s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>). Furthermore, we remove the rank constraints in (<ref>), and accordingly obtain the relaxed version of (P2.3) as (SDR2.3). Note that (SDR2.3) is a convex semi-definite program (SDP), which can be optimally solved by convex solvers such as CVX <cit.>. Let {W_k} and R_0 denote the obtained optimal solution to (SDR2.3). 
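As an aside, the Schur-complement step behind the LMI constraints in (<ref>) can be sanity-checked numerically: for fixed data, the 2 × 2 matrix is positive semi-definite exactly when ν̃_2,k dominates the fractional term in (<ref>). The snippet below verifies this equivalence on randomly drawn, purely illustrative quantities.

    import numpy as np

    rng = np.random.default_rng(3)
    N, Ns = 16, 32
    phi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
    D_N = np.diag(np.arange(N))
    B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    S = B @ B.conj().T                                 # stands in for B_k (sum_k W_k + R_0) B_k^H

    t_d = np.real(phi @ S @ phi.conj())                # (1,1) entry of the LMI
    t_o = phi @ S @ D_N @ phi.conj()                   # off-diagonal trace term
    f3 = Ns * abs(t_o)**2 / t_d                        # fractional term that nu2 must dominate

    for nu2 in (0.5 * f3, f3, 2.0 * f3):
        lmi = np.array([[t_d, np.sqrt(Ns) * t_o],
                        [np.sqrt(Ns) * np.conj(t_o), nu2]])
        psd = np.min(np.linalg.eigvalsh(lmi)) >= -1e-9
        print(nu2 >= f3, psd)                          # the scalar constraint and the LMI agree in all three cases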
As the obtained optimal solution {W_k} may not be of rank-one, we construct the optimal rank-one solutions of {W_k^⋆} and the corresponding R_0^⋆ to problem (P2.3) and thus (P2) by using the following proposition. Based on the obtained optimal solution {W_k} and R_0 to (SDR2.3), the optimal solution to problem (P2.3) and (P2) is given by W_k^⋆ = w_kw_k^H, ∀ k∈𝒦, with w_k = (h̃_k^HW_kh̃_k)^-1/2W_kh̃_k, where h̃_k^H = h_k^HΦ_kG_k, and the corresponding optimal solution of R_0^⋆ is given by R_0^⋆ = R_0+∑_k ∈𝒦W_k-∑_k∈𝒦W_k^⋆. See Appendix <ref>. It is shown in Proposition <ref> that the solutions {W_k^⋆} and R_0^⋆ in (<ref>) and (<ref>) are actually optimal for problem (SDR2.2). Therefore, the SDR is tight between (P2.2) and (SDR2.2), and thus we obtain the optimal solution to (P2). §.§ Reflective Beamforming Optimization for (P1-I) with Given Transmit Beamforming Next, we optimize the reflective beamforming {Φ_k} or {ϕ_k} for problem (P1-I) with given {w_k} and R_0. In this case, the optimization problem is expressed as (P3): min_{ϕ_k} max_k ∈𝒦 CRB_k(θ_k) s.t. (<ref>),(<ref>), in which we use the CRB formulas in (<ref>) for reflective beamforming optimization. Notice that problem (P3) can be equivalently decomposed into K subproblems given by (P3.1.k): max_ϕ_k ( N_s - 1)N_s( N_s + 1)/12 tr( ϕ _k^TU_kϕ_k^*) + N_str( ϕ_k^TD_NU_kD_Nϕ_k^*) - N_s| tr( ϕ_k^TU_kD_Nϕ_k^*)|^2/ tr( ϕ_k^TU_kϕ_k^*) s.t. (<ref>),(<ref>). Problem (P3.1.k) is still non-convex due to the non-convexity of the objective function, the SINR constraints in (<ref>), and the unit-modulus constraints in (<ref>). In the following, we use the SDR technique to handle problem (P3.1.k). Towards this end, we define H_k = diag(h_k^H) and Θ_k = ϕ_k^*ϕ_k^T, where Θ_k≽0 and rank(Θ_k) = 1, ∀ k∈𝒦. Then, the SINR at CU k becomes γ̃_k^(I) ({Θ_k}) = tr( H_kG_kw_kw_k^HG_k^HH_k^HΘ_k)/∑_k' ∈𝒦\ ktr(H_kG_kw_k'w_k'^HG_k^HH_k^HΘ_k) +tr(H_kG_kR_0G_k^HH_k^HΘ_k) + σ_k^2. By substituting (<ref>) into the constraints in (<ref>) and skipping the rank-one constraints, problem (P3.1.k) is relaxed as (SDR3.1.k): max_Θ_k ( N_s - 1)N_s( N_s + 1)/12 tr( U_kΘ_k) + N_str( D_NU_kD_NΘ_k) - N_s| tr( U_kD_NΘ_k)|^2/ tr( U_kΘ_k) s.t. ∑_k' ∈𝒦\ ktr(H_kG_kw_k'w_k'^HG_k^HH_k^HΘ_k) +tr(H_kG_kR_0G_k^HH_k^HΘ_k) + σ _k^2 -1/Γ_k tr( H_kG_kw_kw_k^HG_k^HH_k^HΘ_k)≤ 0 [Θ_k]_n,n = 1, n ∈𝒩 Θ_k≽0. Furthermore, by introducing an auxiliary variable τ_k, problem (SDR3.1.k) is equivalently re-expressed as (SDR3.2.k): max_Θ_k, τ_k ( N_s - 1)N_s( N_s + 1)/12 tr( U_kΘ_k)+ N_str( D_NU_kD_NΘ_k) - τ_k s.t. [[ τ_k √(N_s) tr( U_kD_NΘ_k); √(N_s) tr( Θ_k^HD_NU_k^H) tr( U_kΘ_k) ]] ≽ 0 (<ref>),(<ref>), and (<ref>). Here, the constraints in (<ref>) is obtained from τ_k ≥N_s| tr( U_kD_NΘ_k)|^2/ tr( U_kΘ_k) by using Schur's complement. Note that problem (SDR3.2.k) is a convex SDP which can be optimally solved by CVX. Let Θ_k^⋆ denote the obtained optimal solution to (SDR3.2.k). Notice that the obtained solution Θ_k^⋆ may not be of rank-one, and as a result, it may not be the optimal solution to problem (P3.1.k) or (P3). To tackle this issue, we further use the Gaussian randomization to construct a high-quality solution to problem (P3.1.k) based on the obtained {Θ_k^⋆}. Specifically, we first generate a number of randomly vectors r_k∼𝒞𝒩(0,I_N), and then construct a number of rank-one solutions as ϕ_k = e^jarg{(Θ_k^⋆)^1/2r_k}. Then, we find the desirable solution of ϕ_k that minimizes CRB_k(θ_k) among all random generated ϕ_k's. As a result, problem (P3) is finally solved. 
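The two post-processing steps used above, namely the rank-one reconstruction of Proposition <ref> and the Gaussian randomization for the reflection coefficients, each amount to a few lines. The sketch below shows both on synthetic inputs; the SDR solutions W_k and Θ_k^⋆ are replaced by random positive semi-definite matrices, the selection criterion is a placeholder for the CRB objective, and a Cholesky factor serves as the matrix square root, so everything here is illustrative rather than the exact routine behind the reported results.

    import numpy as np

    rng = np.random.default_rng(4)
    M, N = 8, 16

    # rank-one reconstruction of the transmit beamformer (Proposition on W_k^star)
    A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
    W_bar = A @ A.conj().T                                    # stand-in for an SDR solution (PSD, generally not rank-one)
    h_tilde = rng.normal(size=M) + 1j * rng.normal(size=M)    # effective channel G_k^H Phi_k^H h_k
    w = (W_bar @ h_tilde) / np.sqrt(np.real(h_tilde.conj() @ W_bar @ h_tilde))
    W_star = np.outer(w, w.conj())                            # rank-one, preserves the useful power h^H W h
    print(np.allclose(h_tilde.conj() @ W_star @ h_tilde, h_tilde.conj() @ W_bar @ h_tilde))

    # Gaussian randomization for the reflective beamformer
    C = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    Theta_star = C @ C.conj().T                               # stand-in for the relaxed solution of the reflective SDR
    L = np.linalg.cholesky(Theta_star + 1e-9 * np.eye(N))     # Cholesky factor used as a square root of Theta_star

    def objective(phi):
        # placeholder for the CRB-based criterion of (P3.1.k); any figure of merit can be plugged in here
        return np.real(phi.conj() @ Theta_star @ phi)

    best_phi, best_val = None, -np.inf
    for _ in range(200):                                      # a few hundred randomizations are typically enough
        r = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
        phi = np.exp(1j * np.angle(L @ r))                    # projection onto the unit-modulus constraint
        if objective(phi) > best_val:
            best_phi, best_val = phi, objective(phi)
    print(best_val)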
In summary, the alternating-optimization-based algorithm for solving (P1-I) is implemented by solving problems (P2) and (P3) alternately. Notice that problem (P2) is optimally solved and the solution to (P3) would lead to a decreasing max-CRB with a sufficient number of Gaussian randomizations. As such, the alternating optimization leads to monotonically non-increasing max-CRB values over iterations, and thus is ensured to converge. Furthermore, the alternating-optimization-based algorithm can be performed in a distributed way in practice. In particular, in each iteration, the BS can first optimize the transmit beamforming by solving problem (P2), and then each IRS can optimize its own reflective beamforming by solving problem (P3.1.k) in a distributed manner. This is thus very efficient in practical implementation. § JOINT BEAMFORMING FOR CRB MINIMIZATION WITH EXTENDED TARGETS In this section, we jointly optimize the transmit beamforming {W_k} and R_0 at the BS and the reflective beamforming {Φ_k} at the IRSs to minimize the maximum estimation CRB among the IRSs, subject to the maximum transmit power constraint at the BS and the SINR requirements at individual CUs. For Type-I and Type-II CU receivers, the SINR-constrained max-CRB minimization problems are formulated as (P4-I) and (P4-II) in the following, respectively. (P4-I): min_{W_k},{Φ_k},R_0 max_k ∈𝒦 CRB_k(Ê_k) s.t. (<ref>),(<ref>),(<ref>), (<ref>),(<ref>), and (<ref>). (P4-II): min_{W_k},{Φ_k},R_0 max_k ∈𝒦 CRB_k(Ê_k) s.t. (<ref>),(<ref>),(<ref>), (<ref>),(<ref>), and (<ref>). Problems (P4-I) and (P4-II) are both non-convex. This is due to the fact that their objective functions, the SINR constraints in (<ref>) and (<ref>), the unit-modulus constraints in (<ref>), and the rank-one constraints in (<ref>) are all non-convex. To tackle the non-convexity, we propose an alternating-optimization-based algorithm to solve them by optimizing {W_k}/R_0 and {Φ_k} alternately. Notice that problems (P4-I) and (P4-II) have similar structures. Therefore, in the following, we only focus on solving problem (P4-I) and omit the details for solving (P4-II). §.§ Optimal Transmit Beamforming to (P4-I) with Given Reflective Beamforming First, we aim to optimize the transmit beamforming {W_k} and dedicated signal covariance R_0 with given {Φ_k}. In this case, the optimization problem is formulated as (P5): min_{W_k},R_0 max_k ∈𝒦 CRB_k(Ê_k) s.t. (<ref>),(<ref>), (<ref>), (<ref>), and (<ref>). By introducing an auxiliary variable ũ, problem (P5) is equivalent to the following problem. (P5.1): min_{W_k}, R_0, ũ ũ s.t. ũ≥N_sσ_s^2/Ttr((G_k(∑_k=1^KW_k + R_0)^HG_k^H)^-1) , ∀ k (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>). We then use the SDR technique to handle problem (P5.1). By removing the rank-one constraints in (<ref>), we obtain the relaxed version of (P5.1) as (SDR5.1). Note that problem (SDR5.1) is a convex SDP problem that can be solved by CVX. Let {W_k} and R_0 denote the obtained optimal solution to problem (SDR5.1). The optimal rank-one solution {W_k^⋆} and the corresponding R_0^⋆ to problems (P5) and (P5.1) can be constructed via the following proposition. Based on the obtained optimal solution {W_k} and R_0 to (SDR5.1), the optimal solution of {W_k} to problem (P5) and (P5.1) is given by W_k^⋆ = w_kw_k^H, ∀ k∈𝒦, with w_k = (h̃_k^HW_kh̃_k)^-1/2W_kh̃_k, and the corresponding R_0^⋆ is given by R_0^⋆ = R_0+∑_k ∈𝒦W_k-∑_k∈𝒦W_k^⋆. The proof is similar to that of Proposition <ref>, and thus is omitted. 
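For concreteness, the relaxed transmit-design step (SDR5.1) can be prototyped in a few lines with a generic conic modeling tool such as CVXPY. The sketch below is a deliberately simplified, real-valued instance with Type-I SINR constraints, illustrative dimensions and randomly drawn channels; the trace-of-inverse CRB is encoded with the matrix_frac atom, and the rank-one recovery and reflective-beamforming steps discussed above are omitted. It is a schematic aid rather than the implementation behind the reported results.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(5)
    K, M, N, Ns, T = 2, 8, 4, 16, 100                  # illustrative sizes; M >= N keeps the CRB bounded
    sigma_s2, sigma_c2, Gamma, P_sum = 1.0, 1.0, 2.0, 10.0

    G = [rng.normal(size=(N, M)) / np.sqrt(M) for _ in range(K)]   # BS -> IRS k channels (real-valued stand-ins)
    g = [rng.normal(size=M) for _ in range(K)]                     # effective BS -> IRS k -> CU k channels
    Gt = [np.outer(gk, gk) for gk in g]                            # rank-one matrices playing the role of G~_k

    W = [cp.Variable((M, M), PSD=True) for _ in range(K)]          # information covariances W_k
    R0 = cp.Variable((M, M), PSD=True)                             # dedicated sensing covariance R_0
    R_x = sum(W) + R0

    # per-IRS extended-target CRB: Ns*sigma^2/T * tr((G_k R_x G_k^T)^{-1}), written with the matrix_frac atom
    crb = [Ns * sigma_s2 / T * cp.matrix_frac(np.eye(N), G[k] @ R_x @ G[k].T) for k in range(K)]

    cons = [cp.trace(R_x) <= P_sum]
    for k in range(K):
        interf = sum(cp.trace(Gt[k] @ W[j]) for j in range(K) if j != k) + cp.trace(Gt[k] @ R0)
        cons.append(cp.trace(Gt[k] @ W[k]) / Gamma >= interf + sigma_c2)   # Type-I SINR constraint

    prob = cp.Problem(cp.Minimize(cp.maximum(*crb)), cons)
    prob.solve()
    print(prob.status, prob.value)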
§.§ Reflective Beamforming Optimization for (P4-I) with Given Transmit Beamforming Next, we optimize the reflecting coefficients {ϕ_k} in problem (P4-I) with given {W_k} and R_0. Note that the objective function is independent of {ϕ_k}. As a result, any {ϕ_k} that guarantee the SINR constraints in (<ref>) is a feasible solution to problem (P4-I). As the reflective beamformers {ϕ_k} may affect the optimization of {W_k} and R_0 by changing the achieved maximum SINR, we propose to optimize {ϕ_k} to maximize the SINR at CUs for expanding the feasible region of problem (P5). Note that the SINR of each CU k only depends on its corresponding IRS k. Therefore, we optimize the reflective beamforming at each IRS k to maximize the SINR at the corresponding CU k. The corresponding optimization problem becomes (P6): max_Φ_k | h_k^HΦ_kG_kw_k|^2/∑_k' k^K|h_k^HΦ_kG_kw_k'|^2 +h_k^HΦ_kG_kR_0G_k^HΦ_k^Hh_k+ σ _k^2 s.t. (<ref>). Furthermore, by replacing the objective function in problem (P6) by γ̃_k^(I) in (<ref>) and skipping the rank-one constraints on Θ_k, we have a relaxed version of (P6) as (SDR6): max_Θ_k tr( H_kG_kw_kw_k^HG_k^HH_k^HΘ_k)/∑_k' k^Ktr(H_kG_kw_k'w_k'^HG_k^HH_k^HΘ_k) +tr(H_kG_kR_0G_k^HH_k^HΘ_k)+ σ _k^2 s.t. [Θ_k]_n,n = 1, n = 1,…,N Θ_k≽0. Problem (SDR6) is still a non-convex problem as the objective function is non-convex. To tackle this issue, we define T_1 = H_kG_kw_kw_k^HG_k^HH_k^H, T_2 = ∑_k' k^KH_kG_kw_k'w_k'^HG_k^HH_k^H +H_kG_kR_0G_k^HH_k^H. Then, problem (SDR6) is equivalent to the following optimization problem: (SDR6.1): max_Θ_k tr(T_1Θ_k)/tr(T_2Θ_k) + σ _k^2 s.t. [Θ_k]_n,n = 1, n = 1,…,N Θ_k≽0. By further defining M_k = Θ_k/tr(T_2Θ_k) + σ _k^2 and t_k = 1/tr(T_2Θ_k) + σ _k^2, we transform the optimization of Θ_k as that of M_k and t_k. As such, problem (SDR6.1) is transformed into the following problem: (SDR6.2): max_M_k,t_k tr(T_1M_k) s.t. [M_k]_n,n - t_k= 0, n = 1,…,N M_k≽0 t_k ≥ 0 tr(T_2M_k)+σ_k^2t_k = 1. Note that the problem (SDR6.2) is an SDP problem and can be solved by CVX. Let M_k^* and t_k^* denote the obtained optimal solution to (SDR6.2). Then, the optimal solution Θ_k^* to (SDR6) is calculated by Θ_k^* = M_k^*/t_k^*. Now, it remains to find a high-quality solution to (P6). As the obtained solution Θ_k^* may not be of rank-one, it may not be the optimal solution to problem (P6). To overcome this case, the Gaussian randomization method is used to construct a high-quality rank-one solution to problems (P6). Similar to (<ref>), we first generate a number of random vectors r_k∼𝒞𝒩(0,I_N), and construct a number of rank-one candidates as ϕ_k = e^jarg{(Θ_k^*)^1/2r_k}. Then, we find the desirable solution of ϕ_k that maximizes the objective function in problem (P6) among all randomly generated ϕ_k's. As a result, problem (P6) is finally solved. In summary, the alternating-optimization-based algorithm for solving (P4-I) is implemented by solving problems (P5) and (P6) alternately. Similarly as for (P1-I), the alternating optimization leads to monotonically non-increasing max-CRB values for problem (P4-I) over iterations, and thus is ensured to converge. Furthermore, the alternating-optimization-based algorithm can also be performed in a distributed way in practice. § NUMERICAL RESULTS This section provides numerical results to validate the effectiveness of our proposed designs. In the simulation, we set the carrier frequency as 6 GHz and the bandwidth as 1 MHz. 
We adopt the Rician fading channel model with the K-factor being 5 dB for wireless channels between the BS and the IRSs and those between the IRSs and the CUs. We also set the noise power spectrum density as -174 dBm/Hz, the SINR constraints to be Γ_k = Γ, ∀ k ∈𝒦, and the radar dwell time as T = 100. In particular, we consider that there are one BS and two IRSs as shown in Fig. <ref>, which are located at [0,0] meters (m), [-30,30]m, and [30,30]m, respectively. Their associated CUs are located at [-35,25]m and [30,25]m, and targets are located at [-35,22]m and [35,27]m, respectively. For comparison, we consider the following benchmark schemes: * Transmit beamforming (BF) only: The IRSs implement random reflection coefficients. Accordingly, we only optimize the transmit beamforming at the BS, e.g., by solving problems (P2) for the point target case and by solving problem (P5) for the extended target case, when the Type-I CU receivers are implemented. * ZF-BF: First, we design the transmit information beamformers based on the ZF principle. In particular, by defining H̃ = [h̃_1,⋯,h̃_K] and W̃^ZF = H̃(H̃^HH̃)^-1, we set the ZF beamformer for each CU k as w_k^ZF = √(p_k)w̃_k^ZF/w̃_k^ZF, where w̃_k^ZF denotes the k-th column of W̃^ZF and p_k is the transmit power for CU k. Next, we optimize the transmit power {p_k}, the sensing beamformers R_0, and the reflective beamformers {Φ_k} by alternating optimization, similarly as in Sections IV and V for point and extended targets, respectively. * Sensing only: This schemes corresponds to the proposed design without SINR constraints for communications, by equivalently setting Γ_k = 0, ∀ k∈𝒦, in problems (P1-I), (P1-II), (P4-I), and (P4-II) for the four considered cases, respectively. This thus serves as performance upper bound for our proposed design, in terms of the lower bound of the estimation CRB. Fig. <ref> shows the achieved max-CRB versus the maximum transmit power P_sum at the BS for the point target case, where we set Γ = 10 dB. It is observed that our proposed design performs close to the performance upper bound by the sensing only scheme, and outperforms other benchmark schemes over the whole regime of P_sum. It is also observed that the proposed design with Type-II receivers achieves lower CRB than that with Type-I receivers. This shows the benefits of sensing-signal-interference cancellation. In particular, it is shown in simulations that for problem (P1-II) with Type-II receivers, the obtained R_0 is always non-zero for facilitating sensing without interfering with CU receivers; while for problem (P1-I) with Type-I receivers, the obtained R_0 is generally zero for avoiding the interference towards CU receivers. Fig. <ref> shows the achieved max-CRB versus the SINR constraint Γ at each CU for the point target case. It is observed that in the low SINR regime, the performance achieved by our proposed design with both types of CU receivers is close to that by the sensing only scheme. As the SINR constraint increases, the performance gap between the proposed design with Type-II CU receivers and that with Type-I CU receivers is observed to increase. This is due to the fact that when the SINR constraint becomes large, the BS needs to allocate more transmit power to the information signals, which may suffer from more severe interference from dedicated sensing signals when Type-I CU receivers are employed. 
This shows that the sensing-interference cancellation of Type-II receivers are particularly appealing when both communication and sensing requirements become stringent. Fig. <ref> shows the achieved max-CRB versus the maximum transmit power P_sum at the BS for extended target case, where we set Γ = 20 dB. It is observed that our proposed design performs most close to the performance upper bound by the sensing only scheme, and outperforms the transmit beamforming only scheme and ZF beamforming scheme especially when the maximum transmit power P_sum is low. As the maximum transmit power P_sum increases, the CRB performances by our proposed design and the transmit beamforming only scheme gradually approach the performance upper bound. This is due to the fact that the CRB performance mainly depends on the BS transmit beamforming optimization when the maximum transmit power is high. It is also observed that the proposed design with Type-II receivers achieves lower CRB than that with Type-I receivers when the transmit power is low. It shows that performing the sensing interference cancellation is less significant for the extended target case than that for the point target case. Fig. <ref> shows the achieved max-CRB versus the SINR constraint Γ at each CU for extended target case. It is observed that in the low SINR regime, the performance achieved by our proposed design with both two types of CU receivers is close to that by the sensing only scheme. This phenomenon is similar to the point target case. It is also observed that the ZF beamforming scheme performs worst among all schemes. This is different form the observation for the point target case, which indicates the importance of transmit beamforming optimization for minimizing the max-CRB in the extended target case. § CONCLUSION This paper studied a multi-IRS-enabled ISAC system, in which multiple semi-passive IRSs are deployed to provide ISAC services at separate areas. We derived the closed-form CRBs for each IRS to estimate target parameters based on the ISAC signals, by considering two cases with point and extended targets, respectively. These derived CRBs provides insights on the multi-semi-passive-IRS-enabled sensing. Furthermore, we proposed two efficient joint transmit and reflective beamforming designs to minimize the maximum CRB at all IRSs while ensuring the communication requirements, by considering two different types of CU receivers. Numerical results showed the effectiveness of our proposed designs, as compared to various benchmarks without such joint optimization. It was also shown that the sensing signal interference cancellation at Type-II receivers is essential in further enhancing the ISAC performance, especially for the point target case. § PROOF OF LEMMA <REF> According to (<ref>), we have the following the partial derivations: ∂η̃/∂θ_k = β_kvec(Ė_k(θ_k)Φ_kG_kX), ∂η̃/∂β_k = [1,j] ⊗vec(E_kΦ_kG_kX), where Ė_k(θ_k) = ∂E_k/∂θ_k denotes the partial derivative of E_k with respect to θ_k. 
Consequently, the elements of FIM in (<ref>) are given by J_θ_k,θ_k = 2/σ_s^2Re{(β_kvec(Ė_k(θ_k)Φ_kG_kX))^Hβ_kvec(Ė_k(θ_k)Φ_kG_kX)} =2|β_k|^2/σ_s^2Re{tr((Ė_k(θ_k)Φ_kG_kX)^HĖ_k(θ_k)Φ_kG_kX)} =2T|β_k|^2/σ_s^2tr(Ė_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HĖ_k^H(θ_k)), J_θ_k,β_k = 2/σ_s^2Re{(β_kvec(Ė_k(θ_k)Φ_kG_kX))^H [1,j] ⊗vec(E_kΦ_kG_kX) } =2T/σ_s^2Re{β_k^*tr(E_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HĖ_k^H(θ_k))[1,j]}, J_β_k,β_k = 2/σ_s^2Re{([1,j] ⊗vec(E_kΦ_kG_kX))^H[1,j] ⊗vec(E_kΦ_kG_kX)} =2T/σ_s^2Re{ [1,j]^H[1,j]( vec(E_kΦ_kG_kX))^Hvec(E_kΦ_kG_kX)} =2T/σ_s^2tr(E_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HE_k^H(θ_k))I_2. Therefore, Lemma <ref> is finally proved. § PROOF OF PROPOSITION <REF> First, by substituting (<ref>)-(<ref>) into (<ref>), we obtain (<ref>). Then, we substitute (<ref>) into (<ref>)-(<ref>) to obtain the following results: Ė_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HĖ_k^H(θ_k) = 4π ^2d_s^2cos^2(θ_k )/λ^2(D_N_sã_k(θ_k)a_k^T(θ_k)+ã_k(θ_k)a_k^T(θ_k)D_N)Φ_kG_kR_xG_k^HΦ_k^H ·(a_k^*(θ_k)ã_k^H(θ_k)D_N_s+D_Na_k^*(θ_k)ã_k^H(θ_k)) =4π ^2d_s^2cos^2(θ_k )/λ^2(( N_s - 1)N_s( 2N_s - 1)/6 tr( ϕ_k^TU_kϕ_k^*) + ( N_s - 1)N_s/2 tr( ϕ_k^TU_kD_Nϕ_k^*). . + ( N_s - 1)N_s/2tr( ϕ_k^TD_NU_kϕ_k^*)+ N_str( ϕ_k^TD_NU_kD_Nϕ_k^*) ), E_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HĖ_k^H(θ_k)= j 2π/λd_s cos (θ_k)ã_k(θ_k)a_k^T(θ_k) Φ_kG_kR_xG_k^HΦ_k^H(a_k^*(θ_k)ã_k^H(θ_k)D_N_s+D_Na_k^*(θ_k)ã_k^H(θ_k)) =j 2π/λd_s cos (θ_k)(( N_s - 1)N_s/2 tr( ϕ_k^TU_kϕ_k^*) + N_s tr( ϕ_k^TU_kD_Nϕ_k^*)), E_k(θ_k)Φ_kG_kR_xG_k^HΦ_k^HE_k^H(θ_k) = ã_k(θ_k)a_k^T(θ_k)Φ_kG_kR_xG_k^HΦ_k^Ha_k^*(θ_k)ã_k^H(θ_k) = N_str( ϕ_k^TU_kϕ_k^*), where U_k = A_kG_kR_xG_k^HA_k^H and A_k = diag (a_k(θ_k)). Substituting (<ref>)-(<ref>) into (<ref>) yields Proposition <ref>. § PROOF OF PROPOSITION <REF> Based on the closed-form CRB in (<ref>) and by considering N_s to be sufficiently large, we have lim_N_s→∞ N_s^3CRB(θ_k) = N_s^3σ_s^2 λ^2/8 T|β_k|^2π^2 d_s^2 cos^2(θ_k)(( N_s - 1)N_s( N_s + 1)/12 tr( ϕ_k^TU_kϕ_k^*) ) = 3σ_s^2 λ^2/2 T|β_k|^2π^2 d_s^2 cos^2(θ_k) tr( ϕ_k^TU_kϕ_k^*) . Therefore, lim_N_s→∞ N_s^3CRB(θ_k) equal to a determined value independent of N_s. As a result, Proposition <ref> is proved. § PROOF OF LEMMA <REF> We first decomposed the estimation parameter ĥ_k into real and imaginary parts that are denoted by Re{ĥ_k} and Im{ĥ_k}, respectively. As a result, the FIM is equivalently expressed as F_k = [[ F_Re{ĥ_k},Re{ĥ_k} F_Re{ĥ_k},Im{ĥ_k}; F_Im{ĥ_k},Re{ĥ_k} F_Im{ĥ_k},Im{ĥ_k} ]]. Then, we derive each part in the FIM as F_Re{ĥ_k},Re{ĥ_k} = 2/σ_s^2Re{∂η̃_k^H/∂Re{ĥ_k}∂η̃_̃k̃/∂Re{ĥ_k}} =2/σ_s^2Re{((Φ_kG_kX)^T⊗I_N_s)^H((Φ_kG_kX)^T⊗I_N_s) } =2/σ_s^2Re{(Φ_kG_kX)^* (Φ_kG_kX)^T⊗I_N_s}, F_Im{ĥ_k},Im{ĥ_k} = 2/σ_s^2Re{∂η̃_k^H/∂Im{ĥ_k}∂η̃_̃k̃/∂Im{ĥ_k}} =2/σ_s^2Re{((Φ_kG_kX)^T⊗I_N_s)^H((Φ_kG_kX)^T⊗I_N_s) } =2/σ_s^2Re{(Φ_kG_kX)^* (Φ_kG_kX)^T⊗I_N_s}, F_Re{ĥ_k},Im{ĥ_k} =-F_Im{ĥ_k},Re{ĥ_k} = 2/σ_s^2Im{∂η̃_k^H/∂Re{ĥ_k}∂η̃_̃k̃/∂Im{ĥ_k}} =2/σ_s^2Re{((Φ_kG_kX)^T⊗I_N_s)^H((Φ_kG_kX)^T⊗I_N_s) } =2/σ_s^2Re{(Φ_kG_kX)^* (Φ_kG_kX)^T⊗I_N_s}. As a result, the Lemma <ref> is obtained. § PROOF OF PROPOSITION <REF> Based on the closed-form CRB in (<ref>) and by considering N_s to be sufficiently large, we have lim_N_s→∞CRB_k(Ê_k)/N_s = σ_s^2/Ttr((G_k^*R_x^*G_k^T)^-1). Thus, lim_N_s→∞CRB_k(Ê_k)/N_s is equal to a constant value independent of N_s. As a result, Proposition <ref> is proved. § PROOF OF PROPOSITION <REF> First, we have 1/Γ_kh̃_k^HW_k^⋆h̃_k=1/Γ_kh̃_k^HW_kh̃_k ≥∑_k' k^Kh̃_k^HW_k'h̃_k +h̃_k^HR_0h̃_k+ σ _k^2 = ∑_k' k^Kh̃_k^HW_k'^⋆h̃_k +h̃_k^HR_0^⋆h̃_k+ σ _k^2, ∀ k. 
As a result, the SINR constraints in (<ref>) are still satisfied with (<ref>)-(<ref>). Furthermore, for any z∈ℂ^M× 1, it holds that z^H(W_k-W_k^⋆) z = z^HW_kz - |z^HW_kh̃_k|^2(h̃_k^HW_kh̃_k)^-1. According to the Cauchy-Schwarz inequality, we have |z^HW_kh̃_k|^2≤(z^HW_kz)(h̃_k^HW_kh̃_k), and thus z^H(W_k-W_k^⋆) z≥ 0 also holds. Hence, W_k-W_k^⋆≽ 0, i.e., W_k-W_k^⋆ is positive semi-definite, and it follows that R_0^⋆ in (<ref>) is also positive semi-definite. Consequently, {W_k^⋆} and R_0^⋆ are optimal for problem (P2.3) and thus (P2). This completes the proof.
http://arxiv.org/abs/2307.00839v1
20230703082555
Observability of the Schr{ö}dinger equation with subquadratic confining potential in the Euclidean space
[ "Antoine Prouff" ]
math.AP
[ "math.AP" ]
We consider the Schrödinger equation in 𝐑^d, d ≥ 1, with a confining potential growing at most quadratically. Our main theorem characterizes open sets from which observability holds, provided they are sufficiently regular in a certain sense. The observability condition involves the Hamiltonian flow associated with the Schrödinger operator under consideration. It is obtained using semiclassical analysis techniques. It provides an accurate estimate of the optimal observation time. We illustrate this result with several examples. In the case of two-dimensional harmonic potentials, focusing on conical or rotation-invariant observation sets, we express our observability condition in terms of arithmetical properties of the characteristic frequencies of the oscillator. 35J10, 35Q40, 35S05, 47D08, 81Q20, 81S30, 93B07 This project originated and was initially developed while visiting Universidad Politécnica de Madrid during the academic year 2020-2021 and the spring 2022. I thank this institution for its hospitality. I also thank Fabricio Macià for introducing me to this problem and for sharing many ideas and material, including unpublished works with Shu Nakamura <cit.>. I am also grateful to Matthieu Léautaud for countless discussions and for his helpful comments on a preliminary version of this paper. This work has been partially supported by FMJH's “Junior scientific visibility program" (France) and grant MTM2017-85934-C3-3-P (MINECO, Spain). August 1, 2023 ================================================================ § INTRODUCTION AND MAIN RESULTS We are concerned with the observability of the Schrödinger equation with a confining potential in the Euclidean space: ∂_t ψ = P ψ , P = V(x) - 12Δ , t ∈, x ∈^d, where V is a real-valued potential, bounded from below. Specific assumptions shall be stated below. The general problem reads as follows: we wonder which measurable sets ω⊂^d and times T > 0 satisfy ∃ C > 0 : ∀ u ∈ L^2(^d) , *u_L^2(^d)^2 ≤ C ∫_0^T *^- t P u_L^2(ω)^2 t . *Obs(ω, T) When this property (ω, T) is true, we say that the Schrödinger equation (<ref>) is observable from ω in time T, or that ω observes the Schrödinger equation. The question consists in finding conditions on the pair (ω, T) ensuring that one can recover a fraction of the mass of the initial data u, by observing the solution ψ(t) = ^- t P u of (<ref>) in ω during a time T. We will often call ω the observation set and T the observation time. As for the constant C in the inequality, we will refer to it as the observation cost throughout the text. When an observation set ω is fixed, the infimum of times T > 0 such that (ω, T) holds is called the optimal observation time, and is denoted by T_⋆ = T_⋆(ω). It is clear that this so-called observability inequality holds for ω = ^d in any time T > 0.
This is because the propagator solving the Schrödinger equation ^- t P is an isometry on L^2(^d).[Another consequence of this is that the condition (ω, T) is “open" with respect to T: if (ω, T) is true with cost C > 0, then (ω, T - ) is true as soon as < 1/C. See Lemma <ref> in the appendix for a precise statement.] But from the viewpoint of applications, one would like to find the smallest possible observation sets and the corresponding optimal times for which the observability inequality holds. The observability question for Schrödinger-type equations has been extensively investigated over the past decades, mainly in compact domains of ^d or compact Riemannian manifolds. See the surveys of Laurent <cit.> or Macià <cit.> for an overview. In a compact Riemannian manifold, Lebeau showed that the so-called Geometric Control Condition (introduced for the wave equation in <cit.>) is sufficient to get observability of the Schrödinger equation in any time T > 0 <cit.>. This means that all billiard trajectories have to enter the observation set in finite time. See for instance Phung <cit.> for later developments in Euclidean domains. However, works by Haraux <cit.> and Jaffard <cit.> on the torus show that this condition is not always necessary. Since then, considerable efforts have been made to find the good geometric condition characterizing the observability of the Schrödinger equation, depending on the geometrical context. This question is closely related to that of understanding the concentration or delocalization of Laplace eigenfunctions or quasimodes, which rule the propagation of states through the Schrödinger evolution; see <cit.>. The latter properties are linked to the behavior of the underlying classical dynamics, which is supposed to drive the quantum dynamics at high frequency. In the literature, mainly two different dynamical situations have been investigated. On the one hand, complete integrability, meaning existence of many conserved quantities, usually features symmetries that result in high multiplicity in the spectrum at the quantum level. This allows for possible concentration of eigenfunctions. On the other hand, chaotic systems, epitomized by the geodesic flow of negatively curved Riemannian manifolds, go along with strong instability properties. For instance, quantum ergodicity states that most[In fact, the situation is more complicated due to the possible existence of a sparse subsequence of eigenmodes concentrating around unstable closed classical trajectories—a phenomenon known as scarring.] Laplace eigenfunctions are delocalized on manifolds with ergodic geodesic flow. Here we collect a non-exhaustive list of references illustrating this diversity of situations. On the torus, observability was investigated by several authors. In addition to <cit.>, let us mention Burq and Zworski <cit.>, Bourgain, Burq and Zworski <cit.>, Macià <cit.>, as well as Anantharaman and Macià <cit.>. General completely integrable systems were studied by Anantharaman, Fermanian Kammerer and Macià <cit.>. As for the disk, the question of characterizing open sets from which observability holds was solved by Anantharaman, Léautaud and Macià <cit.>. Macià and Rivière thoroughly described what happens on the sphere and on Zoll manifolds <cit.>. In the negatively curved setting, we refer to Anantharaman <cit.>, Anantharaman and Rivière <cit.>, Eswarathasan and Rivière <cit.>, Dyatlov and Jin <cit.>, Jin <cit.> and recently Dyatlov, Jin and Nonnenmacher <cit.>. 
See also Privat, Trélat and Zuazua <cit.> in connection with quantum ergodicity. Recently, there has been a growing interest in the question of observability for the Schrödinger equation in the Euclidean space, for which new difficulties arise due to the presence of infinity in space. Täufer <cit.> deals with the observability of the free Schrödinger equation in ^d, showing that it is observable from any non-empty periodic open set in any positive time. It relies on the Floquet-Bloch transform and the theory of lacunary Fourier series. It was later generalized by Le Balc'h and Martin <cit.> to the case of periodic measurable observation sets with a periodic L^∞ potential, in dimension 2. In <cit.>, Huang, Wang and Wang characterize measurable sets for which the Schrödinger equation (<ref>) is observable, in dimension d = 1 when V(x) = *x^2m, m ∈. They prove that, in the case where m = 1 (resp. m ≥ 2), one has observability from ω⊂ in some time (resp. in any time) if and only if lim inf_x → + ∞*ω∩ [-x, x]*[-x, x] > 0 , where * is the one-dimensional Lebesgue measure. Such a set is called “weakly thick". Simultaneously, Martin and Pravda-Starov <cit.> provided a generalization of this condition in dimension d which turns out to be necessary if d ≥ 1 and sufficient if d = 1 for observability to hold, in the case of the fractional harmonic Schrödinger equation, namely equation (<ref>) with P = ( - + *x^2 )^s, where s ≥ 1. In the particular cases of potentials or operators discussed above, the techniques that are used, mainly relying on abstract harmonic analysis tools, provide very strong results. However, it seems that more general potentials remain out of reach, since the arguments involved require the knowledge of precise spectral estimates on eigenvalues and eigenfunctions, explicit asymptotics and symmetry properties. Moreover, regarding the case of the harmonic oscillator, the existing results focus on the properties of the sets for which observability holds, but given such a set, they do not give a hint of what would be the minimal time for which the observability inequality holds. In fact they provide an upper bound for this optimal time independent of the open set, corresponding to half a period of the classical harmonic oscillator. But it is reasonable to think that this upper bound can be improved taking into account the geometry of the observation set. §.§ Motivations, assumptions and notation The present work aims to address the issues discussed above, namely: * find a robust method to prove that the Schrödinger equation is observable from a given set with less restrictions on the dimension or the potential (e.g. variations of the harmonic potential like x · Ax where A is a real symmetric positive-definite d × d matrix, or potentials of the form *x^2m with m > 0 a real number); * provide a more accurate upper bound for the optimal observation time depending on the shape of the observation set. Throughout this work, we make the following assumptions on the potential: Assumption The potential V is C^∞ smooth and satisfies ∃ m > 0, ∃ C, r > 0 : ∀*x≥ r 1C*x^2m≤ V(x) ≤ C *x^2m , ∀α∈^d, ∃ C_α > 0 : ∀ x ∈^d, *∂^α V(x)≤ C_α*x^2m - *α . Unless stated otherwise, we assume that the potential is subquadratic, namely 0 < m ≤ 1. Throughout the article, we shall refer to the left-hand side inequality in (<ref>) by saying that the potential is elliptic. In addition, the notion of principal symbol that we will use is made clear below. 
[Principal symbol] Let V_0 and V be two potentials satisfying Assumption <ref> above with a power m > 0. We say that V_0 and V have the same principal symbol if ∀α∈^d, ∃ C_α > 0 : ∀ x ∈^d, *∂^α (V - V_0)(x)≤ C_α*x^2m - 1 - *α . This defines an equivalence relation. The equivalence class of such a potential V is called the principal symbol of V. Classical spectral theory arguments ensure that the operator V(x) - 12 with domain C_c^∞(^d) is essentially self-adjoint (from now on, its closure will be denoted by P) and that the evolution problem (<ref>) on L^2(^d) is well-posed. In fact, most of our results will depend only on the principal symbol of V, namely they will not depend on perturbations of the potential of order x^2m - 1. Our strategy emphasizes the role of the underlying classical dynamics ruling the evolution of high-energy solutions to the Schrödinger equation (<ref>), by means of the so-called quantum-classical correspondence principle. This motivates the introduction of the symbol of the operator P, defined by p(x, ξ) := V(x) + *ξ^2/2 , (x, ξ) ∈^2d . This is a smooth function on the phase space ^2d≃_x^d ×_ξ^d, tending to + ∞ as (x, ξ) →∞, since the potential is elliptic. Throughout this text, typical phase space points will be denoted by ρ = (x, ξ), and we will sometimes use the notation π : ^2d→^d for the projection (x, ξ) ↦ x. We will often refer to p as the classical Hamiltonian, and to its quantization P as the quantum Hamiltonian. The Hamiltonian flow (ϕ^t)_t ∈ on ^2d, which preserves p, is defined as the flow generated by the Hamilton equation: tϕ^t(ρ) = J ∇ p(ϕ^t(ρ)) , ϕ^0(ρ) = ρ . It is well-defined for all times under our assumptions. Here J = [ 0 I_d; - I_d 0 ] is the symplectic matrix. Introducing (x^t, ξ^t) = ϕ^t(ρ) the position and momentum components of the flow, this can be rewritten as { t x^t = ξ^t tξ^t = - ∇ V(x^t) . , (x^0, ξ^0) = ρ . In the sequel, we will refer to the x-component of a trajectory of the Hamiltonian flow as a projected trajectory. §.§ Main result Let us insist on the fact that the result below applies for confining potentials having a subquadratic growth, i.e. 0 < m ≤ 1. We will explain later why we restrict ourselves to this case. Throughout the article, the open ball of radius r centered at x ∈^d is denoted by B_r(x). Our main result reads as follows. Let V_0 and V be potentials on ^d satisfying Assumption <ref> with some m ∈ (0, 1], having the same principal symbol. Set P = V(x) - 12Δ and denote by ^- t P the propagator solving the Schrödinger equation ∂_t ψ = P ψ . Also denote by (ϕ_0^t)_t ∈ the Hamiltonian flow associated with the symbol p_0(x, ξ) = V_0(x) + 1/2ξ^2. For any Borel set ω⊂^d, define for any R > 0 the thickened set ω_R = ⋃_x ∈ω B_R(x) , and introduce for any T > 0 the classical quantity[The integral makes sense when ω is Borel. Indeed, the map (t, ρ) ↦_ω×^d(ϕ_0^t(ρ)) is then Lebesgue-measurable, so that the same is true for t ↦_ω×^d(ϕ_0^t(ρ)) when ρ is fixed. Tonelli's theorem <cit.> then shows that the map ρ↦∫_0^T _ω×^d(ϕ_0^t(ρ)) t is Lebesgue-measurable.] K_p_0^∞(ω, T) = lim inf_ρ→∞∫_0^T _ω×^d(ϕ_0^t(ρ)) t = lim inf_ρ→∞*t ∈ (0, T)(π∘ϕ_0^t)(ρ) ∈ω . Fix a Borel set ω⊂^d. Then the following two assertions hold: * (Sufficient condition) Assume there exists T_0 > 0 such that K_p_0^∞ := K_p_0^∞(ω, T_0) > 0 . 
Then there exists a constant L = L(d, T_0, p_0, p) > 0 such that for R = L/K_p_0^∞, for any compact set K ⊂^d and any T > T_0, (ω_R ∖ K, T) is true, namely: ∃ C > 0 : ∀ u ∈ L^2(^d) , *u_L^2(^d)^2 ≤ C ∫_0^T *^- t P u_L^2(ω_R ∖ K)^2 t . * (Necessary condition) Assume there exists a time T > 0 such that (ω, T) is true with cost C_ > 0, that is to say ∀ u ∈ L^2(^d) , *u_L^2(^d)^2 ≤ C_∫_0^T *^- t P u_L^2(ω)^2 t . Then there exists a constant c = c(d, T, p_0, p) such that for any R ≥ 1 and any compact set K ⊂^d, it holds: K_p_0^∞(ω_R ∖ K, T) ≥1C_ - c log R^1/2R . The rest of the introduction is organized as follows: in Subsection <ref>, we comment on Theorem <ref> and describe the main ideas of the proof. Then we discuss various examples of application. We begin with examples in dimension 1 in Subsection <ref>. In Subsection <ref>, we investigate the particular case of harmonic oscillators in two dimensions. We specifically focus on conical and rotation-invariant observation sets in Subsections <ref> and <ref> respectively. These are cases where one can prove accurate estimates on the optimal observation time—see for instance Proposition <ref>. Arithmetical properties of the characteristic frequencies of the harmonic oscillator under consideration also play a key role, as evidenced by Proposition <ref>. Then in Subsection <ref>, we present other consequences of Theorem <ref> concerning observability of eigenfunctions of the Schrödinger operator P and energy decay of the damped wave equation. Lastly, we discuss the links between our work and the Kato smoothing effect in Subsection <ref>, and provide with further explanations regarding the natural semiclassical scaling of the problem and the criticality of quadratic potentials in Subsection <ref>. §.§ Idea of proof and comments The core of our work consists in establishing a suitable version of Egorov's theorem to relate the evolution through the Schrödinger flow of high-energy initial data on the quantum side, to the action of the associated Hamiltonian flow on the classical side. This is done using semiclassical analysis. To apply this theory, we approximate the indicator function of ω by a smooth and sufficiently flat cut-off function. This is how the larger set ω_R arises. Although Theorem <ref> is not a complete characterization of sets for which observability holds, it provides an almost necessary and sufficient condition of observability, up to thickening the observation set, and it gives sharp results in many concrete situations. See the examples given in Subsections <ref>, <ref> and <ref> below. We review remarkable features of this statement. * The observability condition (<ref>) we find is reminiscent of the Geometric Control Condition that rules the observability or control of the wave equation in a number of geometrical contexts, especially compact Riemannian manifolds <cit.>. It reflects the importance of the quantum-classical correspondence in this problem: high-energy solutions to the Schrödinger equation, lifted to phase space, propagate along the trajectories of the Hamiltonian flow. Our constant K_p_0^∞(ω, T) is to some extent different from the one quantifying the Geometric Control Condition for the wave equation (see the constant C(t) of Lebeau <cit.> or the constant K(T) of Laurent and Léautaud <cit.>). Indeed, the latter constant consists in averaging some function (typically the indicator function of ω) along speed-one geodesics in a time interval [0, T]. 
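The quantity K_{p_0}^∞(ω, T) appearing in the theorem can be estimated numerically for explicit potentials. The sketch below does this for the one-dimensional harmonic potential V_0(x) = x^2/2 and a sample set ω given by a union of dyadic intervals (both choices are purely illustrative): it integrates the Hamilton equations with a standard ODE solver, records the time that projected trajectories issued from large initial data spend in ω over [0, 2π], and takes the minimum over the sampled data as a crude finite-energy proxy for the liminf.

    import numpy as np
    from scipy.integrate import solve_ivp

    dV = lambda x: x                                    # V_0(x) = x^2/2 (illustrative; its flow is of course explicit)

    def in_omega(x):
        # omega = union of the dyadic intervals [2^n, 1.5*2^n] (and their mirror images), a weakly thick set
        ax, n = abs(x), np.arange(12)
        return bool(np.any((ax >= 2.0**n) & (ax <= 1.5 * 2.0**n)))

    def time_in_omega(x0, xi0, T=2 * np.pi, nt=4000):
        # measure of {t in (0, T) : x^t in omega} along the Hamiltonian flow, by direct integration
        rhs = lambda t, y: [y[1], -dV(y[0])]            # Hamilton equations: x' = xi, xi' = -V_0'(x)
        sol = solve_ivp(rhs, (0.0, T), [x0, xi0], t_eval=np.linspace(0.0, T, nt), rtol=1e-8)
        return np.mean([in_omega(x) for x in sol.y[0]]) * T

    vals = [time_in_omega(R * np.cos(a), R * np.sin(a))
            for R in (10.0, 100.0, 1000.0)
            for a in np.linspace(0.0, 2 * np.pi, 16, endpoint=False)]
    print(min(vals))                                    # a positive value is (numerical) evidence that K^infty(omega, 2*pi) > 0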
In contrast, our constant K_p_0^∞(ω, T) does the same, except that the length of trajectories tends to infinity as their initial datum ρ goes to infinity in phase space. This is consistent with the infinite speed of propagation of singularities for the Schrödinger equation. * Let us insist on the fact that the Schrödinger equation (<ref>) does not contain any semiclassical parameter. Instead, we artificially introduce a semiclassical parameter R → + ∞, which we use to enlarge the observation set. This is natural in view of the fact that remainders in the quantum-classical correspondence are expressed in terms of derivatives of the symbol under consideration: scaling these symbols by 1/R thus produces remainders of the same order. * On the technical side, the non-compactness of the Euclidean space yields new difficulties. In our problem, the use of semiclassical defect measures seems to be limited to very particular geometries of the observation set: roughly speaking, only homogeneous symbols can be paired with such measures, which would theoretically restrict the scope of the result to conical observation sets. Instead, we use (and prove) a version of Egorov's theorem to study the operator ^ t P_ω^- t P. The idea of using Egorov's theorem was introduced in control theory by Dehman, Lebeau <cit.> and Laurent, Léautaud <cit.>. Of course, we must pay a particular attention to the remainder terms, in connection with the non-compactness of the ambient space. The great advantage of this is that we can describe the evolution of a fairly large class of symbols on the phase space, which in turn allows to study observability for a variety of observation sets. * Our result is very robust since it is valid for a fairly large class of potentials, with the noteworthy property that the statement only involves the principal symbol of the potential. Indeed, up to enlarging the parameter R, the fact that the dynamical condition (<ref>) is fulfilled or not in ω_R is independent of the representative of the equivalence class of V_0 (introduced in Definition <ref>) chosen to compute K_p^∞(ω_R, T). This is a consequence of Corollary <ref>. This was already evidenced in the context of propagation of singularities for solutions to the perturbed harmonic Schrödinger equation; see Mao and Nakamura <cit.>. The stability under subprincipal perturbation of the potential fails to be true if one considers superquadratic potentials (m > 1), as we can see by the examination of the trajectories of the flow. Take V_0 satisfying Assumptions <ref> for some m > 1, and perturb this potential with some W behaving like *x^2m - 1. Consider the Hamiltonian flow associated with the potential V = V_0 + W. Then the second derivative of a trajectory of the classical flow is given by ^2 x^2 x^t = - ∇ V_0(x^t) - ∇ W(x^t) . We remark that the perturbation is of order ∇ W(x^t) ≈*x^t^2 (m - 1), which may blow up when x^t is large. When m ≤ 1, the perturbation of the trajectory remains bounded, and can therefore be absorbed by thickening the observation set. See Subsection <ref> and the proof of Theorem <ref> at the end of Section <ref> for further details. * At the level of the Hamiltonian flow, the difference between m ≤ 1 and m > 1 can also be understood by looking at the equation solved by the differential of the flow: differentiating the Hamilton equation (<ref>) yields tϕ^t(ρ) = J p(ϕ^t(ρ)) ϕ^t(ρ) . 
We deduce that the differential of the flow behaves as *ϕ^t≲^t p , which means that the norm of the Hessian of the Hamiltonian plays the role of a local Lyapunov exponent for the classical dynamics. Yet p is uniformly bounded on phase space if and only if m ≤ 1. Incidentally, it is likely that for m < 1, one can exploit the decay of p at infinity in the space variable in order to get small remainders in the proof of Egorov's theorem (see Proposition <ref>) instead of taking R large. This might allow to thicken ω by any positive rather than by a large parameter R. Since we are mostly interested in quadratic potentials in this work, we chose not to refine our result in this direction. * It is possible that the necessary condition can be slightly improved by propagating coherent states rather than using Egorov's theorem on quantum observables. This is discussed in more details in Subsection <ref>. §.§ Examples in dimension 1 The one-dimensional case gives an insight of how the potential can influence the geometry of sets for which observability holds. §.§.§ Harmonic potential The one-dimensional harmonic oscillator corresponds to V(x) = 1/2 x^2. The Hamiltonian flow reads: ϕ^t(x, ξ) = (x cos t + ξsin t, - x sin t + ξcos t) , (x, ξ) ∈^2, t ∈ . Our dynamical condition (<ref>) can then be written as lim inf_(x, ξ) →∞∫_0^T _ω(x cos t + ξsin t) t > 0 . In view of the periodicity of the flow, it is relevant to consider T = 2 π. Under this additional assumption, condition (<ref>) reduces to K^∞ := lim inf_A →∞∫_0^2 π_ω(A sin t) t > 0 , where A has to be thought as (the square-root of) the energy p(x, ξ) = 1/2 (x^2 + ξ^2). We claim that this is equivalent to the weak thickness (<ref>) condition of Huang, Wang and Wang <cit.>. Suppose that K^∞ > 0. First, notice that ∫_0^2 π_ω(A sin t) t = 2 ∫_- π/2^π/2_ω(A sin t) t . Second, fix c ∈ (0, K^∞/2). Since the integrand is bounded by 1, we can slightly reduce the time interval to [- π/2 + c/3, π/2 - c/3] so that y = A sin t defines a proper change of variables: c/3 ≤lim inf_A →∞∫_- π/2^π/2_ω(A sin t) t - 2/3 c ≤lim inf_A →∞∫_- π/2 + c/3^π/2 - c/3_ω(A sin t) t ≤lim inf_A →∞∫_- π/2 + c/3^π/2 - c/3_ω(A sin t) A cos tA 2/π×c/3 t = 3 π2 clim inf_A →∞1A∫_- A sin(π/2 - c/3)^A sin(π/2 - c/3)_ω(y) y . We used the concavity inequality cos t ≥ 1 - 2/πt on [- π/2, π/2] to get the third inequality. This gives lim inf_A →∞*ω∩ [- A, A]*[- A, A] > 0 , namely ω is weakly thick. Conversely, we can follow the same lines, using that the Jacobian cos t is less than 1, to show that any weakly thick set satisfies (<ref>). Although our main theorem allows to conclude that observability is true only on a slightly larger set, it is more precise than the previous result from <cit.> with respect to the optimal observation time: we can estimate this optimal time depending on the geometry of the observation set. In addition, our result is stable under subprincipal perturbation of the potential. In particular, weak thickness of ω implies observability from ω_R (for some R given by Theorem <ref>) for any potential whose principal symbol is 12 x^2 (or any positive multiple of x^2). Anticipating on the next paragraph, observe that a weakly thick set can contain arbitrarily large gaps, hence is not necessarily thick (see <cit.>). §.§.§ Potentials having critical points An interesting phenomenon appears when the potential possesses a sequence of critical points going to infinity. To construct such a potential, we proceed as follows. 
We set V(x) = ( 2 + sin(a log*x) ) x^2 , x ∈ , where a is a positive parameter to be chosen properly. See Figure <ref> for an illustration. One can check that Assumption <ref> is fulfilled: V is subquadratic, elliptic (bounded from below by x^2) and each derivative yields a gain of x^-1. Notice however that this is not a subprincipal perturbation of the harmonic potential. It holds for any x ∈: V'(x) = xx^2( 2 x^2 (2 + sin(a logx)) + a x^2 cos(a logx) ) = xx^2( (4 + 2 sin(a logx)) + x^2 (4 + 2 sin(a logx)) + a cos(a logx)) . Factorizing the last two terms, we can write for a certain angle φ_a: V'(x) = xx^2( (4 + 2 sin(a logx)) + x^2 (4 + √(4 + a^2)sin(φ_a + a logx))) = xx^2( (4 + 2 sin(a logx)) + 4 x^2 (1 + √(14 + (a4)^2)sin(φ_a + a logx))) . When 1/4 + (a/4)^2 > 1, which is true if and only if a > 2 √(3), we can find two sequences (x_n^+)_n ∈ and (x_n^-)_n ∈ tending to infinity such that {√(14 + (a4)^2)sin(φ_a + a logx_n^+) ≥ - 1 + η √(14 + (a4)^2)sin(φ_a + a logx_n^-) ≤ - 1 - η. , for some sufficiently small η > 0. The intermediate value theorem then implies that there exist infinitely many points x_n^0, with x_n^0 tending to infinity, where V'(x_n^0) = 0. Now we observe from (<ref>) that the trajectories of the Hamiltonian flow with initial data ρ_n = (x_n^0, 0) are stationary, that is ϕ^t(ρ_n) = ρ_n , ∀ t ∈ . We deduce the following: assume that the Schrödinger equation (<ref>) is observable from ω⊂ in some time for this potential. Then, the necessary condition of Theorem <ref> tells us that there exists R > 0 such that for any n large enough, x_n^0 ∈ω_R. We can rephrase this as ∃ n_0 ∈ : ∀ n ≥ n_0 , ω∩ B_R(x_n^0) ≠∅ . This is consistent with the phase portrait depicted in Figure <ref>: some energy might be trapped around small closed trajectories encircling stable critical points. Hence, in order to have observability, ω cannot be too far away from those points. In fact, one observes that (<ref>) concerns all critical points, whatever the sign of V”(x_n^0). In conclusion, the situation of a potential of the form (<ref>) is in contrast with the previous case of the harmonic potential 1/2 x^2 where the weak thickness condition allowed for large gaps around any sequence of points x_n →∞ satisfying x_n+1≫x_n. Notice that ω can still have large gaps away from critical points though. §.§.§ Sublinear potentials Our last remark in the one-dimensional case concerns potentials having a sublinear growth, namely m ∈ (0, 1/2]. In this situation, the trajectories of the Hamiltonian flow whose initial datum has purely potential energy (namely ξ = 0) do not escape far away from their initial location. This is because: tξ^t = - V'(x^t) = O(*x^t^2m - 1) , which remains bounded uniformly as soon as m ≤ 1/2. For the same reason, m=1/2 also appears to be critical in Proposition <ref>. If observability from ω⊂ holds in some time for such a potential, the necessary condition of Theorem <ref> leads to the conclusion that ω has to intersect any interval of length 2 R, for some R > 0. Likewise, in higher dimension, any set from which the Schrödinger equation is observable must satisfy ∃ R > 0 : ∀ x ∈^d , ω∩ B_R(x) ≠∅ . Therefore, sets observing the Schrödinger equation (<ref>) for a sublinear potential cannot have arbitrarily large holes.[Notice that (<ref>) is much weaker that the usual thickness condition of control theory: ∃ R, c > 0 : ∀ x ∈^d , *ω∩ B_R(x)≥ c *B_R(x) . ] Although the case of bounded potentials (i.e. 
m = 0) is not in the scope of this article, let us mention that this observation is consistent with recent results on the free Schrödinger equation. See Huang, Wang, Wang <cit.> and Täufer <cit.>, as well as Le Balc'h and Martin <cit.> for the case of bounded periodic potentials in two dimensions. §.§ Observability of two-dimensional harmonic oscillators As an application of Theorem <ref>, we study the observability of harmonic oscillators in conical or rotation-invariant sets. Our results mainly concern the two-dimensional case. The examples presented in this subsection suggest that there is no general reformulation of our dynamical condition (<ref>) in purely geometrical terms. That is to say, it seems difficult to find an equivalent condition that would not involve the Hamiltonian flow (e.g. thickness, weak thickness...). In contrast, by restricting ourselves to a certain class of potentials (harmonic oscillators at the principal level here) and a certain class of observation sets (conical or rotation-invariant), one can indeed transform the dynamical condition into a geometrical one. Along the way, we will see that observability properties are very sensitive to slight modifications of the coefficients of the harmonic oscillator under consideration. This subsection culminates in Proposition <ref>, where we show that observability of rotation-invariant sets is governed by Diophantine properties of the oscillator's coefficients. Let us first recall basics about general harmonic oscillators. Let A be a real symmetric positive-definite d × d matrix and set H_A = 12 (x · A x - Δ). Up to an orthonormal change of coordinates, one can assume that A is diagonal, so that the potential can be written V_A(x) = 12 x · A x = 12∑_j = 1^d ν_j^2 x_j^2 . The characteristic frequencies of H_A are those numbers ν_1, ν_2, …, ν_d, that we will always assume to be positive. The corresponding Hamiltonian flow is explicit: denoting by x_1(t), x_2(t), …, x_d(t) and ξ_1(t), ξ_2(t), …, ξ_d(t) the components of ϕ^t, we can solve the Hamilton equations (<ref>): { x_j(t) = cos(ν_j t) x_j(0) + 1ν_jsin(ν_j t) ξ_j(0) ξ_j(t) = - ν_j sin(ν_j t) x_j(0) + cos(ν_j t) ξ_j(0) . , ∀ j ∈{1, 2, …, d} . From this expression, we see that each coordinate is periodic, so that the trajectories whose initial conditions are of the form x_j(0) = x_0 δ_j = j_0, ξ_j(0) = ξ_0 δ_j = j_0 with x_0, ξ_0 ∈, are periodic, with period 2 π/ν_j_0 (unless both x_0 and ξ_0 vanish, in which case the trajectory is a point). Assuming d = 2, we can classify harmonic oscillators into three categories. See Figure <ref> for an illustration. * We call isotropic a harmonic oscillator[For general dimension, we still call isotropic any harmonic oscillator having all its characteristic frequencies equal.] with ν_1 = ν_2 = ν. In this situation, energy surfaces, that is, level sets of the classical Hamiltonian, are concentric spheres in phase space (up to a symplectic change of coordinates). Trajectories of the Hamiltonian flow are great circles on these spheres, so that their projection on the x-variable “physical space" are ellipses. The flow is periodic, with period 2π/ν. * The harmonic oscillator is said to be anisotropic rational when ν_2/ν_1 is a rational number different from 1. Trajectories, although all closed, exhibit a more complicated behavior. Writing ν_2/ν_1 = p/q with p and q coprime positive integers, the period of the flow is p 2π/ν_2 = q 2π/ν_1. Projected trajectories are known in the physics literature as Lissajous curves <cit.>. 
* We say a harmonic oscillator is anisotropic irrational when ν_2/ν_1∈∖. In that case, the Hamiltonian flow is aperiodic. Trajectories are dense in invariant tori (see (<ref>) below), yielding projected trajectories that fill rectangles parallel to the eigenspaces of the matrix A. In the multi-dimensional setting, the description of the flow can be achieved by examining the -vector space generated by the characteristic frequencies. The dimension of the latter gives the number of periodic decoupled “sub-oscillators" from which we can reconstruct the dynamics of the whole oscillator. This is thoroughly explained in the article of Arnaiz and Macià <cit.>, who compute the set of quantum limits of general harmonic oscillators, and study their behavior when bounded perturbations of the potential are added <cit.>. In order to understand well the classical dynamics of the harmonic oscillator, it is convenient to take advantage of the complete integrability of this dynamical system. Here, the classical Hamiltonian is the sum of the one-dimensional Hamiltonians 1/2(ν_j^2 x_j^2 + ξ_j^2), which are conserved by the flow, as one can see from the explicit expression (<ref>). This property implies that energy levels are foliated in (possibly degenerate) invariant d-dimensional tori of the form: _E = (x, ξ) ∈^2d∀ j, 12(ν_j^2 x_j^2 + ξ_j^2) = E_j , E = (E_1, E_2, …, E_d) ∈_+^d . The projection of these tori on the x-variable space yields rectangles, as in Figure <ref>. The goal of the following examples is to highlight the fact that observability is sensitive to the global properties of the Hamiltonian flow. We will show that isotropic and anisotropic harmonic oscillators behave differently with respect to observability, i.e. the sets that observe the Schrödinger equation are not the same. One can already anticipate that the isotropic oscillator ν_1 = ν_2 has less such sets since its classical trajectories are all ellipses, that is, they are very simple and only explore a small part of the classically allowed region. It contrasts with the anisotropic situation ν_1 ≠ν_2, where, in the rational case for instance, trajectories visit more exhaustively the classically allowed region. It makes it harder to find a set that is not reached by any of these trajectories. It is even more the case when ν_1 and ν_2 are rationally independent, since the trajectories are then dense in the invariant torus to which they belong, as we already discussed. §.§.§ Observability from conical sets We first investigate the case where the observation set ω is conical, namely it is invariant by dilations with positive scaling factor: ∀ x ∈^d, ∀λ > 0 , ( x ∈ω ⟺ λ x ∈ω) . We will see that exploiting the symmetries of harmonic oscillators is sometimes sufficient to obtain satisfactory results, without the need of our main theorem (see Subsection <ref>). However, Theorem <ref> will prove useful to estimate precisely the optimal observation time in some situations. As we already noticed, it follows from the expression of the flow (<ref>) that, whatever the characteristic frequencies, the classical dynamics exhibits periodic trajectories contained in the coordinate axes. Those starting from the origin are of the form x_j(t) = 1ν_jsin(ν_j t) ξ_j(0) , ξ_j(t) = cos(t) ξ_j(0) for one j ∈{1, 2, …, d}, and with all the other components being equal to zero. 
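Since the flow (<ref>) is explicit, these regimes are easy to explore numerically. The following Python/NumPy sketch is an illustration only (the frequencies, initial data and time horizon are arbitrary choices, not taken from the text); it samples a projected trajectory for an isotropic, a rational and an irrational oscillator, and checks that a trajectory launched from the origin with momentum along a coordinate axis indeed stays on that axis, as observed above.

```python
import numpy as np

def flow(t, x0, xi0, nu):
    """Explicit Hamiltonian flow of the harmonic oscillator with frequencies nu:
    x_j(t)  =  cos(nu_j t) x_j(0) + sin(nu_j t) xi_j(0) / nu_j
    xi_j(t) = -nu_j sin(nu_j t) x_j(0) + cos(nu_j t) xi_j(0)
    """
    t = np.atleast_1d(t)[:, None]
    c, s = np.cos(nu * t), np.sin(nu * t)
    return c * x0 + s * xi0 / nu, -nu * s * x0 + c * xi0

t = np.linspace(0.0, 60.0, 30001)
x0, xi0 = np.array([1.0, 0.3]), np.array([0.0, 0.7])

for label, nu in [("isotropic  (nu2/nu1 = 1)      ", np.array([1.0, 1.0])),
                  ("rational   (nu2/nu1 = 3/2)    ", np.array([2.0, 3.0])),
                  ("irrational (nu2/nu1 = sqrt 2) ", np.array([1.0, np.sqrt(2.0)]))]:
    x, _ = flow(t, x0, xi0, nu)
    r = np.linalg.norm(x, axis=1)
    print(f"{label}  min |x^t| = {r.min():.3f},  max |x^t| = {r.max():.3f}")

# Trajectory issued from the origin with momentum along the first coordinate axis:
# the second component of x^t should vanish identically.
x_axis, _ = flow(t, np.zeros(2), np.array([5.0, 0.0]), np.array([2.0, 3.0]))
print("sup_t |x_2^t| for the axis trajectory:", np.abs(x_axis[:, 1]).max())
```

The printed minimal and maximal distances to the origin already hint at the "radial aspect ratio" of projected trajectories that plays a role for spherical observation sets below.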
Thus it appears that a general necessary condition for a conical ω to observe the Schrödinger equation (<ref>), working for any harmonic oscillator, is that its closure contains at least half of each line spanned by an eigenvector of A. Note that this works in any dimension. Consider P = V(x) - 1/2 Δ, where V is a potential fulfilling Assumption <ref> and having principal symbol V_A(x) = 1/2 x · A x, A being a real symmetric positive-definite d × d matrix. Let ω ⊂ ℝ^d be a conical set and assume that it observes the Schrödinger equation in some time T > 0. Then, for every eigenvector v of A, either v or - v belongs to the closure of ω. Now we place ourselves in dimension d = 2. We know from the above Proposition <ref> that the closure of a conical set which observes the Schrödinger equation has to contain at least half of any line spanned by an eigenvector of the matrix A. Here, we exhibit a conical observation set, illustrated in Figure <ref>, that behaves differently according to whether the harmonic oscillator is isotropic or not. Let d = 2 and consider a potential V fulfilling Assumption <ref>, and with principal symbol V_A(x) = 1/2 x · A x , x ∈ ℝ^2 , where A is a real symmetric positive-definite matrix. Denote by ν_+ ≥ ν_- > 0 its characteristic frequencies. Choose an orthonormal basis of eigenvectors (e_+, e_-) of A, so that A e_± = ν_±^2 e_±. For any α ∈ (0, π/2), define the two cones with aperture α: C_α^± = { x ∈ ℝ^2 : |x · e_∓| < tan(α/2) x · e_± } . Then the set ω(α) = C_α^+ ∪ C_α^- observes the Schrödinger equation if and only if the oscillator is anisotropic, that is ν_- < ν_+. In that case, there exist constants C, c > 0, possibly depending on ν_+, ν_-, such that for any α ∈ (0, π/2), T_0 - C α^2 ≤ T_⋆(ω(α)) ≤ T_0 - c α^2 , where T_0 = π/ν_+ (2 + ⌊ν_+/ν_-⌋) . This result leads to several noteworthy observations. First, it does not distinguish between rational and irrational anisotropic oscillators: one cannot guess, from the knowledge that observability from ω(α) holds, whether the oscillator is rational or irrational. Second, the time T_0, obtained formally as the limiting optimal observation time when α → 0, does not vary continuously with respect to ν_+ and ν_- because of the floor function. This is related to special symmetry properties of the Hamiltonian flow that appear when ν_+ is a multiple of ν_-: the projected trajectories can then go from one quadrant to another by crossing the origin, and thus avoid crossing the observation cones. See Figure <ref>. Third, when α is fixed, our result does not rule out that the optimal observation time is continuous with respect to the characteristic frequencies. This is because the constants C and c can depend on ν_+, ν_-. It is interesting to see what happens when ν_+, ν_- → ν, that is to say when the operator P becomes closer to an isotropic harmonic oscillator. As mentioned earlier, we know from Proposition <ref> that observability does not hold for a set of the form C_α^+ ∪ C_α^- for an isotropic oscillator (α < π/2 is important here). Thus it can seem surprising that the optimal observation time for such a set is bounded uniformly in ν_+, ν_- as the frequencies tend to ν. Actually, the degeneracy in this limit should be seen in the observation cost, rather than in the optimal observation time. Indeed, computations suggest that the value of the dynamical constant K_p^∞(ω(α), T) tends to zero; see (<ref>) in the proof. This would imply a blow-up of the observation cost as ν_+, ν_- → ν, by virtue of the necessary condition part of Theorem <ref>. 
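To make the role of the dynamical constant concrete, one can estimate, for a given horizon T, the smallest time that a projected trajectory spends in the double cone ω(α). Since the flow of an exact harmonic oscillator is linear and ω(α) is dilation-invariant, it suffices to sample initial data on the unit sphere of the phase space. The Python/NumPy sketch below is purely illustrative: the aperture, frequencies, horizon and sampling sizes are arbitrary choices, the cones are taken along the coordinate axes (i.e. in the eigenbasis of A), and the result is only a Monte-Carlo proxy for the infimum behind K_p^∞(ω(α), T), not a rigorous computation.

```python
import numpy as np

def min_time_in_cones(nu, alpha, T, n_dirs=500, n_t=2000, seed=0):
    """Smallest sampled value of  int_0^T 1_{omega(alpha)}(x^t(rho)) dt  over unit
    initial data rho = (x_1, x_2, xi_1, xi_2).  Here omega(alpha) is the union of
    the cone of aperture alpha around the positive x_1-axis and the one around the
    positive x_2-axis."""
    rng = np.random.default_rng(seed)
    rho = rng.normal(size=(n_dirs, 4))
    rho /= np.linalg.norm(rho, axis=1, keepdims=True)
    x0, xi0 = rho[:, :2], rho[:, 2:]
    t = np.linspace(0.0, T, n_t)
    dt = t[1] - t[0]
    c, s = np.cos(np.outer(t, nu)), np.sin(np.outer(t, nu))          # shapes (n_t, 2)
    x = c[:, None, :] * x0[None, :, :] + (s / nu)[:, None, :] * xi0[None, :, :]
    x1, x2 = x[..., 0], x[..., 1]
    inside = (np.abs(x2) < np.tan(alpha / 2) * x1) | (np.abs(x1) < np.tan(alpha / 2) * x2)
    return dt * inside.sum(axis=0).min()

alpha, T = 0.4, 12.0
print("isotropic  :", min_time_in_cones(np.array([1.0, 1.0]), alpha, T))  # some ellipses never meet the cones
print("anisotropic:", min_time_in_cones(np.array([1.0, 2.0]), alpha, T))  # expected to stay bounded away from 0
```

In the isotropic case one expects the sampled minimum to vanish (trajectories close to the anti-diagonal segment avoid both cones), whereas in the anisotropic case, with T larger than T_0, every sampled trajectory should spend a positive amount of time in ω(α).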
§.§.§ Refinement for the unperturbed isotropic harmonic oscillator Theorem <ref> allows to conclude about whether an open set ω observes the Schrödinger equation provided this open set is in a sense “regular": the thickening process yields open sets that are sufficiently close to a cut-off function. But the quest of characterizing general measurable sets seems to be more delicate. To understand the limitation of our main theorem, we investigate the very particular case of the isotropic harmonic oscillator and conical observation sets in dimension d ≥ 1. In this setting, we can take advantage of symmetries and exact propagation of coherent states. For the purpose of the statement, let us introduce some notation. A conical set in ^d is determined by the subset Σ = ω∩^d - 1 in the unit sphere. When Σ⊂^d - 1 we denote by ω(Σ) the conical set defined by ω(Σ) = x ∈^d ∖{0}xx∈Σ . Moreover, for any subset Σ⊂^d - 1, we introduce the notation - Σ = θ∈^d - 1- θ∈Σ . The lower density of a measurable set Σ⊂^d - 1, denoted by Θ_Σ^-, is the function ^d - 1→ [0, 1] defined by Θ_Σ^-(θ) = lim inf_r → 0σ(Σ∩ B_r(θ))σ(B_r(θ)) , ∀θ∈^d - 1 , where B_r(θ) is the ball of radius r centered at θ in ^d, and σ is the uniform probability measure on the unit sphere ^d - 1. We insist on the fact that the statement below in proved for exact isotropic harmonic oscillators, and not for perturbations of it. Let P = 1/2 ( ν^2 x^2 - Δ ) be an isotropic oscillator with characteristic frequency ν > 0. Let Σ⊂^d - 1 be measurable, and ω(Σ) be the corresponding conical set. Set Σ= Σ∪ - Σ the symmetrized version of Σ. * If the Schrödinger equation is observable from ω(Σ) in some time, then it holds inf_^d - 1Θ_Σ^- > 0 . * If Σ= Σ∪ - Σ has full measure, namely σ(^d - 1∖Σ) = 0, or equivalently Θ_Σ^-(θ) = 1 for all θ∈^d - 1, then ω(Σ) observes the Schrödinger equation, with optimal observation time T_⋆ < 2 π/ν. The gap between the sufficient and the necessary conditions above can be thought as the difference between Σ being the complement of a Cantor set (thus having full measure) and Σ being the complement of a fat Cantor set; see <cit.>. Regarding the estimate on the optimal observation time, the strict inequality is due to Lemma <ref>. In fact, considering the propagation of coherent state, as investigated for instance by Combescure and Robert (see <cit.>), one could conjecture that observability is characterized by the property ∃ R > 0 : lim inf_ρ→∞∫_0^T *ω∩ B_R(ϕ^t(ρ)) t > 0 . This type of integral can be rewritten as ∫_0^T *ω∩ B_R(ϕ^t(ρ)) t = ∫_0^T *_ω_L^1(B_R(ϕ^t(ρ))) t . The necessary condition of Theorem <ref>, namely K_p^∞(ω_R, T) > 0 for some R large enough, involves the quantity ∫_0^T _ω_R(ϕ^t(ρ)) t = ∫_0^T *_ω_L^∞(B_R(ϕ^t(ρ))) t . Since the L^1 norm in a ball of radius R is controlled by the L^∞ norm (times a constant of order R^d), we know that the dynamical condition (<ref>) is stronger than the condition K_p^∞(ω_R, T) > 0, involving the L^∞ norm as written in (<ref>). In particular, if ω is dense but Lebesgue negligible, the condition K_p^∞(ω_R, T) > 0 will be satisfied, since then ω_R = ^d for any R > 0, whereas (<ref>) will not. In this situation, Theorem <ref> would then yield a trivial result, namely that observability holds from the whole space, although it clearly does not hold from ω itself. Thus (<ref>) seems to be a good guess to free ourselves from thickening the observation set. 
In addition, this condition would be consistent with the generalized geometric control condition introduced by Burq and Gérard in the context of stabilization of the wave equation <cit.>. §.§.§ Observability from spherical sets In this section, we investigate the observability properties of a set consisting in a union of spherical layers. In the sequel, we refer to rotation-invariant (measurable) sets as spherical sets. Such a set ω is completely determined by the data of a measurable set I ⊂_+, such that ω = ω(I) = x ∈^dx∈ I . Due to the thickening process that occurs when applying Theorem <ref>, we shall generally make further assumptions, that ensure that a set and its thickened version are somewhat equivalent. The existence of many periodic circular orbits of the Hamiltonian flow for radial potentials implies that observability from ω(I) does not hold for such Hamiltonians if I contains large gaps. In fact, the proposition below works for slightly more general potentials. Let d ≥ 2. Suppose the Hamiltonian P is of the form P = V(x) - 12Δ with a potential V satisfying Assumption <ref> together with: * V(-x) = V(x), ∀ x ∈^d; * there exists an orthogonal change of coordinates M such that V(M S_θ M^-1 x) = V(x) , ∀ x ∈^d, ∀θ∈ , where S_θ is the rotation of angle θ acting on the first two coordinates; in particular, for every y ∈^d - 2, the map V_y : (x_1, x_2) ↦ V(M (x_1, x_2, y)) is radial; * the map Ṽ_0 such that V_y=0(x_1, x_2) = Ṽ_0((x_1, x_2)) is non-decreasing. Then for any spherical set ω(I), if observability holds from ω(I) in some time T > 0, it holds ∃ r > 0 : ∀ s ∈_+ : I ∩ [s, s + r] ≠∅ . The hypotheses are fulfilled for harmonic oscillators in d dimensions having at least two identical characteristic frequencies. In dimension 2, Proposition <ref> allows to conclude that spherical sets observing the Schrödinger equation for isotropic harmonic oscillators have to occupy space somewhat uniformly—they cannot contain arbitrarily large gaps. Therefore, we shall rule out isotropic harmonic oscillators from our study of observability from spherical sets. Instead, we investigate how the anisotropy of a harmonic oscillator can help to get observability from an observation set made of concentric rings. The proposition below investigates, in dimension 2, the observability from spherical sets of the form ω(I) where I = ⋃ I_n is a countable union of open intervals in _+. We require additionally that I_n→ + ∞ (we drop this assumption if there are only finitely many I_n's). To any such set, we associate a number between 0 and 1 that quantifies the distribution of the annuli ω(I_n) at infinity: κ_⋆(I) = minκ∈ [0, 1]lim inf_r → + ∞1r*I ∩ [κ r, r] = 0∈ [0, 1] . While investigating the observability property from such a set ω(I), we will see that it is relevant to compare the geometrical quantity κ_⋆(I) with a dynamical quantity that encodes relevant features of the underlying Hamiltonian flow. This dynamical constant is expressed in terms of a function Λ : _+^⋆→ [0, 1] defined by Λ(μ) = { tan(π/2p + q) if μ = p/q , (p, q) = 1 , p - q ≡ 0 2 sin(π/2p + q) if μ = p/q , (p, q) = 1 , p - q ≡ 1 2 0 if μ∈∖. . As far as the optimal observation time T_⋆ is concerned, we shall use Diophantine properties of μ to approximate irrational oscillators by rational ones, for which we can control T_⋆ by the period of the flow. This motivates the introduction of the irrationality exponent of an irrational number μ, defined by τ(μ) = sups ∈*μ - pq < 1q^s for infinitely many coprime couples (p, q) . 
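For readers who wish to experiment with the quantity just introduced, the irrationality exponent is conveniently probed through the continued-fraction convergents of μ (these convergents reappear in a remark further below). The following Python sketch is an illustration only, with arbitrarily chosen values of μ; it lists the first convergents p_j/q_j together with the normalized errors q_j^2 |μ - p_j/q_j|, which remain bounded, reflecting how well the convergents approximate μ.

```python
from fractions import Fraction
import math

def convergents(mu, n):
    """First n continued-fraction convergents p_j/q_j of mu > 0 (standard recurrence)."""
    p_prev, q_prev = 1, 0
    p, q = math.floor(mu), 1
    out = [Fraction(p, q)]
    x = mu
    for _ in range(n - 1):
        frac = x - math.floor(x)
        if frac < 1e-15:          # mu is (numerically) rational: stop
            break
        x = 1.0 / frac
        a = math.floor(x)
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append(Fraction(p, q))
    return out

for name, mu in [("sqrt(2)", math.sqrt(2.0)), ("golden ratio", (1.0 + math.sqrt(5.0)) / 2.0)]:
    print(name)
    for pq in convergents(mu, 8):
        q = pq.denominator
        print(f"   p_j/q_j = {str(pq):>8s}   q_j^2 |mu - p_j/q_j| = {q * q * abs(mu - pq.numerator / q):.4f}")
```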
Dirichlet's approximation theorem tells us that τ(μ) ∈ [2, + ∞], for any irrational number. Also keep in mind that τ(μ) = 2 is achieved for Lebesgue-almost every irrational. See the lecture notes <cit.> or the books <cit.> for further details. Let d = 2 and consider a potential V fulfilling Assumption <ref>, and with principal symbol V_A(x) = 12 x · A x , x ∈^2 , where A is a real symmetric positive-definite matrix. Denote by ν_1 and ν_2 the characteristic frequencies of A, and assume that ν_1 ≠ν_2. We fix I = ⋃ I_n a union of open intervals in _+, assuming that I_n→ + ∞. Denote by ω(I) the corresponding open spherical set in ^2, as defined in (<ref>). Then observability from ω(I) holds in some time T if and only if κ_⋆(I) > Λ(ν_2/ν_1) . Moreover, the optimal observation time T_⋆ can be estimated as follows: * if ν_2/ν_1∈, writing ν_2/ν_1 = p/q with p, q positive coprime integers, it holds T_⋆ < π/ν_2 p = π/ν_1 q ; * if ν_2/ν_1∈∖ is Diophantine, that is τ = τ(ν_2ν_1) < ∞, then it holds ∀ > 0, ∃ c_, C_ > 0 : c_(1κ_⋆(I))^1/τ - 1 + ≤ T_⋆≤ C_(1κ_⋆(I))^τ - 1 + . The constants c_ and C_ may depend on ν_1, ν_2, but not on I. Let us review the meaning of the different quantities involved in this statement. The number κ_⋆(I) introduced in (<ref>) encodes some notion of density[Beware of the fact that κ_⋆(I) does not coincide in general with the lower density of I defined by Θ_∞(I) = lim inf_r → + ∞I ∩ [0, r]/[0, r] . In fact, the two quantities satisfy Θ_∞(I) ≤κ_⋆(I) and κ_⋆(I) = 0 ⟺ Θ_∞(I) = 0 . The second assertion follows from the definition of κ_⋆(I). To check the first assertion, we write I ∩ [0, r][0, r] = κ_⋆I ∩ [0, κ_⋆ r][0, κ_⋆ r] + 1rI ∩ [κ_⋆ r, r]≤κ_⋆ + 1rI ∩ [κ_⋆ r, r] . Then taking lower limits as r → + ∞ and using the definition of κ_⋆ yield the desired inequality. Notice that the equality Θ_∞(I) = κ_⋆(I) is not true in general, as one can see from the example I = ⋃_n ∈ (n, n+1/2), for which we have Θ_∞(I) = 1/2 but κ_⋆(I) = 1.] of the set I. For instance, κ_⋆(I) = 1 means that I has positive density in any window [κ r, r] with κ < 1 as r → + ∞. In contrast, κ_⋆(I) close to zero means that the annuli are extremely sparse at infinity. This quantity is well-defined, for the map κ⟼lim inf_r → + ∞1r∫_κ r^r _I(s) s is non-increasing and lower semi-continuous (even Lipschitz-continuous in fact). That it is non-increasing comes from the monotonicity of the integral and of the lower limit, whereas the continuity follows from the fact that *1r∫_κ_2 r^r _I(s) s - 1r∫_κ_1 r^r _I(s) s≤*κ_2 - κ_1 . Given μ∈_+^⋆, the constant Λ(μ) defined in (<ref>) is related to the flow of a harmonic oscillator with characteristic frequencies ν_1, ν_2 such that μ = ν_2/ν_1. More precisely, it corresponds to the largest ratio between the minimum and the maximum of the distance to the origin of a projected trajectory: if we write (x^t, ξ^t)(ρ_0) = ϕ^t(ρ_0), then we shall prove that Λ(ν_2ν_1) = sup_ρ_0 ∈^4 ∖{0}inf_t ∈x^t(ρ_0)sup_t ∈x^t(ρ_0) . Thus we can refer to this quantity as the optimal “radial aspect ratio" of projected trajectories. Observability from ω(I) will depend on whether the critical trajectories that attain this maximal ratio spend sufficient time in ω(I), hence the criterion κ_⋆(I) > Λ(ν_2ν_2). See Figure <ref> for an illustration of the case where such trajectories are not seen by the observation set. Notice that maximizing the ratio in (<ref>) with respect to any non-zero initial data is the same as taking the upper limit as ρ_0 →∞ since the Hamiltonian flow is homogeneous. 
Thus Λ(μ) can be understood as a quantity that captures the behavior of the flow at infinity. In addition, we remark that Λ(μ) = Λ(1/μ), which means that this value depends only on the spectrum of the matrix A, and not on the choice of a specific basis of ^2. The maximum of Λ is reached exactly at 1, where it is equal to tan(π/4) = 1. This is consistent with the fact that in two dimensions, isotropic harmonic oscillators are the only ones possessing circular orbits: the norm of the trajectory x^t(ρ_0) is constant for well-chosen initial data. The distinction between rational and irrational values of μ is natural in light of the complete integrability of the flow of harmonic oscillators. When the ratio of characteristic frequencies μ = ν_2/ν_1 is rational, writing μ = p/q with p, q a couple of coprime integers, one can check that the Hamiltonian flow of the corresponding harmonic oscillator is periodic of period 2 π/ν_2 p = 2 π/ν_1 q. In that case, there are many orbits of the flow whose projection on the x-variable space stays away from the origin, thus producing a positive Λ(μ), as one can see on Figure <ref>. When μ is irrational, it is known that (non-degenerate) trajectories are dense in the invariant torus to which they belong. In particular, any projected trajectory can get arbitrarily close to the origin, up to waiting a long enough time, so that Λ(μ) = 0; see Figure <ref>. Lastly, let us point out that that the estimate (<ref>) of the optimal observation time for Diophantine irrational does not give any precise information for a given open set I, but is relevant for fixed ν_1, ν_2 in the asymptotics κ_⋆(I) ≪ 1. It can look surprising that Proposition <ref> gives an exact characterization of spherical sets for which observability holds, whereas Theorem <ref> provides a necessary and sufficient condition up to thickening the observation set. This improvement is made possible by the extra assumption that I_n→ + ∞. It ensures that thickening the observation set by a radius R is negligible compared to the width of the annulus ω(I_n), for n large. [Non-Diophantine irrationals] When μ = ν_2/ν_1∈∖, one can estimate T_⋆, even if τ = τ(μ) = + ∞, using the so-called convergents of μ. These are the rational numbers arising in the continued fraction expansion algorithm. Denote them in irreducible form by μ_j = p_j/q_j. It is known that this sequence is the most efficient way to approximate an irrational number by rationals (a result known as Lagrange theorem; see <cit.> or <cit.>). These convergents satisfy ∀ j ∈ , *μ - p_jq_j < 1q_j^2 . (This is why τ(μ) ≥ 2 holds for any irrational.) We will show in the proof of Proposition <ref> the following: when μ∈∖, there exist constants c_1, c_2 > 0 and δ_1, δ_2 > 0, possibly depending on ν_1, ν_2, such that c_1 q_j_1≤ T_⋆≤ c_2 q_j_2 (see (<ref>) in the proof), where j_1 is the largest index for which q_j ≤δ_1/κ_⋆, and j_2 is the smallest index for which q_j ≥δ_2/κ_⋆. The bounds (<ref>) are particularly interesting when τ has the smallest possible value, that is τ = 2, which is the case of Lebesgue-almost every irrational. However, we see that the lower and upper bounds (<ref>) get far apart as τ goes to infinity. This reflects the fact that the gaps between the denominators of consecutive convergents get wider at each step of the continued fraction expansion. Irrationals having an infinite irrationality exponent are known as Liouville numbers. 
There are many of them: the set of Liouville numbers is an instance of a Lebesgue negligible set having the cardinality of the continuum. When ν_2/ν_1 is a Liouville number, the bounds (<ref>) on the optimal observation time are very poor, owing to the lacunary behavior of the q_j's. §.§ Other applications Let us briefly discuss two other applications of Theorem <ref>. §.§.§ Uniform observability of eigenfunctions Under Assumption <ref>, the operator P is self-adjoint with compact resolvent. Thus, its spectrum consists in a collection of eigenvalues with finite multiplicity. A direct consequence of an observability inequality (ω, T) in a set ω is the fact that the eigenfunctions of P are uniformly observable from ω: ∃ c > 0 : ∀ u ∈ L^2(^d) , ( P u = λ u ⟹ *u_L^2(ω)≥ c *u_L^2(^d)) . Theorem <ref> thus furnishes a sufficient condition for this to hold. In particular, for anisotropic oscillators, Proposition <ref> implies that uniform observability of eigenfunctions from the two cones defined in (<ref>) is true. This can certainly be deduced from the work of Arnaiz and Macià <cit.> that characterizes quantum limits of harmonic oscillators. From Proposition <ref>, we obtain a similar uniform estimate in spherical sets satisfying the assumptions of the proposition together with the condition (<ref>). This time, it is not clear that one can deduce this result as easily from the knowledge of quantum limits <cit.>. See also <cit.> for recent works about spectral inequalities for the Hermite operator, and <cit.> for anisotropic Shubin operators. §.§.§ Energy decay of the damped wave equation Lastly, our study leads to stabilization results concerning the damped wave equation {∂_t^2 ψ + P ψ + _ω∂_t ψ = 0 (ψ, ∂_t ψ)_| t = 0 = U_0 ∈ P^1/2× L^2 . , with damping in ω⊂^d, provided P ≥ 0 (assume for instance that the potential V is non-negative). This equation comes with a natural energy E(U_0, t) = 12(*P^1/2ψ(t)_L^2^2 + *∂_t ψ(t)_L^2^2) , which decays over time. Let us recall that Anantharaman and Léautaud proved in <cit.> that an observability inequality (ω, T) implies a decay at rate t^-1/2 for the damped wave equation (<ref>), meaning that there exists a constant C > 0 such that E(U_0, t) ≤Ct( *P u_0_L^2^2 + *P^1/2 u_1_L^2^2 ) , ∀ t > 0 , for all initial data in the domain of the damped wave operator, here U_0 = (u_0, u_1) ∈ P × P^1/2. Their result applies in our setting since P has compact resolvent under Assumption <ref>. Our examples thus provide concrete situations where such a decay occurs. §.§ Link with the Kato smoothing effect The dynamical condition (<ref>) concerns only what happens at infinity in phase space. We will see that trajectories of the Hamiltonian flow escape from any compact set (in the x variable) most of the time provided the initial data has large enough energy, namely p(ρ) is large enough. This is the reason why one can remove any compact set from the observation without losing observability: no energy can be trapped in a compact set. Quantitatively, we will check that, given T > 0, there exist a constant C > 0 and E_0 > 0 such that ∀ r ≥ 0, ∀ρ∈{p ≥ E_0} , *t ∈ [0, T]ϕ^t(ρ) ∈ B_r(0) ×^d = ∫_0^T _B_r(x^t) t ≤ C r√(p(ρ)) (see Corollary <ref>). We can rephrase this by saying that compact sets are not classically observable. This property is related to the Kato smoothing effect as follows. 
Writing (x^t, ξ^t) = ϕ^t(ρ), for any > 0, we compute using Fubini's theorem: ∫_0^T √(p(ρ))x^t^1 + t = ∫_0^T ( ∫_x^t^+ ∞ (1 + ) √(p(ρ))r^2 + r ) t = ∫_1^+ ∞ (1 + ) √(p(ρ))r( ∫_0^T _B_r(0)(x^t) t) rr^1 + . From (<ref>), we deduce that ∫_0^T √(p(ρ))x^t^1 + t ≤ C ∫_1^+ ∞ (1 + ) rr^1 + , and the latter integral is indeed convergent when > 0. This is the classical analogue to the so-called Kato smoothing effect. In our context, the latter says roughly that ∫_0^T *x^- 1+/2 P^1/4^- t P u_L^2(^d)^2 t ≤ C *u_L^2(^d)^2 . See for instance Doi <cit.> for a thorough discussion on this topic. See also the survey of Robbiano <cit.>, as well as Robbiano and Zuily <cit.> and Burq <cit.> for related results. The main phenomenon responsible for this smoothing effect is the fact that P contains a Laplace-Beltrami operator associated with a non-trapping metric (here a flat metric), that is to say all geodesics escape at infinity forward and backward in time. In our case, working with a flat Laplacian enables us to compare the trajectories of the Hamiltonian flow to straight lines, at least for some time near the origin. It would be interesting to see whether our study can be adapted to operators of the form P = V(x) - 12Δ_g with a non-trapping metric g on ^d (sufficiently flat at infinity). See <cit.> for an alternative proof that non-trapping implies failure of observability from bounded observation sets. The argument relies on semiclassical defect measures. §.§ Natural semiclassical scaling for homogeneous potentials A way to comprehend what goes wrong when the potential is superquadratic is to introduce the natural semiclassical scales associated to our problem, based on an observation of Macià and Nakamura <cit.>. Take for simplicity p(x, ξ) = x^2m + ξ^2. Following classical arguments, we recall in Appendix <ref> that the observability inequality reduces to a high-energy observability inequality: roughly speaking, we can restrict ourselves to L^2 functions u that are microlocalized around some level set {p = E} with E ≫ 1. Writing p(x, ξ) = E ⟺ *xE^1/2m^2m + *ξE^1/2^2 = 1 , we may introduce a small Planck parameter h such that E = h^- γ for some power γ > 0. Thus we have h^γ/2m x^2m + h^γ/2ξ^2 = 1 . This motivates the definition of an h-dependent Weyl quantization (see Appendix <ref>) [h]a := [1]a(h^γ/2m x, h^γ/2ξ) , for any classical observable a on the phase space. This quantization is properly “normalized" by choosing γ = 2m/m + 1: with this choice, the corresponding pseudodifferential calculus is expressed in powers of h, since then h^γ/2m h^γ/2 = h. Therefore the relevant semiclassical Schrödinger operator is P_h = [h]p = h^γ P . If one wants to express the observability inequality in terms of the associated propagator, one is then lead to study ^- t P u = ^- t h^1 - γ/h P_h u . In other words, running the Schrödinger evolution on a time interval [0, T] amounts to consider a semiclassical time scale of order h^1 - γ = h^1 - m/1 + m. It is then clear that this time blows up as h → 0 when m > 1. Yet the analysis of the quantum-classical correspondence, for long times, is much more difficult. In particular, it restricts considerably the amount of classical observables whose evolution can be described through the usual Egorov theorem. For this reason, we will not pursue in this direction and stick to the case m ≤ 1. An interesting approach to study this would be to consider first particular potentials for which the classical flow is completely integrable, e.g. anharmonic oscillators (see <cit.>). 
Indeed, observability of the Schrödinger equation has been successfully investigated taking advantage of the completely integrable nature of the underlying classical dynamics in some particular geometrical contexts (e.g. in the disk <cit.> which corresponds morally to m = ∞; see also <cit.> on the torus and <cit.>). §.§ Plan of the article Section <ref> below is devoted to the study of the underlying classical dynamics: we show that the Hamiltonian flow is roughly stable under subprincipal perturbations of the potential, and that high-energy projected trajectories can cross compact sets only on a very short period of time. Then we establish an instance of quantum-classical correspondence adapted to our context in Section <ref>, and subsequently prove Theorem <ref>. This is the core of the article. Next, in Sections <ref> and <ref>, we deal with the examples presented in Subsections <ref>, <ref> and Subsection <ref> (observability from conical and spherical sets respectively). Finally, we recall in Appendix <ref> a classical result, related to the notion of unique continuation, that shows that the sought observability inequality is equivalent to a similar high-energy inequality. Appendix <ref> collects reminders about pseudodifferential operators, as well as refined estimates on the pseudodifferential calculus and the Gårding inequality needed for Section <ref>. § STUDY OF THE CLASSICAL DYNAMICS In this section, we investigate the properties of the Hamiltonian flow (ϕ^t)_t ∈ associated with p. This study consists essentially in analyzing the ODE system that defines ϕ^t, namely the Hamilton equation (<ref>). The dynamical condition of Theorem <ref> K_p^∞(ω, T) = lim inf_ρ→∞∫_0^T _ω×^d(ϕ^t(ρ)) t > 0 motivates the study of what can be referred to as “classical observability". [Classical observability] Let q = q(t; ρ) be a Borel-measurable[Recall that Borel-measurability is slightly stronger than Lebesgue-measurability. This restriction ensures that t ↦ q(t; ϕ^t(ρ)) is Lebesgue-measurable. This is not a problem in our context since we will consider functions q that are continuous, or at worse, indicator functions of Borel sets.] function on ×^2d. Then we say that q is classically observable if K_p^∞(q) := lim inf_ρ→∞∫_ q(t; ϕ^t(ρ)) t > 0 . Of course, we will be specifically interested in the case where p contains a subquadratic potential and q = _(0, T) ×ω×^2d, but it is interesting to work out this problem in a more general setting in order to understand to what extent quadratic potentials are critical for the Schrödinger equation. §.§ Invariance of classical observability under subprincipal perturbation In this subsection, we consider a set of classical symbols on ^2d of order n_1 in x and n_2 in ξ, defined by S^n_1, n_2 = a ∈^∞(^2d)∀α∈^2d, sup_(x, ξ) ∈^2d∂^α a(x, ξ)*x^n_1 - α + *ξ^n_2 - α < ∞ . A basic example is the classical Hamiltonian p(x, ξ) = V(x) + 1/2ξ^2 that we consider: it belongs to S^2m, 2. We draw the reader's attention to the fact that this is not a standard symbol class in microlocal analysis. Our aim here is simply to study symbols whose derivatives have similar decay properties as the classical Hamiltonian p. We will not make use of any notion of pseudodifferential calculus in this subsection. It is clear that these symbol classes are nested in the following way: if n_1 ≤ n_1' and n_2 ≤ n_2', then S^n_1, n_2⊂ S^n_1', n_2' (and this inclusion is even continuous with respect to the associated Fréchet structure). 
Given n_1, n_2 ∈, a real-valued symbol a ∈ S^n_1, n_2 is said to be elliptic in S^n_1, n_2 if a(x, ξ) ≥ c (x^n_1 + ξ^n_2) provided (x, ξ) is large enough. In addition, the binary relation ∀ f, g ∈ S^n_1, n_2 , f = g S^n_1 - 1, n_2 - 1 ⟺ f - g ∈ S^n_1 - 1, n_2 - 1 is an equivalence relation, and the projection on the quotient space S^n_1, n_2 / S^n_1 - 1, n_2 - 1 is called the principal symbol. Two symbols are said to have the same principal symbol if they belong to the same equivalence class through this projection. In the example of our classical Hamiltonian p, these notions of ellipticity and principal symbol are consistent with the terminology used right after Assumption <ref> regarding the potential V. The proposition below is essentially an application of Grönwall's Lemma. Fix n_1, n_2 > 0 and let p_1, p_2 ∈ S^n_1, n_2 be elliptic symbols in S^n_1, n_2. Assume they have the same principal symbol in the sense of (<ref>). Consider the Hamiltonian flows (ϕ_1^t)_t ∈ and (ϕ_2^t)_t ∈ associated with p_1 and p_2 respectively. Then there exists a constant C > 0 such that *ϕ_2^t(ρ) - ϕ_1^t(ρ)≤^C t p_1(ρ)^max(0, 1 - 2/n_+) , ∀ρ∈^2d, ∀ t ≥ 0 , where n_+ = max(n_1, n_2). In particular, when n_1, n_2 ≤ 2, there exists C > 0 such that *ϕ_2^t(ρ) - ϕ_1^t(ρ)≤^C t , ∀ρ∈^2d, ∀ t ≥ 0 . This result ensures that the distance between ϕ_1^t(ρ) and ϕ_2^t(ρ) is bounded provided n_+ ≤ 2, on a time interval [0, T] independent of ρ. In our problem, this condition on n_+ means exactly that the potential is subquadratic. In this proof, we write n_+ = max(n_1, n_2) and n_- = min(n_1, n_2). Set p̃ = p_2 - p_1, which belongs to S^n_1 - 1, n_2 - 1 by assumption. The Hamilton equation (<ref>) gives * t( ϕ_2^t(ρ) - ϕ_1^t(ρ) ) = *J ( ∇ p_2(ϕ_2^t(ρ)) - ∇ p_1(ϕ_1^t(ρ)) ) ≤*∇ p_2(ϕ_2^t(ρ)) - ∇ p_2(ϕ_1^t(ρ)) + *∇p̃(ϕ_1^t(ρ)) . By assumption, p_1 and p_2 are elliptic at infinity in S^n_1, n_2 so that for any ρ = (x, ξ) large enough, one has: 1C(*x^n_1 + *ξ^n_2) ≤p_j(ρ)≤ C (*x^n_1 + *ξ^n_2) , j ∈{1, 2} . From the definition of S^n_1 - 1, n_2 - 1, which contains p̃, we have *∇p̃(ρ)≤ C (*x^n_1 - 2 + *ξ^n_2 - 2) . The ellipticity of p_2, that is, the left-hand side of (<ref>), then yields *∇p̃(ρ)≤ C ( *p_1(ρ)^max(0, 1 - 2/n_1) + *p_1(ρ)^max(0, 1 - 2/n_2)) ≤ C' *p_1(ρ)^max(0, 1 - 2/n_+) , provided ρ is large enough. We obtain on the whole phase space: *∇p̃(ρ)≤ C + C *p_1(ρ)^max(0, 1 - 2/n_+) , ∀ρ∈^2d . Now we deal with the other term in (<ref>): the mean-value inequality yields *∇ p_2(ϕ_2^t(ρ)) - ∇ p_2(ϕ_1^t(ρ))≤*ϕ_2^t(ρ) - ϕ_1^t(ρ)×sup_s ∈ [0, 1]* p_2((1 - s) ϕ_1^t(ρ) + s ϕ_2^t(ρ)) . Write for short ρ_s^t = (1 - s) ϕ_1^t(ρ) + s ϕ_2^t(ρ). Using that p_2 ∈ S^n_1, n_2, we obtain * p_2(ρ_s^t)≤ C ( *(1 - s) x_1^t + s x_2^t^n_1 - 2 + *(1 - s) ξ_1^t + s ξ_2^t^n_2 - 2) , where we wrote ϕ_j^t(ρ) = (x_j^t, ξ_j^t), j ∈{1, 2}. Then we use the classical inequality a + b≤ 2(a + b) to get * p_2(ρ_s^t) ≤ C ( (*x_1^t + *x_2^t)^max(0, n_1 - 2) + (*ξ_1^t + *ξ_2^t)^max(0, n_2 - 2)) ≤ C' (*x_1^t^max(0, n_1 - 2) + *ξ_1^t^max(0, n_2 - 2)) + C' (*x_2^t^max(0, n_1 - 2) + *ξ_2^t^max(0, n_2 - 2)) . Next we use the ellipticity of p_1 and p_2 and the fact that they are conserved by the corresponding flows: * p_2(ρ_s^t) ≤ C ( *p_1(ϕ_1^t(ρ))^max(0, 1 - 2/n_+) + *p_2(ϕ_2^t(ρ))^max(0, 1 - 2/n_+)) = C ( *p_1(ρ)^max(0, 1 - 2/n_+) + *p_2(ρ)^max(0, 1 - 2/n_+)) , which holds for ρ large enough. Up to adding a constant, this works for all ρ∈^d. 
Finally we use the fact that p_1 and p_2 are comparable (a consequence of ellipticity) to obtain * p_2(ρ_s^t)≤ C + C *p_1(ρ)^max(0, 1 - 2/n_+) , ∀ρ∈^2d . Plugging this into (<ref>), that results in *∇ p_2(ϕ_2^t(ρ)) - ∇ p_2(ϕ_1^t(ρ))≤ C *ϕ_2^t(ρ) - ϕ_1^t(ρ)×(1 + *p_1(ρ)^max(0, 1 - 2/n_+)) , for all ρ∈^2d. Putting this together with (<ref>), we estimate the right-hand side of (<ref>) from above as: * t( ϕ_2^t(ρ) - ϕ_1^t(ρ) )≤ C(1 + *ϕ_2^t(ρ) - ϕ_1^t(ρ)) ×(1 + *p_1(ρ)^max(0, 1 - 2/n_+)) . We deduce that * t*ϕ_2^t(ρ) - ϕ_1^t(ρ) = * t( ϕ_2^t(ρ) - ϕ_1^t(ρ) ) ·ϕ_2^t(ρ) - ϕ_1^t(ρ)ϕ_2^t(ρ) - ϕ_1^t(ρ) ≤ C *ϕ_2^t(ρ) - ϕ_1^t(ρ)(1 + *p_1(ρ)^max(0, 1 - 2/n_+)) , for any ρ∈^2d. We conclude by Grönwall's Lemma that *ϕ_2^t(ρ) - ϕ_1^t(ρ)≤^C t p_1(ρ)^max(0, 1 - 2/n_+) , ∀ρ∈^2d, ∀ t ≥ 0 , which gives the sought result. The result below roughly states that our dynamical condition is invariant under sub-principal perturbation of the potential V, under the assumption that V is sub-quadratic. Fix 0 < n_1, n_2 ≤ 2 and let p_1, p_2 ∈ S^n_1, n_2 be elliptic symbols in S^n_1, n_2, and assume they have the same principal symbol in the sense of (<ref>). Consider the Hamiltonian flows (ϕ_1^t)_t ∈ and (ϕ_2^t)_t ∈ associated with p_1 and p_2 respectively. For any T > 0, there exists a constant C = C_T > 0 such that the following holds: for any function q =q(t; ρ), Lipschitz in ρ and such that q ⊂ [-T, T] ×^2d , one has *∫_ q(t; ϕ_2^t(ρ)) t - ∫_ q(t; ϕ_1^t(ρ)) t≤ C *∇_ρ q_L^∞(×^2d) , ∀ρ∈^2d . In particular, it holds *K_p_2^∞(q) - K_p_1^∞(q)≤ C *∇_ρ q_L^∞(×^2d) . This is a direct application of the mean-value inequality and Proposition <ref>, observing that n_+ = max(n_1, n_2) ≤ 2: *∫_ q(t; ϕ_2^t(ρ)) t - ∫_ q(t; ϕ_1^t(ρ)) t ≤∫_-T^T *∇_ρ q_L^∞(×^2d)*ϕ_2^t(ρ) - ϕ_1^t(ρ) t ≤ 2 T ^C T*∇_ρ q_L^∞(×^2d) . Taking lower limits in ρ yields the second claim. §.§ Quantitative estimates of classical (non-)observability In this subsection, we show that _(0, T) × B_r(0) ×^d is not classically observable in the sense of Definition <ref> when the Hamiltonian is of the form p(x, ξ) = V(x) + 1/2ξ^2. Actually for this class of Hamiltonians, we can prove a more precise result. Let p be a symbol of the form p(x, ξ) = V(x) + 1/2ξ^2, with V fulfilling Assumption <ref> with an arbitrary m > 0. * If m ≥ 1/2, there exists a constant C > 0 and E_0 > 0 such that for all E ≥ E_0, one has ∀ r ≥ 0, ∀ρ∈{p = E} , *t ∈[0, E^1/2 (1/m - 1)]ϕ^t(ρ) ∈ B_r(0) ×^d≤ C r√(E) . * If m < 1/2, then for any > 0 small enough, there exists a constant C > 0 and E_0 > 0 such that for all E ≥ E_0, one has ∀ r ≥ 0, ∀ρ∈{p = E} , *t ∈[0, E^1/2 (1/m - 1) - ]ϕ^t(ρ) ∈ B_r(0) ×^d≤ C r√(E) . [Classical non-observability] Under the assumptions of the proposition above, one has the following: * If m < 1, then for any T ≥ 0, there exists a constant C > 0 and E_0 > 0 such that for all E ≥ E_0, one has ∀ r ≥ 0, ∀ρ∈{p = E} , *t ∈ [0, T]ϕ^t(ρ) ∈ B_r(0) ×^d = C r√(E) . * If m ≥ 1, there exists a constant C > 0 and E_0 > 0 such that for all E ≥ E_0 and for all T ≥ 0, one has ∀ r ≥ 0, ∀ρ∈{p = E} , *t ∈ [0, T]ϕ^t(ρ) ∈ B_r(0) ×^d≤ C r (1 + T)E^1/2m . The corollary implies in particular that when r and T are fixed, the function _(0, T) × B_r(0) ×^d is not classically observable in the sense of Definition <ref>. Let us explain the meaning of the typical scales appearing in Proposition <ref> and the subsequent corollary. 
When V satisfies Assumption <ref> with an arbitrary m > 0, one can single out a typical time scale in the energy layer {p(ρ) = E} of order τ≈ E^1/2 (1/m - 1), which corresponds roughly speaking to the “period" of the trajectories of the flow, or rather, to the time needed to go from one turning point of a projected trajectory to another. We observe that for the harmonic oscillator, one has m = 1, hence τ≈ 1 is indeed independent of the energy layer. Following this observation, we understand the criticality of quadratic potentials in our problem: if m > 1, the typical time scale of evolution of the flow tends to zero as the energy goes to infinity, which means that the flow mixes the phase space more and more in the high-energy limit in a time interval of the form [0, T] with T > 0 fixed. On the contrary, for m < 1, the flow gets nicer on such a time interval because τ→ + ∞ as E → + ∞. We also have a typical scale with respect to the space variable, which is r ≈ E^1/2m. This is the approximate diameter of the classically allowed region K_E = x ∈^dV(x) ≤ E. This scale also appears naturally when one looks for a trajectory t ↦ϕ^t(ρ) = (x^t(ρ), ξ^t(ρ)) such that x^t(ρ) = constant (think for instance of the case of radial potentials). Differentiating x^t(ρ)^2 with respect to time, one gets x^t(ρ) ·ξ^t(ρ) = 0 for all t, and differentiating again leads to ξ^t(ρ)^2 - x^t(ρ) ·∇ V(x^t(ρ)) = 0. Yet ∇ V(x^t(ρ))≲x^t(ρ)^2m - 1, and p is preserved by the flow. From this we can deduce that x^t(ρ)≈ p(ρ)^1/2m. So if r is larger than p(ρ)^1/2m, such trajectories will always stay in B_r(0) ×^d. Finally, if ρ_0 = (x_0, ξ_0) ∈{p(ρ) = E} is such that x_0≤ r, with r ≤ p(ρ)^1/2m, being sufficiently small, the momentum of the trajectory verifies ξ_0≳√(p(ρ)). Therefore, we can expect that the measure of times t ∈ [0, τ] such that x^t(ρ)≲ r will be of order r/√(p(ρ)). The proof of Proposition <ref> relies on the lemma below. Let a, b, c > 0. Let I ⊂ be a measurable set such that ∀ (t_1, t_2) ∈ I × I , a t_2 - t_1^2 - b t_2 - t_1 + c ≥ 0 . Then it holds *I ∩ [0, τ]≤8 a cb^2τ , ∀τ≥b2a . Observe that the left-hand side of (<ref>) is always bounded by τ. Thus, the lemma is mainly relevant in the case where a c ≪ b^2, in which case the discriminant of the polynomial a X^2 - b X + c is positive. First assume that the discriminant of the polynomial a X^2 - b X + c is positive. Denote by z_- ≤ z_+ the (real) roots of the polynomial. Then we have b2a = z_+ + z_-2≤ z_+ ≤ z_+ + z_- = ba and z_- = z_+ z_-z_+ = c/az_+≤2cb . Since a > 0, we deduce that any t such that a t^2 - b t + c ≥ 0 verifies t ≤ z_- ≤2cb or t ≥ z_+ ≥b2a . We deduce that *I ∩[0, b2a]≤*t ∈ [0, z_+]a t^2 - b t + c ≥ 0≤*[0, z_-]≤2 cb . Now if τ≥ b/2a, we split the interval [0, τ] as follows: [0, τ] = ⋃_k = 1^n [k - 1nτ, knτ] , with n = ⌈τb/2a⌉≥ 1 . On each piece, we have *I ∩[k - 1nτ, knτ] = *(I - k - 1nτ) ∩ [0, 1nτ]≤*(I - k - 1nτ) ∩[0, b2a] , where the last inequality is due to the definition of n. We can apply (<ref>) with I - k - 1nτ instead of I, since the former set satisfies the assumptions of the lemma. Then, summing over k yields *I ∩ [0, τ]≤ n 2 cb≤(τb/2a + 1) 2 cb≤8 a cb^2τ , which is the desired estimate. Finally if the discriminant is nonpositive, i.e. b^2 ≤ 4 a c, then *I ∩ [0, τ]≤τ≤4 a cb^2τ , which concludes the proof. Let us write for short E = p(ρ), and introduce the components of the flow (x^t, ξ^t) = ϕ^t(ρ). Assume E > 0. The core of the argument is to compare x^t to the straight trajectory t ↦ x^0 + t ξ^0, which is of course easier to handle. 
In order to have two distinct points of the initial trajectory to be in the ball B_r(0), its distance to the straight trajectory has to be very small or very large, which is possible in a time interval which is either small or large respectively. Introduce I = I_ρ, r = t ∈x^t ∈ B_r(0) . This set is measurable. Moreover, for any t_1 ≤ t_2, using the Hamilton equation and the Taylor Formula at order 1 with integral remainder, one has x^t_2 = x^t_1 + (t_2 - t_1) ξ^t_1 - (t_2 - t_1)^2 ∫_0^1 (1 - s) ∇ V(x^(1 - s) t_1 + s t_2) s . Assume now that t_1, t_2 ∈ I. Then the inverse triangle inequality leads to 2r ≥t_2 - t_1ξ^t_1 - (t_2 - t_1)^2 sup_t ∈ [t_1, t_2]*∇ V(x^t) . At this stage we have to estimate differently the term involving ∇ V, depending on whether m is greater or less than 1/2 (or roughly speaking on whether the potential is approximately convex of concave). Case m ≥ 1/2. Using that V satisfies Assumption <ref>, we have *ξ^t_1 = √(2 (E - V(x^t_1)))≥√(max(0, E - C r^2m)) , for some constant C ≥ 1. Moreover, one can roughly estimate the remainder using the triangle inequality: sup_t ∈ [t_1, t_2]*∇ V(x^t)≤ C sup_t ∈ [t_1, t_2]*x^t^2m - 1 . Now we take advantage of the fact that V is elliptic: up to enlarging the constant C, one has - C + 1C*x^2m≤ V(x) ≤ V(x) + 12*ξ^2 , ∀ (x, ξ) ∈^2d . Therefore if E is large enough (say larger than C), we obtain x^t^2m - 1≤ C E^1 - 1/2m, with a possibly larger constant C (we use m ≥ 1/2 here). Inequality (<ref>) then becomes 2r ≥t_2 - t_1√(max(0, E - C r^2m)) - C E^1 - 1/2m*t_2 - t_1^2 . Set a = C E^1 - 1/2m , b = √(max(0, E - C r^2m)) , c = 2r and τ = E^1/2(1/m - 1) . We have τ≥b/2a since we can assume that C ≥ 1: b2a = √(max(0, E - C r^2m))2 C E^1/2m - 1≤12C E^1/2(1/m - 1)≤τ . With these notation, we have that any t_1, t_2 ∈ I satisfy a t_2 - t_1^2 - b t_2 - t_1 + c ≥ 0 . Therefore, assuming first that C r^2m≤ E/2, we have b ≥√(E/2) > 0, so that Lemma <ref> applies. We obtain *I ∩[0, E^1/2(1/m - 1)] ≤8 a cb^2τ≤8 C E^1 - 1/2m× 2rE/2 E^1/2(1/m - 1) = 32 C r√(E) . If on the contrary we have r≥ (E/2C)^1/2m, as soon as E ≥ 2^2m + 1 C we have r ≥1/2 (E/2C)^1/2m, and we check that *I ∩[0, E^1/2(1/m - 1)]≤ E^1/2(1/m - 1) = r√(E)×E^1/2mr≤r√(E)× 2^1 + 1/2m C^1/2m . This is valid for any r > 0, but in fact r = 0 works as well since B_0(0) = ∅. In addition, this is independent of the point ρ∈{p = E}, whence the result. Case m < 1/2. In the situation where the potential is “sublinear", the inequality x^t^2m - 1≲ E^1 - 1/2m is false in general since the power 2m - 1 is nonpositive (such an inequality would require V(x^t) to be controlled from below by E, which is possible near turning points of the trajectory but not in the well). Thus, a priori we can only have ∇ V(x^t)≤ C, which leads to 2r ≥t_2 - t_1ξ^t_1 - C t_2 - t_1^2 ≥t_2 - t_1√(max(0, E - C r^2m)) - C t_2 - t_1^2 ≥t_2 - t_1√(max(0, E - C r)) - C t_2 - t_1^2 . This coincides with the previous case for the critical value m = 1/2: for any t_1, t_2 ∈ I, it holds a t_2 - t_1^2 - b t_2 - t_1 + c ≥ 0 , where a, b, c are defined in (<ref>) (with m = 1/2). Then, the first step of the proof tells us that there exists C > 0 such that for all E large enough, it holds *t ∈[0, √(E)]ϕ^t(ρ) ∈ B_r(0) ×^d≤ C r√(E) , ∀ r ≥ 0, ∀ρ∈{p = E} . We shall use this additional information to improve (<ref>), and then bootstrap this procedure to reach the critical time E^1/2 (1/m - 1). We will work this out by induction, taking (<ref>) as our basis step. 
Consider n ≥ 0 and suppose there exist γ_n ∈ [1/2, 1/2(1/m - 1)) and C_n ≥ 1 such that when E is large enough, one has *t ∈[0, E^γ_n]ϕ^t(ρ) ∈ B_r(0) ×^d≤ C_n r√(E) , ∀ r ≥ 0, ∀ρ∈{p = E} . We first deduce from the Taylor formula a bound slightly more precise than (<ref>): 2r ≥t_2 - t_1ξ^t_1 - t_2 - t_1∫_t_1^t_2*∇ V(x^t) t ≥t_2 - t_1√(max(0, E - C r^2m)) - t_2 - t_1∫_t_1^t_2*∇ V(x^t) t . Take δ∈ [0, 1] to be chosen later. We have ∫_t_1^t_2*∇ V(x^t) t ≤ C ∫_t_1^t_2*x^t^2m - 1 t = C ∫_0^+∞( ∫_t_1^t_2_u ≤x^t^2m - 1 t ) u ≤ C ∫_0^+∞*t ∈ [t_1, t_2]u ≤x^t^2m - 1 u ≤ C ∫_0^E^δ(1 - 1/2m)t_2 - t_1 u + C ∫_E^δ(1 - 1/2m)^+∞*t ∈ [t_1, t_2]x^t≤ u^- 1/1 - 2m u . The first inequality follows from our assumptions on V, the equality is a consequence of Fubini's Theorem, then we use that 2m - 1 ≤ 0 to deduce x^s^2m - 1≤x^s^2m - 1, and finally we split the integral over u into two pieces. To estimate the second piece, we split the interval [t_1, t_2] into N = ⌈t_2 - t_1/E^γ_n⌉ intervals of length less than E^γ_n. On the k-th piece, we use the induction hypothesis (<ref>), with ρ_k = ϕ^t_1 + k-1/Nt_2 - t_1(ρ) instead of ρ, namely setting t̃_k = t_1 + k-1/Nt_2 - t_1, it holds *t ∈ [t̃_k, t̃_k+1]x^t≤ u^- 1/1 - 2m≤*s ∈ [0, E^γ_n]x^s + t̃_k≤ u^- 1/1 - 2m≤C_n√(E) u^- 1/1 - 2m . Summing over k ∈{1, 2, …, N} yields *t ∈ [t_1, t_2]x^t≤ u^- 1/1 - 2m≤C_n√(E) u^- 1/1 - 2m⌈t_2 - t_1E^γ_n⌉ , provided E is large enough. Integrating over u, we obtain a bound for the second term in (<ref>): ∫_E^δ(1 - 1/2m)^+∞*t ∈ [t_1, t_2]x^t≤ u^- 1/1 - 2m u ≤C_nE^1/2(t_2 - t_1E^γ_n + 1) ∫_E^δ(1 - 1/2m)^+∞ u^- 1/1 - 2m u = C_nE^1/2(t_2 - t_1E^γ_n + 1) ×-11 - 1/1 - 2m E^δ(1 - 1/2m)(1 - 1/1 - 2m) = (1/2m - 1) ×C_nE^1/2(t_2 - t_1E^γ_n + 1) E^δ . In the end we obtain ∫_t_1^t_2*∇ V(x^t) t ≤C2t_2 - t_1( E^δ(1 - 1/2m) + E^δ - 1/2 - γ_n) + C E^δ - 1/2 , for some constant C > 0. By choosing δ = m (2 γ_n + 1) (we have indeed δ∈ [2m, 1) ⊂ [0, 1) when γ_n ∈ [1/2, 1/2(1/m - 1))), we obtain ∫_t_1^t_2*∇ V(x^t) t ≤ C t_2 - t_1 E^(2m - 1) γ_n + m - 1/2 + C E^δ - 1/2 . Going back to (<ref>), if t_1, t_2 ∈ I, i.e. x^t_1 and x^t_2 lie in B_r(0), we deduce 2r ≥t_2 - t_1(√(max(0, E - C r^2m)) - C E^δ - 1/2) - C E^1/2 - γ_n+1t_2 - t_1^2 , where we set γ_n+1 = (1 - 2m) γ_n + 1 - m. Now set a = C E^1/2 - γ_n+1 , b = √(max(0, E - C r^2m)) - C E^δ - 1/2 and c = 2r . Assuming first that C r^2m≤ E/2 and recalling that δ < 1, we know that for E large enough, we have b ≥√(E/3). Any t_1, t_2 ∈ I satisfy a t_2 - t_1^2 - b t_2 - t_1 + c ≥ 0 , so we apply Lemma <ref> with τ = E^γ_n+1≥b/2a to get *I ∩ [0, E^γ_n+1]≤8 a cb^2 E^γ_n+1≤16 C E^1/2E/3 r = 48 C√(E) r . When C r^2m≥ E/2, assuming that E is large enough we have r ≥1/2 (E/2C)^1/2m and we conclude as in the previous step that *t ∈ [0, E^γ_n+1]x^t ∈ B_r(0)≤r√(E)E^γ_n+1 + 1/2r≤r√(E) 2^1 + 1/2m C^1/2m E^γ_n+1 - 1/2(1/m - 1) . Since by the induction hypothesis we have γ_n ∈ [1/2, 1/2(1/m - 1)), then γ_n+1 belongs to the same interval because by definition, γ_n+1≥ 1 - m ≥1/2, and we have γ - γ_n+11/2 = (1 - 2m) γ - γ_n1/2 where γ = 12(1m - 1) . Therefore we see that γ_n+1 - γ < 0, so as soon as E is large enough, it holds *t ∈ [0, E^γ_n+1]x^t ∈ B_r(0)≤ C_n+1r√(E) for any r ≥ 0, and for some constant C_n+1. Thus we have constructed by induction a non-decreasing sequence (γ_n)_n ∈ for which (<ref>) holds. We deduce from (<ref>) that it converges to γ = 1/2(1/m - 1), which yields the final result. Firstly we treat the case where m < 1. 
For small enough, E^1/2(1/m - 1) - tends to + ∞ as E → + ∞, so we can write using Proposition <ref>: *t ∈ [0, T]ϕ^t(ρ) ∈ B_r(0) ×^d≤*t ∈[0, E^1/2(1/m - 1) - ]ϕ^t(ρ) ∈ B_r(0) ×^d≤ C r√(E) , provided E is large enough, for all ρ∈{p = E} and all r ≥ 0. Now in the case where m ≥ 1, we know that E^1/2(1/m - 1) remains bounded as E → + ∞. By Proposition <ref> again, there is a E_0 > 0 such that for any E ≥ E_0, it holds *t ∈[0, E^1/2(1/m - 1)]ϕ^t(ρ) ∈ B_r(0) ×^d≤ C r√(E) , whenever r ≥ 0 and ρ∈{p(ρ) = E}. Let n = ⌈T/E^1/2(1/m - 1)⌉. Writing t_k = k/n T and ρ_k = ϕ^t_k(ρ) for any k ∈{0, 1, …, n}, we have *t ∈ [0, T]ϕ^t(ρ) ∈ B_r(0) ×^d ≤∑_k = 1^n *t ∈ [t_k-1, t_k]ϕ^t(ρ) ∈ B_r(0) ×^d = ∑_k = 1^n *t ∈ [0, 1n T]ϕ^t + t_k-1(ρ) ∈ B_r(0) ×^d ≤∑_k = 1^n *t ∈[0, E^1/2(1/m - 1)]ϕ^t(ρ_k-1) ∈ B_r(0) ×^d . The last inequality comes from the definition of n. Estimate (<ref>) applies to each piece of this sum. We conclude that *t ∈ [0, T]ϕ^t(ρ) ∈ B_r(0) ×^d≤ n C r√(E)≤1 + TE^1/2(1/m - 1)× C r√(E) = C r (1 + T)E^1/2m (we can ensure that n ≤1+T/E^1/2(1/m - 1) in the second equality up to enlarging E_0 so that it is larger than 1, independently of T). This completes the proof. §.§ Continuity of the composition by the flow in symbol classes From now on we go back to a subquadratic potential, that is to say we suppose our classical Hamiltonian is of the form p(x, ξ) = V(x) + 1/2ξ^2, with V satisfying Assumption <ref> with m ∈ (0, 1]. In the course of our study, we will need to check that the composition of a symbol with the Hamiltonian flow is still well-behaved in a suitable symbol class, in the sense that its derivatives remain controlled properly. The following lemma is common in the context of the quantum-classical correspondence: see for instance Bouzouina and Robert <cit.>. We reproduce a proof to obtain an estimate adapted to our context and to keep track of the dependence of constants on the parameters of the problem. We recall that a function a ∈^∞(^2d) is said to be a symbol in the class S(1) if all its derivatives are bounded. The quantities *a_S(1)^ℓ = max_α∈^2d 0 ≤α≤ℓsup_ρ∈^2d*∂^α a(ρ) , ℓ∈ , endow the vector space S(1) with a Fréchet structure (see Appendix <ref> for further details). Let a be a symbol in S(1). Then the function a ∘ϕ^t still belongs to S(1), and stays in a bounded subset of S(1) locally uniformly with respect to t. More precisely, for any fixed T > 0, for any nonzero multi-index α∈^2d, we have the derivative estimate *∂^α (a ∘ϕ^t)_∞≤ C_α(T, p) max_1 ≤β≤α*∂^β a_∞ , uniformly in t ∈ [-T, T]. The constants C_α(T, p) depend only on T and on the sup-norm of derivatives of order {2, 3, …, α + 1} of p. In all the proof, t ranges in a compact set, say [-T, T] for some fixed T > 0. Step 1 ­– Control of differentials of the Hamiltonian flow. Differentiating the Hamilton equation (<ref>) defining the flow ϕ^t, we get tϕ^t(ρ) = J p(ϕ^t(ρ)) ϕ^t(ρ) . By assumption on the potential V (see (<ref>)), we observe that the Hessian of p is bounded. Since ϕ^0(ρ) = 𝕀 for any ρ∈^2d, we classically deduce using Grönwall's Lemma that *ϕ^t(ρ)≤^T *J p_∞≤^T * p_∞ , ∀ρ∈^2d, ∀ t ∈ [-T, T] . For higher order differentials, we proceed by induction. Suppose that for some k ≥ 1, all the differentials of order ≤ k of ϕ^t are bounded uniformly in t on ^2d, with a bound involving derivatives of order k+1 of p. Differentiating the Hamilton equation k + 1 times, the Faà di Bruno formula shows that / t^k+1ϕ^t(ρ) is a sum of terms of the form: J ^ℓ (∇ p)(ϕ^t(ρ)). 
(^k_1ϕ^t(ρ), ^k_2ϕ^t(ρ), …, ^k_ℓϕ^t(ρ)) , where 1 ≤ℓ≤ k + 1 and k_1 + k_2 + ⋯ + k_ℓ = k + 1. Such terms are bounded uniformly in t ∈ [-T, T] by the induction hypothesis as soon as ℓ≥ 2 (note that all the differentials of order ≥ 2 of p are bounded). So in fact the ODE on ^k+1ϕ^t(ρ) can be written t^k+1ϕ^t(ρ) = J p(ϕ^t(ρ)) ^k+1ϕ^t(ρ) + R(t, ρ) , where R(t, ρ) satisfies *R(t, ρ)_∞≤ C(T, p) , ∀ρ∈^2d, ∀ t ∈ [-T, T] , where the constant C(T, p) depends only on the sup-norm of derivatives of order {2, 3, …, k + 2} of p. We conclude by Grönwall's Lemma again, together with Duhamel's Formula that ^k+1ϕ^t(ρ) is bounded similarly: given that k + 1 ≥ 2, we have ^k+1ϕ^0(ρ) = 0 for every ρ∈^2d, so that *^k+1ϕ^t(ρ)≤∫_0^t C(T, p) ^ p_∞ (t - s) s ≤ T C'(T, p) . This finishes the induction. Step 2 ­– Estimates of derivatives of a ∘ϕ^t. We estimate the derivatives in x or ξ. Let α∈^2d∖{0}, and denote by (x_1^t, x_2^t, …, x_d^t, ξ_1^t, ξ_2^t …, ξ_d^t) the components of the flow. The chain rule together with the Faà di Bruno formula yield that ∂^α (a ∘ϕ^t) can be expressed as a sum of terms of the form (∂_x^α̃∂_ξ^β̃ a) ∘ϕ^t ×∏_j_1 ∈α̃∂^α_j_1 x_j_1×∏_j_2 ∈β̃∂^β_j_2ξ_j_2 , where α̃, β̃∈^d are such that 1 ≤α̃ + β̃≤α and α_j_1, β_j_2∈^2d∖{0} satisfy ∑_j_1α_j_1 + ∑_j_2β_j_2 = α. (By j_1 ∈α̃, j_2 ∈β̃, we mean that j_1, j_2 ∈{1, 2, …, d} are indices for which α̃ and β̃ are nonzero.) The claim follows immediately from the bounds on the derivatives of x_j^t and ξ_j^t proved in Step 1. § PROOF OF THE MAIN THEOREM We start with a lemma that will enable us to replace _ω_R ∖ B_r(0) in the observability inequality with a well-behaved symbol. [Mollifying the observation set] Let ω⊂^d and denote by ω_R the open set ω_R = ⋃_x ∈ω B_R(x) , R > 0 . There exists a symbol a = a_R ∈ S(1) depending only on the x variable such that _ω_R/2(x) ≤ a_R(x) ≤_ω_R(x) , ∀ x ∈^d . In addition, it satisfies the seminorm estimates: ∀ℓ∈, ∃ C_ℓ > 0 : ∀ R ≥ 1 , *a_R_S(1)^ℓ≤ C_ℓ and *∇ a_R_S(1)^ℓ≤C_ℓR . The constants involved do not depend on ω. Fix κ∈_^∞(^d) a mollifier with the following properties: κ(x) ≥ 0, , ∀ x ∈^d , κ⊂ B_1(0) and ∫_^dκ(x) x = 1 . For any r > 0, set κ_r = r^-dκ(/r), so that κ_r_L^1(^d) = 1. Set for any R > 0: a_R(x) = (κ_1/4 R∗_ω_3/4 R)(x) , ∀ x ∈^d . We check that a_R defined in this way satisfies the required properties. We first observe that, by definition, a_R is non-negative, and that a_R ≤ 1 by Young's inequality. Now by standard properties of convolution, the support of a_R is contained in ω_3/4 R + B_1/4 R(0) ⊂ω_R (recall that the support of κ is a compact subset of B_1(0)), which proves that a_R ≤_ω_R. On the other hand, if x ∈ω_R/2, then κ_1/4 R(x - ) is supported in ω_3/4 R, so that a_R(x) = 1, which proves that a_R ≥_ω_R/2. Differentiating under the integral sign, we see that *∂^α a_R_∞ is of order 1/R^α for any multi-index α∈^d, which yields the desired seminorm estimates (R ≥ 1 is important here). The constants depend only on the supremum norms of derivatives of κ, and not on ω. The symbol a_R can be considered as a semiclassical symbol, with Planck parameter 1/R^2, since by construction each derivative yields a gain of 1/R. However in view of Lemma <ref>, this property is not preserved by composition by the Hamiltonian flow, since all the derivatives of a_R^ ∘ϕ^t of order ≥ 1 behave as 1/R. This comes from the fact that, when differentiating a_R ∘ϕ^t twice or more, the second, third, and higher order derivatives can hit ϕ^t instead of a_R. 
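The construction of the lemma is elementary enough to be reproduced numerically. The Python/NumPy sketch below is a one-dimensional toy illustration (the grid, the set ω and the bump profile are arbitrary choices): it builds a discrete analogue of a_R = κ_{R/4} ∗ 1_{ω_{3R/4}} and displays the decay of sup|a_R'| as R grows, in line with the seminorm estimates above; one can check on the same grid that 1_{ω_{R/2}} ≤ a_R ≤ 1_{ω_R}.

```python
import numpy as np

def mollified_indicator(omega_mask, x, R):
    """Discrete analogue of a_R = kappa_{R/4} * 1_{omega_{3R/4}} on a 1d grid x."""
    dx = x[1] - x[0]
    # thicken omega by 3R/4 (union of balls of radius 3R/4 centered in omega)
    k = int(round(0.75 * R / dx))
    thick = np.zeros(x.size, dtype=bool)
    for i in np.flatnonzero(omega_mask):
        thick[max(0, i - k): i + k + 1] = True
    # compactly supported bump of width R/4, normalized to unit mass
    r = 0.25 * R
    y = np.arange(-r, r + dx, dx)
    bump = np.where(np.abs(y) < r, np.exp(-1.0 / np.maximum(1e-12, 1.0 - (y / r) ** 2)), 0.0)
    bump /= bump.sum() * dx
    return np.clip(np.convolve(thick.astype(float), bump, mode="same") * dx, 0.0, 1.0)

x = np.linspace(-30.0, 30.0, 12001)
omega = (np.abs(x - 8.0) < 1.0) | (np.abs(x + 5.0) < 2.0)      # a toy observation set
for R in [1.0, 2.0, 4.0, 8.0]:
    a_R = mollified_indicator(omega, x, R)
    print(f"R = {R:3.0f}   sup|a_R'| = {np.max(np.abs(np.gradient(a_R, x))):.3f}")
```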
We prove a version of Egorov's theorem taking into account the above remark. Our approach is very classical; see <cit.> or <cit.> for refinements. We refer again to Appendix <ref> for an account on the Weyl quantization . Let a ∈ S(1). Then the symbol a ∘ϕ^t lies in S(1) with seminorm estimates ∀ T > 0, ∀ℓ∈, ∃ C_ℓ(T, p) > 0 : *a ∘ϕ^t_S(1)^ℓ≤ C_ℓ(T, p) *a_S(1)^ℓ , ∀ t ∈ [- T, T] , and, it holds ^ t Pa^- t P = a ∘ϕ^t + R_a(t) , where the remainder term R_a(t) is a bounded operator satisfying ∀ T > 0, ∃ C(T, p) > 0 : *R_a(t)_L^2 → L^2≤ C(T, p) *∇ a_S(1)^k_d , ∀ t ∈ [-T, T] , for some integer k_d depending only on the dimension. The claim that a ∘ϕ^t ∈ S(1) and the subsequent seminorm estimates are provided by Lemma <ref>. To prove (<ref>), we follow the classical method that consists in differentiating the time dependent operator Q(s) = ^- s Pa ∘ϕ^s^ s P , and estimating this derivative. For the sake of simplicity, let us introduce a_s = a ∘ϕ^s. All the operators in this composition map (^d) to itself continuously, so that Q(s) u can be differentiated using the chain rule, for any u ∈(^d). From now on, we will omit to write u. Recalling that, by definition of ϕ^s, we have / s a_s = *pa_s, it holds / sa_s = *pa_s (rigorously, one may apply the Dominated Convergence Theorem to the pairing *va_s u_', (^d) for two Schwartz functions u and v). Therefore we get s Q(s) = - ^- s P( *Pa_s + *pa_s) ^ s P = - ^- s PR_3(s)^ s P . The symbol R_3(s) above is nothing but the remainder of order 3 in the pseudodifferential calculus between p and a_s. Proposition <ref> provides a bound on this remainder in terms of seminorms of a_s. Recall that, in the subcritical case m ≤ 1, ∂^α p ∈ S(1) whenever α≥ 2. Therefore according to Proposition <ref>, for any seminorm index ℓ∈, there exist a constant C_ℓ > 0 as well as an integer k ≥ 0 such that *R_3(s)_S(1)^ℓ≤ C_ℓ*d^3 a_s_S(1)^k *d^3 p_S(1)^k . Then we use Lemma <ref> to obtain *R_3(s)_S(1)^ℓ≤ C_ℓ(T, p) *∇ a_S(1)^k , for any s ∈ [- T, T]. Therefore, the Calderón-Vaillancourt Theorem (Theorem <ref>) tells us that the norm of R_3(s) is bounded, uniformly in s ∈ [-T, T], by a seminorm of ∇ a, and a constant depending only on T and p. Plugging this into (<ref>), given that the propagator ^ s P is an isometry, we obtain the same bound on / s Q(s). Integrating this in s, we obtain from the mean-value inequality ∀ t ∈ [-T, T] , *Q(t) - Q(0)_L^2 → L^2≤ 2 T sup_s ∈ [-T, T]* s Q(s)_L^2 → L^2≤ C(T, p) *∇ a_S(1)^k_d , where the integer k_d depends only on the dimension. Conjugating by the propagator, which is an isometry on L^2, yields the desired result. We are now in a position to prove our main result. We fix ω⊂^d, a compact set K ⊂^d, and we introduce ω̃(R) = (ω∖ K_R)_R, for R > 0. One can verify that ω̃(R) ⊂ω_R ∖ K. By Lemma <ref>, there exists a symbol a_R ∈ S(1) depending on the parameter R > 0 such that _(ω∖ K_R) ×^d≤ a_R ≤_ω̃(R) ×^d , ∀ R > 0 , and ∇ a_R_S(1)^ℓ≤ c_d,ℓ/R for any ℓ∈, with a constant c_d,ℓ depending only on the dimension and ℓ, uniformly in R ≥ 1. Notice that the symbol depends on ω and K but not its seminorms. On the quantum side, one can regard the functions in (<ref>) as multiplication operators, and understand the inequalities in the sense of self-adjoint operators. Conjugating by the Schrödinger propagator does not change the inequalities, so that: ^ t P_ω∖ K_R^- t P≤^ t Pa_R^- t P≤^ t P_ω̃(R)^- t P , ∀ t ∈ . 
Then we use Egorov's theorem (Proposition <ref>) and we integrate with respect to t to get ∫_0^T_0^ t P_(ω∖ K_R) ×^d^- t P t ≤∫_0^T_0a_R ∘ϕ^t t + R_R ≤∫_0^T_0^ t P_ω̃(R)^- t P t , where the remainder term R_R is a bounded operator with *R_R_L^2 → L^2≤ C *∇ a_R_S(1)^k_d≤C'R , ∀ R ≥ 1 . The constant C' above depends only on p and T_0 (and of course on the dimension d), but not on ω or K. One can check that the quantization and the integral over t in the middle term of (<ref>) commute.[One can see this by pairing the operator under consideration with two Schwartz functions and use the Dominated Convergence Theorem.] On the classical side, using the same notation as in (<ref>), we introduce the quantity K_p_0^∞(a_R _(0, T)) = lim inf_ρ→∞∫_0^T a_R(ϕ_0^t(ρ)) t , and similarly for p instead of p_0, replacing the flow ϕ_0^t by ϕ^t. We claim that for any T > 0, there exists a constant C” > 0 depending only on the dimension, on T and on the Hamiltonians p_0 and p, such that for any compact K̃, and for any R > 0: {K_p_0^∞(ω, T) ≤K_p^∞(a_R _(0, T)) + C”R K_p^∞(a_R _(0, T)) ≤K_p_0^∞(ω_R ∖K̃, T) + C”R. . The constant C” does not depend on ω or K from which we built a_R neither. The proof of the first inequality in (<ref>) reads as follows: Corollary <ref> shows that the quantity in the left-hand side does not change if we remove a compact set: K_p_0^∞(ω, T) = K_p_0^∞(ω∖ K_R, T) , ∀ R > 0 . Now we use that _(ω∖ K_R) ×^d≤ a_R to get K_p_0^∞(ω∖ K_R, T) ≤K_p_0^∞(a_R _(0, T)) . Then we switch from p_0 to p, having the same principal symbol, using Corollary <ref>: the function a_R _(0, T) is compactly supported in time and c_d, 1/R-Lipschitz in the variable ρ, so that K_p_0^∞(a_R _(0, T)) ≤K_p^∞(a_R _(0, T)) + C”R , ∀ R > 0 . Putting this together with (<ref>) and (<ref>) yields the first inequality in (<ref>). The second inequality in (<ref>) is proved using similar arguments: Corollary <ref> leads to K_p^∞(a_R _(0, T)) ≤K_p_0^∞(a_R _(0, T)) + C”R , ∀ R > 0 . Then we use from the construction of a_R in (<ref>) that a_R is supported in ω_R ×^d, and we apply Corollary <ref> to remove a compact set K̃. This leads to the sought inequality. Sufficient condition. We wish to bound the left-hand side of (<ref>) from below. The high-energy classical observability constant K_p_0^∞ := K_p_0^∞(ω, T_0) is assumed to be positive. From the first inequality in (<ref>), with T_0 in place of T, we can write ∃ A > 0 : ∀*ρ≥ A , ∫_0^T_0 (a_R ∘ϕ^t)(ρ) t ≥12K_p_0^∞ - C”R = c_R . Take a cut-off function χ∈_^∞(^2d) such that χ≡ 1 on the unit ball, and set χ_R = χ(/(A + R)). Then χ_R has compact support, equals one on the ball B_A(0), and it satisfies ∂^αχ_R_∞ = O(1/R^α), with constants independent of ω again.[The parameter A depends on R, but this will not be a problem in the sequel. The phase space region localized at distance ≤ A from the origin will be handled by Proposition <ref>.] We split the symbol in the left-hand side of (<ref>) using this cut-off function: we write ∫_0^T_0 a_R ∘ϕ^t t = b_0 + b_∞ where we set b_0 = χ_R ×( ∫_0^T_0 a_R ∘ϕ^t t - c_R ) and b_∞ = (1 - χ_R) ∫_0^T_0 a_R ∘ϕ^t t + c_R χ_R . Using the Leibniz Formula and Lemma <ref>, we can prove that b_0 ∈ S(1). Moreover, b_0 is compactly supported in ^2d, so that b_0 is a compact operator by <cit.>. As for b_∞, the Leibniz Formula and Lemma <ref> lead to the following estimates on derivatives: for all α∈^2d, one has *∂^α b_∞_∞≤ C_αmax_α_1 + α_2 = α*∂^α_1 (1 - χ_R)_∞×∫_0^T_0*∂^α_2 (a_R ∘ϕ^t)_∞ t + c_R C_αR^α≤ C_α, T_0, p( 1R^α + 1R) . 
The last inequality comes from distinguishing the cases α_2 = 0 and α_2 ≠ 0. In the first case, we have ∂^α_1(1 - χ_R) = O(R^-α) and a_R ∘ϕ^t≤ 1. Otherwise, Lemma <ref> tells us that ∂^α_2 (a_R ∘ϕ^t) behaves like ∇ a_R_S(1)^α = O(1/R), R ≥ 1. In particular, b_∞∈ S(1) and it holds b_∞_S(1)^ℓ = O(1/R) for any ℓ∈, with a constant independent of ω and K. In addition, we have b_∞≥ c_R in view of (<ref>). Therefore, the Gårding inequality (Proposition <ref>) yields b_∞≥(c_R - C_1R) 𝕀 . The constant C_1 is independent of ω and K in view of the seminorm estimates of b_∞ discussed above. Going back to (<ref>), we have proved ∫_0^T ^ t P_ω̃(R)^- t P t ≥ c_R 𝕀 + b_0 + R , ∀ R ≥ 1 . As we have seen in the course of the proof, b_0 is a compact self-adjoint operator and *R_L^2 → L^2≤ C_2/R, with a constant C_2 depending only on the dimension, on T_0 and on the Hamiltonians p_0, p. In view of the definition of c_R in (<ref>), taking R = 4(C” + C_2 + T_0)/K_p_0^∞, we obtain the desired observability inequality, up to a compact operator: ∫_0^T_0^ t P_ω̃(R)^- t P t - b_0≥14K_p_0^∞𝕀 . Notice that indeed R ≥ 1, since K_p_0^∞≤ T_0. Proposition <ref> then applies (see Remark <ref>). It yields the sought observability inequality on ω̃(R) ⊂ω_R ∖ K, in any time T > T_0. Necessary condition. Consider the symbol a_R from (<ref>) with K = ∅. We fix R ≥ 1 (not necessarily large), K̃ compact, and we estimate the observation cost C_ in (<ref>) using the quantity K_p_0^∞(ω_R ∖K̃, T). We will track carefully the dependence of remainders on the parameter R. Write for short *a_R_T(ρ) = ∫_0^T (a_R ∘ϕ^t)(ρ) t , ρ∈^2d , and pick a point ρ_0 ∈^2d such that *a_R_T(ρ_0) ≤inf_ρ∈^2d*a_R_T(ρ) + 1R . Notice that in virtue of the second inequality of (<ref>), we have *a_R_T(ρ_0) ≤K_p^∞(a_R _(0, T)) + 1R≤K_p_0^∞(ω_R ∖K̃, T) + C” + 1R . Differentiating under the integral sign and using Lemma <ref>, we check that a_R_T is Lipschitz as a function of ρ: ∀ρ∈^2d , *∇a_R_T(ρ)≤ T sup_t ∈ [0, T]*∇ (a_R ∘ϕ^t)(ρ)≤ C(T, p) *∇ a_R_∞≤cR . Consider a Gaussian wave packet centered at ρ_0, namely, writing ρ_0 = (x_0, ξ_0), we define w(x) = π^-d/4exp(- x - x_0^2/2) ^ξ_0 · x , x ∈^d . It is properly normalized: w_L^2 = 1. A classical computation (cf. <cit.>) shows that the Wigner transform of w is the Gaussian in the phase space centered at ρ_0, defined by ρ↦π^-dexp(- ρ - ρ_0^2), that is to say *w*a_2R_T w_L^2 = π^-d∫_^2d*a_2R_T(ρ) exp(- ρ - ρ_0^2) ρ = π^-d∫_^2d*a_2R_T(ρ_0 + ρ) exp(- ρ^2) ρ . Note that it is a non-negative quantity. Taking an arbitrary A > 0 and splitting the integral over ^2d into two pieces, we obtain *wa_2R_T w_L^2 ≤∫_B_A(0)(a_2R_T(ρ_0) + A *∇a_2R_T_∞) π^-d^- ρ^2ρ + ∫_^2d∖ B_A(0)*a_2R_T_∞π^-d^- ρ^2ρ ≤*a_2R_T(ρ_0) + A cR + T ∫_^2d∖ B_A(0)π^-d^- ρ^2ρ ≤K_p_0^∞(ω_R ∖K̃, T) + C” + 1 + A cR + T ^- A^2/2 2^d . We used (<ref>) and (<ref>) to obtain the last two inequalities. We take A = 2 log R^1/2 to obtain *wa_2R_T w_L^2≤K_p_0(ω_R ∖K̃, T) + C̃1 + log R^1/2R for some constant C̃ > 0 independent of R. Going back to the left-hand side of (<ref>) (recall that we chose K = ∅ here) with T in place of T_0, as well as (<ref>), taking the inner product with w on both sides, we deduce that ∫_0^T *^- t P w_L^2(ω)^2 t ≤K_p_0(ω_R ∖K̃, T) + C̃1 + log R^1/2R + C'R . By assumption, (ω, T) is true with a cost C_ > 0. Recalling that w_L^2 = 1, we can bound the left-hand side from below by C_^-1. We arrive at K_p_0(ω_R ∖K̃, T) ≥1C_ - C̃1 + log R^1/2R - C'R , which yields the sought result. 
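Before moving on to conical and spherical observation sets, it may help to see the classical quantity entering the theorem on a concrete example. The sketch below (the frequencies, cone half-angle, observation time and random initial data are illustrative choices of ours) evaluates the time ∫_0^T 1_{ω×ℝ^d}(ϕ^t(ρ)) dt along explicit trajectories of a two-dimensional harmonic oscillator, for ω a union of two cones around the positive coordinate half-axes: for an anisotropic oscillator this time remains bounded away from zero over a sample of large data, whereas for the isotropic oscillator a datum launched along the diagonal never meets the cones.

```python
import numpy as np

def time_in_cones(nu1, nu2, rho0, T, alpha, n=200_000):
    """Time the projected trajectory x^t spends in the two open cones
    {|x2| < tan(alpha) x1} and {|x1| < tan(alpha) x2} during [0, T]."""
    x10, x20, xi10, xi20 = rho0
    t = np.linspace(0.0, T, n)
    x1 = x10 * np.cos(nu1 * t) + xi10 * np.sin(nu1 * t) / nu1
    x2 = x20 * np.cos(nu2 * t) + xi20 * np.sin(nu2 * t) / nu2
    inside = (np.abs(x2) < np.tan(alpha) * x1) | (np.abs(x1) < np.tan(alpha) * x2)
    return T * inside.mean()

alpha, T = 0.3, 4 * np.pi                     # cone half-angle and observation time (illustrative)
rng = np.random.default_rng(0)

# anisotropic oscillator (nu1 = 1, nu2 = 2): the time in the cones stays away from zero
times = [time_in_cones(1.0, 2.0, 1e4 * rng.standard_normal(4), T, alpha) for _ in range(50)]
print("anisotropic, minimum over 50 random large data:", round(min(times), 3))

# isotropic oscillator: a datum launched along the diagonal x2 = x1 never enters the cones
print("isotropic, diagonal datum:", time_in_cones(1.0, 1.0, (0.0, 0.0, 1e4, 1e4), T, alpha))
```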
§ PROOFS OF OBSERVABILITY RESULTS FROM CONICAL SETS In this section, we give proofs of the results presented in Subsections <ref> and <ref>, which concern observation sets that are conical in the sense of (<ref>). Propositions <ref>, <ref> and <ref> are proved in Subsections <ref>, <ref> and <ref> respectively. §.§ Proof of Proposition <ref> Let us prove the converse statement: assume there exists a normalized eigenvector e of A such that e ∉ω and -e ∉ω. Let ν > 0 be such that A e = ν^2 e. We claim the following. There exists a constant c > 0 such that for any R > 0, it holds ∀ s ∈ , ( s e ∈ω_R ⟹ s≤ c R ) . If s ∈ is such that s e ∈ω_R, then there exists y ∈ω∖{0} such that s e - y≤ R. Moreover, since e belongs to the complement of the closed set ω∪ - ω, there exists > 0 such that ∀ x ∈(ω∪ - ω) ∖{0} , *e - xx≥ . We apply this to x = (s) y to obtain s≤1*s e - syy≤1s e - y + 1y*1 - sy≤1s e - y + 1s e - y≤2 R . We used the inverse triangle inequality to obtain the second to last inequality. Using this lemma, for any T > 0 and any η > 0, we can estimate the quantity ∫_0^T _ω_R ×^d(ϕ^t(0, η e)) t = ∫_0^T _ω_R(ηνsin(ν t) e) t ≤∫_0^2 N π /ν_ω_R(ηνsin(ν t) e) t , where N = ⌈ν T/2 π⌉. Using the periodicity of the sine and a change of variable, we deduce: ∫_0^T _ω_R ×^d(ϕ^t(0, η e)) t ≤Nν∫_0^2 π_ω_R(ηνsin(t) e) t = 2 Nν∫_-π/2^π/2_ω_R(ηνsin(t) e) t . Provided η≠ 0, we make the change of variables s = ηsin t, for which we have t = (η^2 - s^2)^-1/2 s; this leads to ∫_0^T _ω_R ×^d(ϕ^t(0, η e)) t ≤2 Nν∫_-η^η_ω_R(sν e) s√(η^2 - s^2) . From Lemma <ref> above, we conclude that for any η large enough: ∫_0^T _ω_R ×^d(ϕ^t(0, η e)) t ≤2 Nν∫_-c R ν^c R ν s√(η^2 - s^2) . An extra change of variables yields ∫_0^T _ω_R ×^d(ϕ^t(0, η e)) t ≤2 Nν∫_-c R ν/η^c R ν/η s√(1 - s^2) = O(Rη) as η tends to infinity and R is fixed. We deduce that for any R > 0, it holds lim inf_ρ→∞∫_0^T _ω_R ×^d(ϕ^t(ρ)) t = 0 . The necessary condition of Theorem <ref> then proves that observability cannot hold from ω in time T. §.§ Proof of Proposition <ref> We first reduce to the case where the matrix A is diagonal in the canonical basis of ^2. Then we investigate the isotropic and anisotropic cases separately. *Step 1 ­– Reduction to positive cones containing half coordinate axes. Let S : ^2 →^2 be a linear symplectic mapping. It holds ∇ (p ∘ S) = S^∗ (∇ p) ∘ S, and we observe that t S^-1ϕ^t(S ρ) = S^-1 J (S^-1)^∗ S^∗∇ p(ϕ^t(S ρ)) = J ∇ (p ∘ S)(S^-1ϕ^t(S ρ)) . This means that the conjugation of the Hamiltonian flow of p by S is the Hamiltonian flow of p ∘ S. Thus, for any measurable set C ⊂^2 ×^2: ∫_0^T _C(ϕ^t(ρ)) t = ∫_0^T _S^-1 C((S^-1ϕ^t S)(S^-1ρ)) t , and finally, since S^-1ρ→∞ if and only if ρ→∞, we deduce that lim inf_ρ→∞∫_0^T _C(ϕ^t(ρ)) t = lim inf_ρ→∞∫_0^T _S^-1 C((S^-1ϕ^t S)(ρ)) t . Denoting by Q the orthogonal matrix that diagonalizes A as follows: Q^-1 A Q = [ ν_-^2 0; 0 ν_+^2 ] , with Q [ 1; 0 ] = e_- and Q [ 0; 1 ] = e_+ , we apply the above observation (<ref>) to the map S = [ Q 0; 0 Q ] . It is indeed symplectic since Q = (Q^-1)^∗ is an orthogonal matrix. When the subset of the phase space C is of the form ω() given in the statement, the resulting set S^-1 C is ω̃() = C_^1 ∪ C_^2 where C_^1 = (x_1, x_2) ∈^2x_2 < tan(2) x_1 and C_^2 = (x_1, x_2) ∈^2x_1 < tan(2) x_2 . The corresponding Hamiltonian is (p ∘ S)(x, ξ) = 12( Q x · A Q x + *Q ξ^2 ) = 12( ν_-^2 x_1^2 + ν_+^2 x_2^2 + *ξ^2 ) . 
That is to say, we have reduced the problem to the study of observability from ω̃() for the above Hamiltonian: the Schrödinger equation is observable from ω() in time T for the Hamiltonian p is and only if it is observable from ω̃() in time T for the Hamiltonian p ∘ S. From now on, we write ω() instead of ω̃(), p instead of p ∘ S respectively, and (ν_1, ν_2) = (ν_-, ν_+) . *Step 2 ­– Isotropic case. The case where ν_+ = ν_- = ν follows from Proposition <ref>. Indeed, since < π/2, one has ω()∩ L_± = {0} where L_± = {x_2 = ± x_1} are eigenspaces of A = ν^2 𝕀. Therefore, isotropic oscillators are not observable from ω(). *Anisotropic case. We assume that the harmonic oscillator is anisotropic, i.e. ν_1 < ν_2, and we want to show that ω() observes the Schrödinger equation. Anticipating the use of Theorem <ref> where the observation set has to be enlarged, we will rather prove that the dynamical condition in (<ref>) is verified by the smaller set ω(/2) = C_/2^1 ∪ C_/2^2. We fix an initial point ρ^0 = (x_1^0, x_2^0; ξ_1^0, ξ_2^0) ∈^2 ×^2. We write the space components of the flow as follows: x_j^t = cos(ν_j t) x_j^0 + 1ν_jsin(ν_j t) ξ_j^0 = A_j sin(ν_j t + θ_j) , j ∈{1, 2}, t ∈ , with A_j = √((x_j^0)^2 + (ξ_j^0ν_j)^2) and cosθ_j = ξ_j^0/ν_jA_j , sinθ_j = x_j^0A_j . Our first goal will be to prove that the dynamical condition (<ref>) is satisfied in the time interval [0, T_0], where T_0 is given in (<ref>). We can consider ρ^0 to be nonzero since we are interested in what happens at infinity. Therefore A_1 > 0 or A_2 > 0. Also keep in mind that ρ^0 →∞ if and only if (A_1, A_2)→ + ∞. *Step 3 ­– Time spent in C_/2^2. First we look at the possibility to be in the cone C_/2^2. This will certainly be the case provided A_1 is very small compared to A_2, that is to say the projected trajectory (x_1^t, x_2^t) is almost contained in the ordinate axis. We prove the following: ∫_0^T_0_C_/2^2(x_1^t, x_2^t) t ≥πν_2(1 - A_1/A_2tan(/4)) . Suppose t ∈ [0, T_0] is such that sin(ν_2 t + θ_2) ≥δ, namely x_2^t ≥ A_2 δ. Assuming that A_2 > 0, one has x_1^t≤ A_1 ≤A_1A_2 δ x_2^t . We want to quantify the amount of t such that this holds. In the following estimate, we use the fact that T_0 ≥2π/ν_2 and the classical concavity inequality sin x ≥2/π x, ∀ x ∈ [0, π/2]: ∫_0^T_0_sin(ν_2 t + θ_2) ≥δ t ≥∫_0^2 π/ν_2_sin(ν_2 t) ≥δ t ≥1ν_2∫_0^2 π_sin t ≥δ t ≥2ν_2∫_0^π/2_2/π t ≥δ t = πν_2 (1 - δ) . Now in (<ref>), we wish A_1/A_2 δ to be strictly less than tan(/4), that is to say δ > A_1/(A_2 tan(/4)). Therefore, for any δ satisfying this condition, the time spent by the trajectory in C_/2^2 can be bounded from below by: ∫_0^T_0_x_2^t ≥ A_2 δ t ≥∫_0^T_0_sin(ν_2 t + θ_2) ≥δ t ≥πν_2 (1 - δ) , so that, maximizing the right-hand side with respect to δ, one obtains (<ref>). Notice that this inequality is useful only if A_1/A_2 is small enough. In the opposite case where A_1/A_2 is large, we use another argument (ν_1 and ν_2 do not play a symmetric role here). *Step 4 ­– Time spent in C_/2^1. Let us now consider the times when the trajectory is in the other cone C_/2^1. Set η = ⌊ν_2/ν_1⌋ + 1 - ν_2/ν_1∈ (0, 1]. The main claim in this step of the proof is the following: ∃ t_2 ∈ [0, T_0] : x_2^t_2 = 0 and x_1^t_2≥ A_1 δ_1 , where δ_1 = min(ν_1ν_2η, 1 - ν_1ν_2) . Denote by t_1 the first zero of sin(ν_1 t + θ_1) in [0, T_0]. It exists since by definition, T_0 ≥π/ν_2(1 + ν_2/ν_1) ≥π/ν_1. It turns out that t_1 is given by t_1 = πν_1(⌈θ_1π⌉ - θ_1π) . 
Then t_1 ∈ [0, π/ν_1), and we know that sin(ν_1 t + θ_1) has constant sign on I_1 := [0, t_1], on I_2 := [t_1, t_1 + π/ν_1] ∩ [0, T_0] and on I_3 := [t_1 + π/ν_1, t_1 + 2 π/ν_1] ∩ [0, T_0]. Observe that I_1 is possibly reduced to a singleton, I_2 is always non trivial, and I_3 is possibly empty. One can check this from the fact that T_0 can be rewritten T_0 = π/ν_1 + (1 + η) πν_2 . Because T_0 ≥π/ν_1, we know that sin(ν_1 t + θ_1) vanishes at least once in [0, T_0]. We first distinguish cases according to whether there are a single one or more than two of these zeroes in this interval. * Assume first t_1 is the only zero in [0, T_0]. Then in view of (<ref>), it lies at distance > (1 + η) π/ν_2 from the boundary of [0, T_0], otherwise t_1 + π/ν_1 or t_1 - π/ν_1 is another zero in [0, T_0]. In particular, the intervals [0, t_1] and [t_1, T_0] have length ≥ (1 + η) π/ν_2. We know that sin(ν_1 t + θ_1) ≥ 0 on one of these intervals, that we denote by Ĩ. Given that Ĩ has length ≥ (1 + η) π/ν_2, it contains a zero of sin(ν_2 t + θ_2), lying at distance ≥π/ν_2η/2 from the boundary of Ĩ. We denote such a zero by t_2. Given that the only zero of sin(ν_1 t + θ_1) in Ĩ is t_1, we deduce that the distance between t_2 and the closest zero t_1' of sin(ν_1 t + θ_1) is at least π/ν_2η/2. Then the inequality sin x ≥2/π x on x ∈ [0, π/2] yields sin(ν_1 t_2 + θ_1) = sin(ν_1(t_2 - t_1') + ν_1 t_1' + θ_1) = sin(ν_1 t_2 - t_1') ≥2 ν_1πt_2 - t_1'≥ν_1ν_2η . The absolute value resulting from the second inequality is due to the fact that we chose t_2 in an interval where sin(ν_1 t + θ_1) ≥ 0, or equivalently, ν_1 t_1 + θ_1 is an even or odd multiple of π according to the sign of t_2 - t_1. We conclude that x_2^t_2 = 0 by definition of t_2 and that we have x_1^t_2≥ A_1 ν_1/ν_2η in virtue of (<ref>), hence the claim (<ref>). * Now we treat the case where t_1 + π/ν_1 also lies in [0, T_0]. In other words, the interval J_1 := [t_1, t_1 + π/ν_1] is contained in [0, T_0]. As we already mentioned, sin(ν_1 t + θ_1) has constant sign on J_1. * If sin(ν_1 t + θ_1) ≥ 0 on J_1, since J_1 has length π/ν_1 > π/ν_2, then t ↦sin(ν_2 t + θ_2) vanishes in J_1, and we can choose a zero t_2 at distance ≥π/2(1/ν_1 - 1/ν_2) from the boundary of J_1. Reproducing the previous argument with the concavity inequality for the sine function, we deduce that sin(ν_1 t_2 + θ_1) ≥2 ν_1π×π2(1ν_1 - 1ν_2) = 1 - ν_1ν_2 . Therefore in this case, there is t_2 ∈ [0, T_0] with x_2^t_2 = 0 and x_1^t_2≥ (1 - ν_1/ν_2) A_1, hence the claim (<ref>). * In the remaining case where sin(ν_1 t + θ_1) ≤ 0 on J_1, we introduce some additional notation. We denote by t_- (resp. t_+) the largest (resp. smallest) zero of sin(ν_2 t + θ_2) which is < t_1 (resp. > t_1 + π/ν_1), given respectively by t_- = πν_2(⌈ν_2 t_1 + θ_2π⌉ - 1 - θ_2π) and t_+ = πν_2(⌊ν_2 (t_1 + π/ν_1) + θ_2π⌋ + 1 - θ_2π) . They both have the property that sin(ν_1 t_± + θ_1) > 0, but we wish to quantify this statement in order to have a uniform lower bound. We observe that we can write t_+ - t_- = πν_2(k + 1 + ⌊ν_2ν_1⌋) , with k ∈{0, 1}. Indeed, from the definition of t_+ and t_- and the properties of the floor and ceiling functions, we see that k = ⌊ν_2 (t_1 + π/ν_1) + θ_2π⌋ - ⌈ν_2 t_1 + θ_2π⌉ + 1 - ⌊ν_2ν_1⌋ is an integer satisfying -1 ≤ν_2ν_1 - 1 - ⌊ν_2ν_1⌋ < k ≤ 1 + ν_2ν_1 - ⌊ν_2ν_1⌋ < 2 , whence k = 0 or 1. In particular, we remark that the distance between t_- and t_+ is always less than T_0. This implies that either t_- or t_+ belongs to [0, T_0]. * Suppose t_- and t_+ both belong to [0, T_0]. 
We have (t_+ - t_-) - πν_1 = πν_2(k + 1 + ⌊ν_2ν_1⌋ - ν_2ν_1) = πν_2 (k + η) ≥πν_2η . Recalling that t_- < t_1 and t_+ > t_1 + π/ν_1, we deduce that either t_1 - t_- ≥π/2 ν_2η or t_+ - (t_1 + π/ν_1) ≥π/2 ν_2η. We call t_2 the zero, among t_- and t_+, that satisfies this property. Then, the concavity inequality for the sine function allows to conclude that x_1^t_2≥ A_1 ν_1/ν_2η again. * If t_- ∉[0, T_0], so that t_+ ∈ [0, T_0], we can estimate the distance of t_+ from t_1 + π/ν_1 and T_0 as follows: t_+ - (t_1 + πν_1) = t_+ - t_- - πν_1 - (t_1 - t_-) = πν_2 (k + η) - (t_1 - t_-) ≥πν_2η - πν_2(1 - k) , where we used the fact that t_1 - t_-≤π/ν_2 by construction in the last inequality; and T_0 - t_+ = T_0 - (t_+ - t_-) - t_- = πν_2(1 - k) - t_- ≥πν_2(1 - k) , where this time we have used that t_- < 0 by assumption. Now observe that t_+ satisfies by definition t_1 + 2 πν_1 - t_+ = πν_1 - (t_+ - (t_1 + πν_1)) ≥πν_1 - πν_2 . Thus, if t_+ - (t_1 + π/ν_1)≥π/2min(η/ν_2, 1/ν_1 - 1/ν_2), then t_2 = t_+ lies at distance ≥π/2min(η/ν_2, 1/ν_1 - 1/ν_2) from the boundary of the interval [t_1 + π/ν_1, t_1 + 2 π/ν_1], to which it belongs. This allows to deduce that x_1^t_2≥ A_1 δ_1 using the inequality sin x ≥2/π x on [0, π/2] again, and x_2^t_2 = 0 by definition of t_2 = t_+. If on the contrary t_+ - (t_1 + π/ν_1)≤π/2min(η/ν_2, 1/ν_1 - 1/ν_2), then from (<ref>), it follows that k = 0, so that t_2 = t_+ + π/ν_2≤ T_0 from (<ref>). Then we have t_1 + 2 πν_1 - t_2 = πν_1 - πν_2 - (t_+ - (t_1 + πν_1)) ≥π2(1ν_1 - 1ν_2) . In particular, t_2 lies again at large enough distance of the boundary of [t_1 + π/ν_1, t_1 + 2 π/ν_1]. We deduce as before that x_1^t_2≥ A_1 δ_1 and x_2^t_2 = 0. * It remains to deal with the case where t_+ ∉[0, T_0], hence t_- ∈ [0, T_0], which is symmetrical. We write t_1 - t_- = - (t_+ - t_1 - πν_1) + t_+ - t_- - πν_1≥ - πν_2 + πν_2 (k + η) = πν_2η - πν_2(1 - k) , t_- = T_0 - (t_+ - t_-) + t_+ - T_0 ≥πν_2 (1 - k) , using respectively that t_1 + π/ν_1 - t_+≤π/ν_2 by construction of t_+, and t_+ > T_0 by assumption. By definition of t_- we have t_- - (t_1 - πν_1) = πν_1 - (t_1 - t_-) ≥πν_1 - πν_2 , so that t_2 = t_- satisfies x_1^t_2≥ A_1 δ_1 and x_2^t_2 = 0 provided t_1 - t_-≥π/2min(η/ν_2, 1/ν_1 - 1/ν_2). Otherwise, k = 0 in virtue of (<ref>) so that (<ref>) ensures that t_2 = t_- - π/ν_2≥ 0. Then we check that t_2 - (t_1 - πν_1) = πν_1 - πν_2 - (t_1 - t_-) ≥π2(1ν_1 - 1ν_2) , and we conclude similarly to the previous case. The discussion above shows that (<ref>) is true. In particular, (x_1^t_2, x_2^t_2) is in the cone C_/2^1. Using that the sine function is 1-Lipschitz, we know that for t in a neighborhood of 0, it holds *x_2^t_2 + t≤ A_2 ν_2 t and x_1^t_2 + t≥ A_1 (δ_1 - ν_1 t) . So for t small enough, (x_1^t_2 + t, x_2^t_2 + t) will remain in the cone C_/2^1. Quantitatively, as soon as t fulfills the condition t < δ_1/ν_11 + ν_2/ν_1A_2/A_1/tan(/4) , we compute that x_1^t_2 + t > A_1 δ_1 ν_2/ν_1A_2/A_1/tan(/4)1 + ν_2/ν_1A_2/A_1/tan(/4) > A_2 ν_2tan(/4)t≥x_2^t_2 + ttan(/4) . This means that for t satisfying (<ref>), the point (x_1^t_2 + t, x_2^t_2 + t) belongs indeed to the cone C_/2^1. In the case where t_2 = 0 or t_2 = T_0, we may restrict ourselves to times t satisfying t ≥ 0 or t ≤ 0 in addition to (<ref>), so that in the end, we obtain ∫_0^T_0_C_/2^1(x_1^t, x_2^t) t ≥min( T_0, δ_1/ν_11 + ν_2/ν_1A_2/A_1/tan(/4)) . *Step 5 ­– Upper bound on the optimal observation time. Now that we have (<ref>) and (<ref>) at hand, we can obtain a lower bound independent of the values of A_1 and A_2. 
If on the one hand A_1/A_2≤tan(/4)/2, then (<ref>) yields ∫_0^T_0_(x_1^t, x_2^t) ∈ω(/2) t ≥π2 ν_2 , while on the other hand, if A_2/A_1≤2/tan(/4), then (<ref>) leads to ∫_0^T_0_(x_1^t, x_2^t) ∈ω(/2) t ≥min( T_0, δ_1/ν_11 + ν_2/ν_12/tan^2(/4)) ≥^216min( T_0, δ_1ν_1 + 2 ν_2) (to get the second inequality, use that /4≤tan(/4) ≤ 1 since ≤π/2 by assumption). On the whole, we have ∫_0^T_0_(x_1^t, x_2^t) ∈ω(/2) t ≥^232min( πν_2, δ_1ν_1 + ν_2) = c ^2 , and setting T_ = T_0 - c/2^2, we deduce ∫_0^T__(x_1^t, x_2^t) ∈ω(/2) t ≥∫_0^T_0_(x_1^t, x_2^t) ∈ω(/2) t - c2^2 ≥c2^2 . Therefore the dynamical condition (<ref>) holds in time T_. Setting T = T_0 - c/4^2 > T_, we use Theorem <ref> to conclude that observability is true on [0, T] from ω(/2)_R ∖ K, for some R > 0 and for any compact set K. We can take K to be a ball with radius large enough so that ω(/2)_R ∖ K ⊂ω() (this can be justified by an argument similar to Lemma <ref>). We conclude that observability holds from ω() in time T. This proves the upper bound in (<ref>). *Step 6 ­– Lower bound on the optimal observation time. Fix ∈ (0, π/4). We recall that ν_2 > ν_1. Our objective is to exhibit trajectories (x_1^t, x_2^t) that do not meet the set ω(2 ). They typically look like the one shown in Figure <ref>. Take δ > 0 a small parameter to be chosen later. These trajectories we look for are of the form x_1^t = A_1 sin(πν_1ν_2(1 - δ) - ν_1 t) and x_2^t = A_2 s sin(πδ + ν_2 t) , with s ∈{+1, -1}, and A_1, A_2 > 0 to be tuned properly later on as well. Let us introduce three remarkable times t_0, t_1 and t_2: provided δ < 1/2, the first zeroes of x_1^t and x_2^t in the interval [0, T_0] coincide and are given by t_0 = π/ν_2(1 - δ) . The next zero of x_1^t is t_1 = t_0 + πν_1 . As for x_2^t, its first zero that is strictly larger than t_1 is given by t_2 = πν_2(1 + ⌊δ + ν_2π t_1 ⌋ - δ) = πν_2(1 + ⌊ 1 + ν_2ν_1⌋ - δ) = T_0 - πν_2δ . Notice that t_2 ≤ T_0. By construction, the interval [t_1, t_2] has length t_2 - t_1 ∈ (0, π/ν_2], and x_2^t has constant sign on this interval. We choose the sign s involved in the definition (<ref>) of x_2^t in such a way that x_2^t ≤ 0 on [t_1, t_2]. In particular, the projected trajectory (x_1^t, x_2^t) cannot cross C_2 ^2 in the time interval [t_1, t_2]. Likewise, since x_1^0 > 0, it follows that x_1^t ≤ 0 on [t_0, t_1], by definition of t_0, t_1. In particular, the curve (x_1^t, x_2^t) cannot be in C_2 ^1 for t ∈ [t_0, t_1]. Set T = t_2 - π/ν_2δ. In each interval [0, t_0], [t_0, t_1] and [t_1, T], we want to exclude the possibility for the trajectory to be in C_2 ^1 or C_2 ^2 by suitably choosing the parameters δ, A_1, A_2. To achieve this goal, we are interested in estimating from above and from below x_1^t and x_2^t in these intervals. We first deal with x_1^t. Recalling that the sine function is 1-Lipschitz, we know that *x_1^t≤ A_1 ν_1 min(*t - t_0, *t - t_1) , ∀ t ∈ . We obtain lower estimates by roughly bounding from below sin x on [0, π] by the “triangle" function 2/πmin(x, π - x). For t ∈ [0, t_0], that leads to *x_1^t ≥ A_1 2πmin(ν_1 *t_0 - t, *π - ν_1 (t_0 - t)) ≥ A_1 2 ν_1πmin(*t_0 - t, *πν_1 - πν_2 + δπν_2 + t) ≥ A_1 2 ν_1πmin(*t_0 - t, πν_1 - πν_2) , for t ∈ [t_0, t_1] we obtain *x_1^t≥ A_1 2 ν_1πmin(*t - t_0, *t - t_1) , while for t ∈ [t_1, T], we obtain *x_1^t ≥ A_1 2πmin(ν_1 *t - t_1, *π - ν_1 (t - t_1)) ≥ A_1 2 ν_1πmin(*t - t_1, *πν_1 + t_1 - t) ≥ A_1 2 ν_1πmin(*t_1 - t, πν_1 + t_1 - T) . The last inequality rests on the fact that π/ν_1 + t_1 ≥ T. 
More quantitatively, we have πν_2 + t_1 = T_0 + πν_2( 1 + ν_2ν_1 + 1 - δ - 2 - ⌊ν_2ν_1⌋) = T + πν_2δ + πν_2( ν_2ν_1 - ⌊ν_2ν_1⌋) . In particular, it holds πν_1 + t_1 - T = π(1ν_1 - 1ν_2) + (πν_2 + t_1 - T) ≥π(1ν_1 - 1ν_2) + πν_2δ , which leads to *x_1^t≥ A_1 2 ν_1πmin(*t_1 - t, π(1ν_1 - 1ν_2)) , ∀ t ∈ [t_1, T] . We obtain a similar estimate for x_2^t: it vanishes at t_0 and t_2, so using again that the sine function is 1-Lipschitz we get *x_2^t≤ A_2 ν_2 min(*t_0 - t, *t_2 - t) , ∀ t ∈ . Near, t_1, we want an accurate upper bound using the fact that x_2^t_1≤ 0 (recall that we chose the sign s in (<ref>) so that this is true): for any t ∈, we have x_2^t ≤ x_2^t - x_2^t_1≤ A_2 ν_2 t - t_1 . As for a lower bound, we obtain for t ∈ [0, t_0]: *x_2^t ≥ A_2 2πmin(ν_2 *t_0 - t, *π - ν_2 (t_0 - t)) ≥ A_2 2 ν_2πmin(*t_0 - t, δπν_2 + t) ≥ A_2 2 ν_2πmin(*t_0 - t, δπν_2) , and for t ∈ [t_1, T]: *x_2^t ≥ A_2 2πmin(*π - ν_2 (t_2 - t), ν_2 *t_2 - t) ≥ A_2 2 ν_2πmin(*πν_2 - (t_2 - t), *t_2 - t) ≥ A_2 2 ν_2πmin(t - t_1, πν_2δ) . This time, the last inequality holds true since on the one hand, t_2 - t ≥ t_2 - T = πν_2δ, and on the other hand, thanks to (<ref>) and (<ref>), we check that for any t ∈ [t_1, T]: πν_2 - (t_2 - t) = (t - t_1) + (πν_2 + t_1) - t_2 = (t - t_1) + πν_2(ν_2ν_1 - ⌊ν_2ν_1⌋) ≥ t - t_1 . Now we show that the two conditions 2 ν_1ν_2 ≤A_2A_1δ , 2 ν_2ν_1 ≤A_1A_2min(1, ν_2ν_1 - 1 ) , imply that the curve (x_1^t, x_2^t) does not cross the set ω(2 ) in the interval [0, T]. We study the three intervals [0, t_0], [t_0, t_1] and [t_1, T] separately. * Let t ∈ [0, t_0]. On the one hand, the condition (<ref>) implies that A_1 ν_1 t - t_04 π≤ A_2 2 ν_2πmin(t - t_0, δπν_2) (recall that t_0 ≤π/ν_2 and δ≤ 1/2). Using that tan≤/π/4 for ∈ [0, π/4], we obtain A_1 ν_1 t - t_0tan≤ A_2 2 ν_2πmin(t - t_0, δπν_2) , which leads to tan() x_1^t≤x_2^t in virtue of (<ref>) and (<ref>). Therefore (x_1^t, x_2^t) ∉C_2 ^1. On the other hand, the condition (<ref>) implies that A_2 ν_2 t_0 - t4 π≤ A_1 2 ν_1πmin(*t_0 - t, πν_1 - πν_2) (recall again that t_0 ≤π/ν_2). Using that tan≤/π/4 for ∈ [0, π/4], we obtain A_2 ν_2 t_0 - ttan≤ A_1 2 ν_1πmin(*t_0 - t, πν_1 - πν_2) , which leads to tan() x_2^t≤x_1^t in virtue of (<ref>) and (<ref>). Therefore (x_1^t, x_2^t) ∉C_2 ^2. * On [t_0, t_1], the situation is slightly simpler because we already know that x_1^t ≤ 0 on this interval, which means that the trajectory does not cross C_2 ^1 by construction. In addition, condition (<ref>) implies that A_2 ν_2 min(*t_0 - t, *t_1 - t) 4 π≤ A_1 2 ν_1πmin(*t - t_0, *t_1 - t) . Then (<ref>), (<ref>) and (<ref>) yield tan() x_2^t ≤x_1^t, hence (x_1^t, x_2^t) ∉C_2 ^2. * We finally consider t ∈ [t_1, T]. Notice that by construction, it holds x_2^t ≤ 0 on [t_1, T], so that the trajectory does not enter C_2 ^2. To disprove the fact that it meets C_2 ^1, we check that the condition (<ref>) implies A_1 ν_1 *t - t_14 π≤ A_2 2 ν_2πmin(*t - t_1, πν_2δ) , owing to the fact that π/ν_2≥ T - t_1 ≥ t - t_1 (this can be deduced from (<ref>)). Then (<ref>) and (<ref>) lead to tan() x_1^t≤x_2^t, which shows indeed that (x_1^t, x_2^t) ∉C_2 ^1. To sum up, in order to ensure that t ↦ (x_1^t, x_2^t) does not cross ω(2 ), it suffices to choose A_1/ A_2 properly, as well as δ, so that (<ref>) and (<ref>) are fulfilled. If we set δ = 4 ^2min(1, ν_2/ν_1 - 1) and A_1A_2 = 2 ν_2/ν_1min(1, ν_2/ν_1 - 1) , we can check that these two conditions are indeed satisfied. 
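A crude numerical scan gives a feeling for how well curves of the form (<ref>) can avoid the two cones; the frequencies ν_1 = 1, ν_2 = 2 and the parameter grids in the sketch below are illustrative choices of ours, not constants from the proof. For two half-angles, the scan reports the longest initial time segment on which some curve of this family stays outside both cones; consistently with the time window [0, T] used above, this segment approaches the full period as the half-angle shrinks.

```python
import numpy as np
from itertools import product

nu1, nu2 = 1.0, 2.0                        # irreducible ratio p/q = 2/1
T0 = 2 * np.pi / nu1                       # period of the Hamiltonian flow
t = np.linspace(0.0, T0, 20_000)

def cone_free_time(alpha, delta, ratio, s):
    """First entry time into the union of the two cones of half-angle alpha,
    for a curve (x1, x2) of the sinusoidal form (<ref>)."""
    x1 = ratio * np.sin(np.pi * (nu1 / nu2) * (1.0 - delta) - nu1 * t)
    x2 = s * np.sin(np.pi * delta + nu2 * t)
    inside = (np.abs(x2) < np.tan(alpha) * x1) | (np.abs(x1) < np.tan(alpha) * x2)
    hit = np.flatnonzero(inside)
    return T0 if hit.size == 0 else t[hit[0]]

for alpha in (0.3, 0.1):                   # half-angle, playing the role of 2*epsilon
    best = max(cone_free_time(alpha, d, r, s)
               for d, r, s in product(np.linspace(0.01, 0.4, 30),
                                      np.linspace(0.2, 2.0, 30), (1, -1)))
    print(f"half-angle {alpha}: longest cone-free initial segment ~ {best:.3f} (T0 = {T0:.3f})")
```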
The conclusion is as follows: we consider a sequence of initial data of the form ρ_n = (A_1, nsin(πν_1/ν_2 (1 - δ)), A_2, n s sin(πδ)) with A_1, n/A_2, n as in (<ref>) and A_1, n, A_2, n→∞ as n →∞. The x component of the trajectory t ↦ϕ^t(ρ_n) is then of the same form as the projected trajectory (x_1^t, x_2^t) that we studied. Given that these trajectories do not cross ω(2 ), we conclude that the observability condition of Theorem <ref> is not true in time T, namely K_p_0^∞(ω(2 ), T) = 0 . Yet for any R > 0, as we have already seen earlier, ω()_R is contained in ω(2 ) modulo a compact set. Thus for any R > 0, there exists a compact set K(R) ⊂^d such that K_p_0^∞(ω()_R ∖ K(R), T) ≤K_p_0^∞(ω(2 ), T) = 0 . We conclude thanks to the necessary condition in Theorem <ref> that observability cannot hold in ω() in time T. It remains to see that by definition (recall (<ref>) and (<ref>)), we have T = t_2 - πν_2δ = T_0 - 2 πν_2δ = T_0 - C ^2 . This ends the proof of the lower bound of the optimal observation time. §.§ Proof of Proposition <ref> The aim of this proposition is to study observability from measurable conical sets for the (exact) isotropic harmonic oscillator. We first simplify the situation owing to periodicity properties of the isotropic quantum harmonic oscillator. *Step 1 ­– Upper bound of the optimal observation time. First recall that there exists a complex number z of modulus 1 such that ^π/ν P u = z u(- ) , ∀ u ∈ L^2(^d) . See for instance[The property (<ref>) can be derived from the fact that the spectrum of P is made of half integer multiples of ν, together with parity properties of eigenfunctions.] <cit.> or <cit.>. In particular, the propagator ^- t P is 2 π/ν-periodic modulo multiplication by z^2. This enables us to show that observability holds in some time T if and only if it holds in time 2 π/ν: assume the Schrödinger equation is observable from ω⊂^d in some time T; let k be an integer such that k 2 π/ν≥ T. The aforementioned 2 π/ν-periodicity of the harmonic oscillator leads to *u_L^2(^d)^2 ≤ C ∫_0^T *^- t P u_L^2(ω)^2 t ≤ C ∫_0^k 2 π/ν*^- t P u_L^2(ω)^2 t = C k ∫_0^2 π/ν*^- t P u_L^2(ω)^2 t , for any u ∈ L^2 so that observability holds in ω in time 2 π/ν. In particular, the optimal observation time is always ≤2 π/ν. We can further reduce the observation time by (2 C k)^-1 (see Lemma <ref>), so that the optimal observation time is in fact T_⋆ < 2 π/ν. Incidentally, the property (<ref>) yields ∫_0^π/ν*^- t P u_L^2(ω∪ - ω)^2 t ≤∫_0^2 π/ν*^- t P u_L^2(ω)^2 t ≤ 2 ∫_0^π/ν*^- t P u_L^2(ω∪ - ω)^2 t , which will be useful later on. *Step 2 ­– Necessary condition. Assume observability holds from ω = ω(Σ) in some time T. Let k be a positive integer such that k 2 π/ν≥ T. Using (<ref>) and (<ref>), we obtain ∀ u ∈ L^2(^d) , *u_L^2(^d)^2 ≤ C ∫_0^T *^- t P u_L^2(ω)^2 t ≤ 2 C k ∫_0^π/ν*^- t P u_L^2(ω∪ - ω)^2 t . We choose for u a particular coherent state. Following Combescure and Robert <cit.>, for any ρ_0 = (x_0, ξ_0), we set φ_ρ_0(x) = (νπ)^d/4^- /2ξ_0 · x_0 + ξ_0 · xexp(- ν2*x - x_0^2) . Then it holds ^- t Pφ_ρ_0 = ^- /2 t ν dφ_ρ_t , where ρ_t = ϕ^t(ρ_0) is the evolution of ρ_0 in phase space along the Hamiltonian flow associated with p(x, ξ) = 1/2(ν^2 x^2 + ξ^2), that is to say ρ_t = ( cos(ν t) x_0 + sin(ν t) ξ_0ν, - νsin(ν t) x_0 + cos(ν t) ξ_0 ) . Equation (<ref>) can be checked by observing that the derivative of both sides agree, or by applying <cit.>. 
Selecting an initial datum of the form ρ_0 = (0, ξ_0) with a non-zero ξ_0, the observability inequality implies 1 = *φ_ρ_0_L^2(^d)^2 ≤ C ∫_0^T *φ_ρ_t_L^2(ω)^2 t ≤ 2 k C ∫_0^π/ν*φ_ρ_t_L^2(ω∪ - ω)^2 t = 2 k C ( πν)^d/2∫_0^π/ν∫_ω∪ - ω*exp( - ν2*x - sin(ν t) ξ_0ν^2 )^2 x t = 4 k Cν( πν)^d/2∫_0^π/2∫_ω∪ - ωexp( - ν*x - sin(t) ξ_0ν^2 ) x t . We used a change of variables in the integral over t and the fact that sin(x) = sin(π - x) to obtain the last equality. Next we truncate the integrals in t and in x using respectively a small parameter δ > 0 and a large parameter R > 0: ∫_0^π/2( πν)^d/2∫_ω∪ - ωexp( - ν*x - sin(t) ξ_0ν^2 ) x t ≤πδ + ∫_π/2δ^π/2 (1 - δ)( πν)^d/2( ∫_ω∪ - ωexp( - ν*x - sin(t) ξ_0ν^2 ) _B_R(sin(t) ξ_0/ν)(x) x. .+ ∫_^d ∖ B_R(0)^- νx^2 x ) t . The rightmost integral is controlled by c/R for some constant c > 0. Therefore it holds ∫_0^π/2( πν)^d/2∫_ω∪ - ωexp( - ν*x - sin(t) ξ_0ν^2 ) x t ≤πδ + cR + ( πν)^d/2∫_π/2δ^π/2 (1 - δ)*(ω∪ - ω) ∩ B_R(sin(t) ξ_0ν) t . We get rid of the sine in the right-hand side by noticing that cos t ≥ 1 - 2/π t ≥δ for any t ∈ [π/2δ, π/2 (1 - δ)], and changing variables: ∫_π/2δ^π/2 (1 - δ)*(ω∪ - ω) ∩ B_R(sin(t) ξ_0ν) t ≤∫_π/2δ^π/2 (1 - δ)*(ω∪ - ω) ∩ B_R(sin(t) ξ_0ν)cos tδ t = 1δ∫_sin(π/2δ)^sin(π/2 (1 - δ))*(ω∪ - ω) ∩ B_R(s ξ_0ν) s . Using that sin x ≥2/π x on [0, π/2], we finally deduce that ∫_0^π/2( πν)^d/2∫_ω∪ - ωexp( - ν*x - sin(t) ξ_0ν^2 ) x t ≤πδ + cR + 1δ( πν)^d/2∫_δ^1 *(ω∪ - ω) ∩ B_R(s ξ_0ν) s . We plug this into (<ref>) to obtain 12 = 12*φ_ρ_0_L^2(^d)^2 ≤ 4 k Cδν( πν)^d/2∫_δ^1 *(ω∪ - ω) ∩ B_R(s ξ_0ν) s , where we absorbed the remainder terms of (<ref>) in the left-hand side by choosing δ sufficiently small and R sufficiently large. We now use a scaling argument in the right-hand side, which is possible since the set ω∪ - ω is conical: for any s ∈ [δ, 1], writing θ_0 = ξ_0/ξ_0 and r = ν R/δξ_0 , we have *(ω∪ - ω) ∩ B_R(s ξ_0ν) = (s ξ_0/ν)^d *(ω∪ - ω) ∩ B_ν R/s ξ_0(θ_0) ≤(R/δ)^d r^-d*(ω∪ - ω) ∩ B_r(θ_0) . After integrating over the s variable, the estimate (<ref>) becomes 1 = *φ_ρ_0_L^2(^d)^2 ≤ 8 k Cδν( πν)^d/2(R/δ)^d r^-d*(ω∪ - ω) ∩ B_r(θ_0) . We now reformulate the right-hand side in terms of the lower density Θ_Σ^- defined in (<ref>). To do so, we observe that the triangle inequality yields for r ∈ (0, 1): ∀ x ∈ B_r(θ_0) , x - 1≤x - θ_0 and *xx - θ_0 = *x - θ_0x + θ_0x(1 - x)≤2 r1 - r , which in turn leads to B_r(θ_0) ⊂x ∈^d1 - r ≤x≤ 1 + r and *xx - θ_0≤2 r1 - r , r ∈ (0, 1) . Recall that if ξ_0 is large enough, then (<ref>) implies r ∈ (0, 1). We conclude by a spherical change of coordinates that *(ω∪ - ω) ∩ B_r(θ_0) ≤∫_1 - r^1 + r∫_^d - 1_θ - θ_0≤2 r/1 - r_ω∪ - ω(r̃θ) c_d r̃ ^d - 1σ(θ) r̃ ≤∫_1 - r^1 + r∫_^d - 1∩ B_2 r/1 - r(θ_0)_Σ(θ) c_d 2^d - 1σ(θ) r̃ = c_d 2^d - 1× 2 r σ( Σ∩ B_2 r/1 - r(θ_0) ) . In addition, one has σ(B_r(θ_0)) ≤ c_d' r^d - 1 . (In the above estimates, c_d and c_d' are constants depending only on the dimension.) Combining (<ref>), (<ref>) and (<ref>), we obtain 1 = *φ_ρ_0_L^2(^d)^2 ≤ c_d c_d' 2^d + 3 k Cδν( πν)^d/2(R/δ)^d ×(21 - r)^d-11c_d' (2 r/1 - r)^d - 1σ( Σ∩ B_2 r/1 - r(θ_0) ) ≤ c_d c_d' 2^d + 3 k Cδν( πν)^d/2(R/δ)^d (21 - r)^d σ( Σ∩ B_2 r/1 - r(θ_0) )σ( B_2 r/1 - r(θ_0) ) . Recalling that r behaves as 1/ξ_0, it remains to let ξ_0 →∞ with ξ_0/ξ_0 = θ_0 arbitrary, to deduce that 1 ≤ c_d c_d' 2^d + 3 k Cδν( πν)^d/2(2R/δ)^d Θ_Σ^-(θ_0) , ∀θ_0 ∈^d - 1 . This concludes the proof of the necessary condition. *Step 3 ­– Sufficient condition. Write for short ω = ω(Σ) again. 
The fact that Σ= Σ∪ - Σ has full measure, namely σ(^d - 1∖Σ) = 0, implies that ^d ∖ (ω∪ - ω) is Lebesgue negligible (recall the definition of ω(Σ) in (<ref>)). Therefore the left-hand side of (<ref>) with k = 1 yields ∫_0^2 π/ν*^- t P u_L^2(ω)^2 t ≥∫_0^π/ν*^- t P u_L^2(ω∪ - ω)^2 t = ∫_0^π/ν*^- t P u_L^2(^d)^2 t = π/ν*u_L^2(^d)^2 , where we used the fact that the propagator is an isometry. This completes the proof.  § PROOFS OF OBSERVABILITY RESULTS FROM SPHERICAL SETS In this section, we give proofs of the results presented in Subsection <ref>, which concern observation sets that are spherical in the sense of (<ref>). Propositions <ref> and <ref> are proved in Subsections <ref> and <ref> respectively. §.§ Proof of Proposition <ref> The rotation S_θ of angle θ reads S_θ y = (cosθ y_1 + sinθ y_2, - sinθ y_1 + cosθ y_2, y_3, …, y_d) , y = (y_1, y_2, …, y_d) ∈^d . In the sequel, we set L_0 to be the two dimensional plane spanned by the vectors e_1 = M(1, 0, 0, …, 0) and e_2 = M(0, 1, 0, 0, … 0) , The two linear maps Π_L_0 = 12 M (𝕀 - S_π) M^-1 and Π_L_0^⊥ = 12 M (𝕀 + S_π) M^-1 are the orthogonal projectors on L_0 and L_0^⊥ respectively, since M is orthogonal. With the notation of assumption <ref>, we can write, with a slight abuse of notation, V(x_0) = Ṽ_0(*M^-1 x_0) , ∀ x_0 ∈ L_0 . Let us investigate the properties of the gradient of V on L_0. Let x_0 ∈ L_0. Then it holds ∇ V(x_0) ∈ L_0 and ∃ c = c(x_0) ≥ 0 : ∇ V(x_0) = c x_0 . Assumptions <ref> and <ref> (with θ = π) yield for any x ∈^d: - ∇ V(-x) = ∇ V(x) and M S_-π M^-1∇ V(M S_π M^-1 x) = ∇ V(x) . Yet since x_0 ∈ L_0, we have Π_L_0 x_0 = x_0 so that x_0 = - M S_π M^-1 x_0 , and noticing that S_π = S_-π, we obtain combining the two equations (<ref>): ∇ V(x_0) = - ∇ V(- x_0) = - M S_π M^-1∇ V(- M S_π M^-1 x_0) = - M S_π M^-1∇ V(x_0) . That means exactly that Π_L_0^⊥∇ V(x_0) = 0, or in other words, ∇ V(x_0) ∈ L_0. Next we prove that ∇ V(x_0) is collinear with x_0. We first get rid of the case x_0 = 0: the first equation in (<ref>) implies that ∇ V(0) = 0. From now on, we assume that x_0 ≠ 0. We compute θ M S_θ M^-1 = θ(M S_θ M^-1Π_L_0 + M S_θ M^-1Π_L_0^⊥) = M S_θ + π/2 M^-1Π_L_0 . This is true because M S_θ M^-1Π_L_0^⊥ is independent of θ (M S_θ M^-1 is the identity in L_0^⊥). Therefore, differentiating the equality V(x) = V(M S_θ M^-1 x) at θ = 0, we obtain 0 = θ V(x_0)_|θ = 0 = θ V(M S_θ M^-1 x_0)_|θ = 0 = ∇ V(x_0) · M S_π/2 M^-1Π_L_0 x_0 = ∇ V(x_0) · M S_π/2 M^-1 x_0 . This means that ∇ V(x_0) is orthogonal to M S_π/2 M^-1 x_0. Yet the plane L_0 is invariant by M S_θ M^-1 and x_0 ⊥ M S_π/2 M^-1 x_0. Since ∇ V(x_0) ∈ L_0 and L_0 has dimension 2, we deduce that ∇ V(x_0) = c x_0 for some c ∈. We claim that c ≥ 0 as a consequence of assumption <ref> that Ṽ_0 is non-decreasing. Indeed for t > 0 close to zero, using (<ref>), the Taylor formula at order one yields 0 ≤Ṽ_0((1 + t) *M^-1 x_0) - Ṽ_0(*M^-1 x_0) = V(x_0 + t x_0) - V(x_0) = t ∇ V(x_0) · x_0 + o(t) . Dividing by t > 0, we find that ∇ V(x_0) · x_0 = c x_0^2 ≥ 0. Thus c = ∇ V(x_0) ·x_0/x_0^2 depends only on x_0 since V restricted to L_0 is radial, and the proof is complete. This lemma allows us to exhibit periodic circular orbits of the Hamiltonian flow of p. For any x_0 ∈ L_0, denoting by c the scalar such that ∇ V(x_0) = c x_0, the phase space curve x^t = M S_√(c) t M^-1 x_0 , ξ^t = √(c) M S_√(c) t + π/2 M^-1 x_0 is the trajectory of the Hamiltonian flow with initial data (x_0, √(c) M S_π/2 M^-1 x_0). 
This follows from uniqueness in the Picard-Lindelöf Theorem, since the above curve solves on the one hand: t x^t = √(c) M S_√(c) t + π/2 M^-1Π_L_0 x_0 = ξ^t , and on the other hand, in view of (<ref>) and observing that x^t = x_0 for any t, tξ^t = c M S_√(c) t + π M^-1Π_L_0 x_0 = c M S_√(c) t M^-1(Π_L_0^⊥ - Π_L_0) x_0 = - c x^t = - ∇ V(x^t) . To conclude, we argue as follows: since by assumption observability holds from ω(I) in time T > 0, the necessary condition of Theorem <ref> implies that there exists R > 0 such that ∃ϵ > 0, ∃ A > 0 : ∀ρ≥ A , ∫_0^T _ω(I)_R ×^d(ϕ^t(ρ)) t ≥ϵ . Let x_0 ∈ L_0 be such that x_0≥ A. We consider the Hamiltonian trajectory issued from the point (x_0, √(c(x_0)) M^-1 S_π/2 M x_0) constructed in (<ref>). Then x^t is constant over time, which implies that ϵ≤∫_0^T _ω(I)_R ×^d(ϕ^t(ρ)) t = ∫_0^T _ω(I)_R(x^t) t = T _I_R(x_0) , whence x_0∈ I_R. We deduce that ∀ s ∈_+ , I_R ∩ [s, s + A] ≠∅ , which implies the desired result (<ref>) with r = A + 2 R. §.§ Proof of Proposition <ref> Firstly we assume that ν_2/ν_1 is rational: we write it as an irreducible fraction p/q. The number T = 2 π/ν_2 p = 2 π/ν_1 q is the period of the Hamiltonian flow associated with 1/2(x · A x + ξ^2). Without loss of generality, we can assume that A is diagonal, and that the eigenvectors associated with ν_1^2 and ν_2^2 are the vectors (1, 0) and (0, 1) of the canonical basis of ^2. As mentioned in (<ref>), we seek to compute a sharp uniform upper bound of the ratio between the minimal and the maximal value of the norm of projected trajectories x^t on the time interval [0, T]. More explicitly, we are interested in the quantity Λ_0 = sup_ρ_0 ∈^4 ∖{0}min_t ∈ [0, T](π∘ϕ^t)(ρ_0)max_t ∈ [0, T](π∘ϕ^t)(ρ_0) , where we recall that π : (x, ξ) ↦ x. We start with two remarks, related to explicit expressions of the Hamiltonian flow. First we can replace the supremum on ^4 by a maximum on a compact set parametrizing trajectories, e.g. the unit sphere ^3, because the Hamiltonian flow is homogeneous of degree 1, that is ϕ^t(λρ_0) = λϕ^t(ρ_0) for any scalar λ∈ (it fact ϕ^t is a linear map for all t). Second, since x^t^2 = x_1^t^2 + x_2^t^2, it will be easier to compute Λ_0^2. In view of these remarks, and writing the Hamiltonian trajectories in action-angle coordinates: x_1^t = A_1 sin(ν_1 t + θ_1) and x_2^t = A_2 sin(ν_2 t + θ_2) , we want to study Λ_0^2 = sup_A_1^2 + A_2^2 = 1 θ_1, θ_2 ∈min_t ∈ [0, T](A_1^2 sin^2(ν_1 t + θ_1) + A_2^2 sin^2(ν_2 t + θ_2))max_t ∈ [0, T](A_1^2 sin^2(ν_1 t + θ_1) + A_2^2 sin^2(ν_2 t + θ_2)) = sup_λ∈ [0, 1] θ_1, θ_2 ∈min_t_1 ∈ [0, T]((1 - λ) sin^2(ν_1 t_1 + θ_1) + λsin^2(ν_2 t_1 + θ_2))max_t_2 ∈ [0, T]((1 - λ) sin^2(ν_1 t_2 + θ_1) + λsin^2(ν_2 t_2 + θ_2)) = sup_λ∈ [0, 1] θ_1, θ_2 ∈min_t_1 ∈ [0, T]((1 - λ) sin^2(ν_1 t_1 + θ_1) + λsin^2(ν_2 t_1 + θ_2))1 - min_t_2 ∈ [0, T]((1 - λ) cos^2(ν_1 t_2 + θ_1) + λcos^2(ν_2 t_2 + θ_2)) . In view of the periodicity in the variables θ_1 and θ_2, the supremum in the variables λ, θ_1, θ_2 is in fact a supremum over (λ, θ_1, θ_2) ∈ [0, 1] × [0, 2 π] × [0, 2 π]. A compactness and continuity argument shows that this supremum is attained for some triple (λ, θ_1, θ_2). Furthermore, one can check that max_λ, θ_1, θ_2 = max_θ_1, θ_2max_λ. Thus we should simplify the problem first by considering fixed values for θ_1 and θ_2, and maximizing with respect to these variables ultimately. 
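Before carrying out the exact optimization, one can estimate Λ_0 by brute force; the small sketch below does this for ν_1 = 1, ν_2 = 2 (so p/q = 2), with grid sizes chosen arbitrarily by us. By time translation one may fix θ_2 = 0, which the code exploits; the value found should be close to 1/2, the closed form sin((π/2)/(p+q)) obtained in Step 2 below.

```python
import numpy as np

nu1, nu2 = 1.0, 2.0                 # p/q = 2/1; closed form of Step 2: sin(pi/6) = 1/2
t = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)     # one period of the flow

best = 0.0
for lam in np.linspace(0.0, 1.0, 101):
    for th1 in np.linspace(0.0, np.pi, 200):               # theta_2 = 0 w.l.o.g. (time shift)
        f = (1 - lam) * np.sin(nu1 * t + th1) ** 2 + lam * np.sin(nu2 * t) ** 2
        best = max(best, f.min() / f.max())
print("brute-force estimate of Lambda_0:", np.sqrt(best))  # expected close to 0.5
```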
Therefore our objective is to compute Λ_θ_1, θ_2^2 = max_λ∈ [0, 1]min_t_1 ∈ [0, T]((1 - λ) sin^2(ν_1 t_1 + θ_1) + λsin^2(ν_2 t_1 + θ_2))1 - min_t_2 ∈ [0, T]((1 - λ) cos^2(ν_1 t_2 + θ_1) + λcos^2(ν_2 t_2 + θ_2)) . We can further simplify this by rewriting in more pleasant terms the minima in the numerator and the denominator. It relies on the following fact. *Step 1 ­– Simplification of the optimization problem. The minimum we want to estimate involves a sum of two squared sine functions that oscillate at different frequencies. Intuitively, it looks reasonable that the minimum of such a sum is attained between two zeroes that achieve the minimal distance between a zero of the first sine function, and a zero of the second. This is a motivation to introduce d_0 = d_0(θ_1, θ_2) = 4 p qTmin_sin(ν_j t_j + θ_j) = 0 j = 1, 2t_1 - t_2 . It is indeed a minimum, and not only an infimum, thanks to the rational ratio between ν_1 and ν_2, or equivalently, thanks to the periodicity of the Hamiltonian flow. We can give an explicit expression of this quantity reasoning as follows: the numbers t_1 and t_2 are such that sin(ν_j t_j + θ_j) = 0, j = 1, 2, if and only if there exist two integers k_1 and k_2 such that ν_j t_j + θ_j = k_j π . Therefore it holds *t_1 - t_2 = *π(k_1ν_1 - k_2ν_2) - (θ_1ν_1 - θ_2ν_2) = T2 p q*(k_1 p - k_2 q) - (p θ_1π - q θ_2π) . Yet since p and q are coprime integers, it follows from Bézout's identity that k_1 p - k_2 q can take any value in when we vary k_1 and k_2. We deduce that d_0 = 2 (p θ_1π - q θ_2π, ) = (p θ_1π/2 - q θ_2π/2, 2 ) . Incidentally, this expression implies that d_0 ∈ [0, 1]. Now we claim that min_t ∈ [0, T]((1 - λ) sin^2(ν_1 t + θ_1) + λsin^2(ν_2 t + θ_2)) = min_s ∈ [0, 1]((1 - λ) sin^2(π/2p s d_0) + λsin^2(π/2q (1 - s) d_0)) . It amounts to prove that the minimum in t in the left-hand side of (<ref>) is attained between two zeros t_1, t_2 of sin(ν_1 t + θ_1) and sin(ν_2 t + θ_2) such that t_2 - t_1 = T/4 p q d_0. We first show that the minimum in s (in the right-hand side) is less than the minimum in t (in the left-hand side). To do so, we pick t_0 ∈ [0, T] that attains the minimum in t. We choose t_j two zeroes of sin(ν_j t + θ_j) respectively, j = 1, 2, that are the closest possible to t_0. Due to periodicity, they verify t_j - t_0≤π2 ν_j. That t_0 attains the minimum means that it is a critical point of the function F : t ⟼ (1 - λ) sin^2(ν_1 t + θ_1) + λsin^2(ν_2 t + θ_2) = (1 - λ) sin^2(ν_1 (t - t_1)) + λsin^2(ν_2 (t - t_2)) . Classical trigonometry formulae then yield (1 - λ) ν_1 sin(2 ν_1 (t_0 - t_1)) + λν_2 sin(2 ν_2 (t_0 - t_2)) = F'(t_0) = 0 . Recalling that 2 ν_j (t_0 - t_j)≤π, we see that sin(2 ν_j (t_0 - t_j)) is of the same sign as t_0 - t_j, thus leading to the condition that (t_0 - t_1) (t_0 - t_2) ≤ 0 , or in other words, t_0 lies between t_1 and t_2. Let s_0 ∈ [0, 1] be such that t_0 = (1 - s_0) t_1 + s_0 t_2. We obtain F(t_0) = (1 - λ) sin^2(ν_1 (t_0 - t_1)) + λsin^2(ν_2 (t_0 - t_2)) = (1 - λ) sin^2(ν_1 s_0 (t_2 - t_1)) + λsin^2(ν_2 (1 - s_0) (t_1 - t_2)) . We finally use that t_1 - t_2≥T/4 p q d_0 and the monotonicity of the sine function on [0, π/2] to deduce one inequality in (<ref>), namely: min_t ∈ [0, T] F(t) ≥min_s ∈ [0, 1]((1 - λ) sin^2(π/2p s d_0) + λsin^2(π/2q (1 - s) d_0)) . To check the converse inequality, we proceed as follows: we pick t_1 and t_2, zeroes of sin(ν_j t + θ_j) respectively, that satisfy t_1 - t_2 = T/4 p q d_0. Denote by J the closed interval with endpoints t_1, t_2. 
Let t_0 ∈ J be a point where F restricted to J attains its minimum. Then introducing a parameter s ∈ [0, 1] such that t = (1 - s) t_1 + s t_2, we obtain F(t_0) ≤ F(t) = (1 - λ) sin^2(π/2p s d_0) + λsin^2(π/2q (1 - s) d_0) , for all s ∈ [0, 1]. This results in min_t ∈ [0, T] F(t) ≤min_s ∈ [0, 1]( (1 - λ) sin^2(π/2p s d_0) + λsin^2(π/2q (1 - s) d_0) ) , which shows together with (<ref>) that (<ref>) is true. We observe in the definition of Λ_θ_1, θ_2 (see (<ref>)) that a similar minimum is involved with cosine functions instead of sine functions. To reduce to the case of sine functions and use (<ref>), we simply recall that cos(x) = sin(x + π/2). We obtain min_t ∈ [0, T]((1 - λ) . . cos^2(ν_1 t + θ_1) + λcos^2(ν_2 t + θ_2)) = min_t ∈ [0, T]((1 - λ) sin^2(ν_1 t + θ_1 + π2) + λsin^2(ν_2 t + θ_2 + π2)) = min_s ∈ [0, 1]((1 - λ) sin^2(π/2p s d_π/2) + λsin^2(π/2q (1 - s) d_π/2)) , where we set (recall the definition of d_0 in (<ref>)) d_π/2 = d_π/2(θ_1, θ_2) = d_0(θ_1 + π2, θ_2 + π2) = (p θ_1π/2 - q θ_2π/2 + p - q, 2 ) . Depending on whether p and q have the same parity, we can state that d_π/2 = { d_0 if p - q ≡ 0 2 1 - d_0 if p - q ≡ 1 2. . With this at hand, we can rewrite Λ_θ_1, θ_2^2 defined in (<ref>) as Λ_θ_1, θ_2^2 = max_λ∈ [0, 1]min_s_1 ∈ [0, 1]((1 - λ) sin^2(π/2/p s_1 d_0) + λsin^2(π/2/q (1 - s_1) d_0))1 - min_s_2 ∈ [0, 1]((1 - λ) sin^2(π/2/p s_2 d_π/2) + λsin^2(π/2/q (1 - s_2) d_π/2)) . *Step 2 ­– Computation of Λ_θ_1, θ_2^2. We set for any λ∈ [0, 1] and s ∈ [0, 1]: g_λ(s) = g_λ, d_0(s) = (1 - λ) sin^2(π/2/p s d_0) + λsin^2(π/2/q (1 - s) d_0) . In the perspective of computing Λ_θ_1, θ_2^2, we first show the following result. It holds max_λ∈ [0, 1]min_s ∈ [0, 1] g_λ(s) = g_λ_0(s_0) = sin^2(π/2p + q d_0) , where s_0 = p/p + q and λ_0 = q/p + q. Firstly, we observe that g_λ(s_0) is independent of λ, since it solves sin^2(π/2/p s_0 d_0) = sin^2(π/2/q (1 - s_0) d_0) . This remarkable property implies that for any λ∈ [0, 1], it holds ∀λ' ∈ [0, 1] , min_s ∈ [0, 1] g_λ'(s) ≤ g_λ'(s_0) = g_λ(s_0) , which results in max_λ' ∈ [0, 1]min_s ∈ [0, 1] g_λ'(s) ≤ g_λ(s_0) , ∀λ∈ [0, 1] . Now we to show that the equality is reached when λ = λ_0 introduced in the statement. Noticing that (1 - λ_0) 1/p = λ_0 1/q = 1/p + q, we obtain using classical trigonometry formulae: g_λ_0'(s) = π2 d_0 (2 1p (1 - λ_0) cos(π/2p s d_0) sin(π/2p s d_0) - 2 1qλ_0 cos(π/2q (1 - s) d_0) sin(π/2q (1 - s) d_0)) = π/2p + q d_0 ( sin(π/p s d_0) - sin(π/q (1 - s) d_0) ) = πp + q d_0 cos(π2 d_0 ( sp + 1 - sq)) sin(π2 d_0 ( sp - 1 - sq)) = πp + q d_0 cos(π2 d_0 ( sp + 1 - sq)) sin(π2 d_0 ( 1p + 1q) (s - s_0)) . We observe that the cosine is always non-negative for any s ∈ [0, 1], because d_0 ≤ 1. As for the sine, it is non-positive for s ≤ s_0 and non-negative for s ≥ s_0. We deduce that g_λ_0'(s) ≤ 0 on [0, s_0] and g_λ_0'(s) ≥ 0 on [s_0, 1]. Therefore, the minimum of g_λ_0 is attained at s_0, which concludes the proof of the lemma. Regarding the denominator in the definition of Λ_θ_1, θ_2^2, observing that λ_0 and s_0 in the above lemma do not dependent on d_0 or d_π/2, we find min_λ∈ [0, 1](1 - min_s ∈ [0, 1]( (1 - λ) sin^2(π/2/q s d_π/2) + λsin^2(π/2/p (1 - s) d_π/2) ) ) = 1 - max_λ∈ [0, 1]min_s ∈ [0, 1] g_λ, d_π/2(s) = 1 - g_λ_0, d_π/2(s_0) = cos^2(π/2p + q d_π/2) . This implies that λ_0 maximizes the minimum of the numerator and minimizes the maximum of the denominator at once. Moreover, when λ = λ_0, the minimum of the denominator and the maximum of the numerator are reached at a common value s_0. 
Therefore Λ_θ_1, θ_2^2 = sin^2(π/2/p + q d_0)cos^2(π/2/p + q d_π/2) . When p and q have the same parity, it holds d_π/2 = d_0, so that Λ_θ_1, θ_2 = tan(π/2p + q d_0) . When they do not have the same parity, then d_π/2 = 1 - d_0 and we obtain Λ_θ_1, θ_2 = sin(π/2/p + q d_0 )cos(π/2/p + q (1 - d_0)) = sin(π/2/p + q) - cos(π/2/p + q) sin(π/2/p + q (1 - d_0) )cos(π/2/p + q (1 - d_0)) = sin(π/2/p + q) - cos(π/2/p + q) tan(π/2/p + q (1 - d_0)) . Recall that in the above formulae, the dependence on the phase shifts θ_1 and θ_2 is hidden in d_0. Thus it remains to optimize over these parameters θ_1, θ_2 to compute the quantity Λ_0 defined in (<ref>). In the first case (<ref>), we notice that d_0 ≤ 1, and that the equality is achieved for θ_1 = π/2p and θ_2 = 0 for instance, so that Λ_0 = tan(π/2p + q) . In the second case (<ref>), the maximum is reached for d_0 = 1 as well, so that Λ_0 = sin(π/2p + q) . The conclusion is that Λ_0 = Λ(p/q), where the function Λ is the one defined in (<ref>). *Step 3 ­– Construction of an equivalent shrunk observation set. Recall that the sufficient condition of Theorem <ref> implies observability from an “enlarged" observation set. This leads us to construct a shrunk set Ĩ⊂ I, such that Ĩ_R = Ĩ + (-R, R) is contained in I up to a bounded set, so that the same is true for the sets ω(Ĩ) and ω(I). In the lemma below, when I ⊂_+, we use the notation I_R := ⋃_s ∈ I (s - R, s + R). [Shrunk observation set] Let I = ⋃_n I_n where I_n ⊂_+ are open intervals, with I_n→ + ∞ if the union is infinite. Then there exists a family of disjoint open intervals (J̃_n)_n in _+ (with J̃_n→ + ∞ if there are infinitely many of them) such that the set Ĩ = ⋃_n J̃_n satisfies the following: * Ĩ⊂ I; * for any R > 0, the set Ĩ_R ∖ I is bounded; * for any R > 0, it holds κ_⋆(Ĩ) = κ_⋆(Ĩ_R) = κ_⋆(I) = κ_⋆(I_R). Recall the definition of κ_⋆ in (<ref>). We write the open set I as a union of disjoint open intervals I = ⋃_n J_n. Let us fix R > 0. We first deal with the case where there are only finitely many J_n's. If I is bounded, one has κ_⋆(I) = κ_⋆(I_R) = 0 and Ĩ = ∅ satisfies the conclusions of the lemma. If I is not bounded, then there is an index n_0 for which J_n_0 is of the form J_n_0 = (a, + ∞). Then one has for any R > 0 the equality κ_⋆(I) = κ_⋆(I_R) = 1 and Ĩ = J_n_0 satisfies the conclusions of the lemma as well. We now consider the case where there are infinitely many J_n's. By assumption, one has J_n→ + ∞ as n →∞. Writing J_n = (a_n, b_n), with a_n < b_n < ∞, we define for any index n the interval J̃_n = (a_n + √(4 + δ_n)2, b_n - √(4 + δ_n)2) , where δ_n = min(a_n, b_n - a_n) . Since the J_n's are disjoint and J_n = b_n - a_n → + ∞, we also have a_n → + ∞, so that δ_n → + ∞ too. Incidentally, one readily checks that J̃_n→ + ∞ as well. Thus, defining Ĩ = ⋃_n J̃_n , we have Ĩ⊂ I, namely the property <ref>, and given any R > 0, there are finitely many n's such that R ≥√(δ_n)/2. It implies that the thickened set Ĩ_R is contained in I modulo a bounded set, hence <ref>. The crucial point of this construction is the claim <ref>. As a consequence of the inclusions Ĩ⊂Ĩ_R and I ⊂ I_R, it holds κ_⋆(Ĩ) ≤κ_⋆(Ĩ_R) and κ_⋆(I) ≤κ_⋆(I_R). Moreover, in virtue of <ref>, we can write Ĩ_R = (Ĩ_R ∩ I) ∪ A, where A = Ĩ_R ∖ I is bounded. Since 1/rA ∩ [0, r]≤1/rA→ 0 as r → + ∞, one can check that κ_⋆(Ĩ_R) ≤κ_⋆(Ĩ_R ∩ I) ≤κ_⋆(I). To sum up, we have proved so far that κ_⋆(Ĩ) ≤κ_⋆(Ĩ_R) ≤κ_⋆(I) ≤κ_⋆(I_R). Thus, in order to prove <ref>, it remains to check that κ_⋆(I_R) ≤κ_⋆(Ĩ). 
Unless we are in the straightforward case κ_⋆(I_R) = 0, we pick κ∈ (0, κ_⋆(I_R)), so that by definition of κ_⋆, it holds ∃ c > 0, ∃ r_0 > 0 : ∀ r ≥ r_0 , 1r*I_R ∩ [κ r, r]≥ c . In the sequel, to simplify notation, we write J_n^R = (J_n)_R. Up to enlarging r_0, we can assume that for any index n such that J_n^R ∩ [κ r_0, + ∞) ≠∅, it holds δ_n ≥ 5 + 8 R (recall that δ_n → + ∞). Fix an r ≥ r_0. Then there is a finite (possibly empty) set of indices {n_k}_k such that J_n_k^R ⊂ [κ r, r]. Assume first that 1r*⋃_k J_n_k^R ∩ [κ r, r] = 1r∑_k J_n_k^R≥c2 . Then it holds 1r*Ĩ∩ [κ r, r] ≥1r∑_k J̃_n_k = 1r∑_k (J_n_k^R - ( √(4 + δ_n_k) + 2 R )) ≥1r∑_k (1 - √(4 + δ_n_k) + 2 Rδ_n_k + 2R) J_n_k^R≥1r∑_k (1 - √(4δ_n^2 + 1δ_n_k) - 2 Rδ_n + 2R) J_n_k^R . To obtain the second to last inequality, we used the fact that by definition of δ_n, it holds J_n≥δ_n, which implies in particular that J_n^R≥δ_n + 2 R. Using in the last line that δ_n_k≥ 5 + 8 R, together with (<ref>), we obtain 1r*Ĩ∩ [κ r, r]≥(1 - √(925) - 15) 1r∑_k *J_n_k^R≥15×c2 . Otherwise, if now (<ref>) is not satisfied, then recalling (<ref>), it holds 1r*(I_R ∖⋃_k J_n_k^R) ∩ [κ r, r]≥c2 . Any interval J_n^R ⊂ I_R ∖⋃_k J_n_k^R intersecting [κ r, r] must contain κ r or r, otherwise it would satisfy J_n^R ∩ [κ r, r] = ∅, or J_n^R ⊂ (κ r, r) (the latter would imply that n ∈{n_k}_k). Therefore, there are at most two such intervals. We deduce that there is an index n_⋆ such that J_n_⋆^R ⊄[κ r, r] but J_n_⋆^R ∩ [κ r, r] ≠∅, with 1r*J_n_⋆^R ∩ [κ r, r]≥c4 . Writing J_n_⋆^R = (a_n_⋆ - R, b_n_⋆ + R), the fact that J_n_⋆^R ∩ [κ r, r] ≠∅ imposes that a_n_⋆ - R ≤ r, hence a_n_⋆≤ r + R. Thus we obtain 1r*Ĩ∩ [κ r, r] ≥1r*J̃_n_⋆∩ [κ r, r]≥1r( J_n_⋆^R ∩ [κ r, r] - √(4 + δ_n_⋆) - 2 R ) ≥c4 - √(4 + r + R) + 2 Rr . We used the fact that δ_n_⋆≤ a_n_⋆≤ r + R to obtain the last inequality. In view of the estimates (<ref>) and (<ref>), in any case we have 1r*Ĩ∩ [κ r, r]≥min( c10, c4 - √(4 + r + R) + 2 Rr) . We conclude that lim inf_r → + ∞1r*Ĩ∩ [κ r, r] > 0 . Recalling that κ is any arbitrary number < κ_⋆(I_R), we finally get the desired converse inequality κ_⋆(Ĩ) ≥κ_⋆(I_R). Thus <ref> is proved, which concludes the proof of the lemma. In the sequel, we will proceed as follows: to prove that κ_⋆(I) > Λ(ν_2/ν_1) is a sufficient condition to have observability from ω(I), we will check that the dynamical condition (<ref>) of Theorem <ref> is true in the smaller set ω(Ĩ), where Ĩ is given by Lemma <ref>. To show that it is also necessary, we will check that the condition (<ref>) is violated in the larger set ω(I)_R = ω(I_R) for any R > 0. *Step 4 ­– Geometric condition of observability for rationally dependent characteristic frequencies. We investigate the validity of the dynamical condition (<ref>) of Theorem <ref>. In the case where ν_2/ν_1∈, writing ν_2/ν_1 = p/q as an irreducible fraction, the period of the Hamiltonian flow is given by T_0 = 2 π/ν_2 p = 2 π/ν_1 q. We write for short Λ = Λ(ν_2/ν_1) and κ_⋆ = κ_⋆(I). Our goal now is to reformulate the dynamical condition (<ref>) using the Area formula. Let J ⊂ be a bounded interval and let γ : J →^n be a Lipschitz curve. Then γ is differentiable at Lebesgue-almost every point in J and for any Borel set E ⊂^n, it holds ∫_J _E(γ(t)) *γ'(t) t = ∫_γ∩ E#γ^-1({x}) ^1(x) . Here, γ = γ(t)t ∈ J⊂^n, #γ^-1({x}) stands for the cardinality of the set t ∈ Jγ(t) = x, and ^1 is the one-dimensional Hausdorff measure. 
We will apply this formula to a curve of the form γ : t ↦x^t∈ defined on J = (0, T), where t ↦ (x^t, ξ^t) is a trajectory of the Hamiltonian flow. Calculations will involve the inverse Jacobian γ'(t)^-1. Using anisotropy[In the excluded isotropic case (p = q = 1), one can choose (x^0, ξ^0) so that x^t is constant, as we did in the proof of Proposition <ref> (see Subsection <ref>). In such a situation, the set γ⊂_+ is reduced to a point. This is a very singular situation, since the Jacobian γ'(t) is identically zero.] of the harmonic oscillator (p ≠ q), we can check that the Jacobian vanishes only at a finite number of points. Let t ↦ (x^t, ξ^t) be a trajectory of the Hamiltonian flow of an anisotropic harmonic oscillator, with initial datum ρ_0 = (x_0, ξ_0). Then the curve γ : ∋ t ↦x^t∈_+ is Lipschitz with constant √(2 p(ρ_0)). If ρ_0 ≠ 0, then γ is of class ^∞ in ∖{γ = 0}. Moreover, the set S_γ := t ∈γ(t) = 0 or γ'(t) = 0 is locally finite, namely for any bounded interval I ⊂, the set S_γ∩ I is finite. In addition, for any bounded interval I ⊂, one has ∃ k = k(I) ∈ : ∀ s ∈_+ , #γ^-1({s}) ∩ I ≤ k . That γ is Lipschitz follows from the inverse triangle inequality, the Hamilton equations (<ref>) and the fact that p(x, ξ) = V(x) + 1/2ξ^2 is preserved by the flow: *γ(t_2) - γ(t_1)≤x^t_2 - x^t_1≤t_2 - t_1sup_t ∈ξ^t≤t_2 - t_1√(2 p(ρ_0)) . From now on, we assume that ρ_0 ≠ 0. First notice that the set {γ = 0} is closed since γ is continuous. Given that t ↦ x^t is smooth, the curve γ is smooth in a neighborhood of any point t ∈∖{γ = 0}, so that γ∈^∞(∖{γ = 0}). To show that S_γ is locally finite, it is sufficient to prove that it is closed and also discrete, namely that it is made of isolated points.[If S ⊂ is closed and discrete, the for any compact interval I ⊂, the set S ∩ I is compact. Since S is discrete, the set S ∩ I can be covered by open sets containing at most one element of S. Then, extracting a finite subcovering shows that S ∩ I is finite.] We first check that it is closed by observing that the map f : t ↦γ^2(t) = x^t^2 belongs to ^∞() and that S_γ = t ∈f'(t) = 0 . To check this equality, we use the fact that f'(t) = 2 γ(t) γ'(t) for all t ∈∖ S_γ. If t ∉S_γ, then it follows that γ(t) γ'(t) ≠ 0. Conversely, if t ∈ S_γ, either γ(t) ≠ 0, so that γ'(t) = 0, in which case it holds f'(t) = 2 γ(t) γ'(t) = 0; or γ(t) = 0, which implies that x^t = 0, hence f'(t) = 2 x^t ·ξ^t = 0. This justifies (<ref>). Thus it remains to show that S_γ is discrete. Let us compute the derivatives of f up to order 4: f'(t) = 2 x^t ·ξ^t , f^(2)(t) = 2 *ξ^t^2 - 2 x^t · A x^t , f^(3)(t) = - 4 ξ^t · A x^t - 2 ξ^t · A x^t - 2 x^t · A ξ^t = - 8 ξ^t · A x^t , f^(4)(t) = 8 (*A x^t^2 - ξ^t · A ξ^t) . Let us write the Taylor expansion of f' at order 3 near t_0 ∈: f'(t) = f'(t_0) + (t - t_0) f^(2)(t_0) + (t - t_0)^22 f^(3)(t_0) + (t - t_0)^36 f^(4)(t_0) + o((t - t_0)^3) . Suppose that t_0 ∈ S_γ. Then f'(t_0) = 0 in virtue of (<ref>). If f^(2)(t_0) ≠ 0, then (<ref>) yields *f'(t)≥f^(2)(t_0)2*t - t_0 for all t in a neighborhood U of t_0. In particular, S_γ∩ U = {t_0}, meaning that t_0 is isolated. Likewise, if f^(2)(t_0) = 0 but f^(3)(t_0) ≠ 0, then (<ref>) leads to *f'(t)≥f^(3)(t_0)4*t - t_0^2 in a neighborhood of t_0, so that t_0 is isolated again. Now, if f^(2)(t_0) = f^(3)(t_0) = 0, we show that necessarily f^(4)(t_0) ≠ 0. In view of (<ref>), (<ref>), and (<ref>), it holds x^t_0·ξ^t_0 = 0 ξ^t_0^2 = x^t_0· A x^t_0 , and ξ^t_0· A x^t_0 = 0 . The first and third equalities mean that ξ^t_0⊥ x^t_0 and ξ^t_0⊥ A x^t_0. 
Moreover, the second equality ensures that x^t_0≠ 0 and ξ^t_0≠ 0, otherwise (x^t_0, ξ^t_0) = (0, 0), hence ρ_0 = 0. Since we are in two dimensions, we deduce that A x^t_0 and x^t_0 are parallel, and therefore x^t_0 is an eigenvector of A. Since ξ^t_0⊥ x^t_0 and ξ^t_0≠ 0, we deduce that ξ^t_0 is also an eigenvector, associated with a different eigenvalue since A has two distinct eigenvalues by assumption. We relabel ν_1 and ν_2 so that A x^t_0 = ν_x^2 x^t_0 and A ξ^t_0 = ν_ξ^2 ξ^t_0. Plugging this into the second equality in (<ref>) yields ξ^t_0^2 = ν_x^2 x^t_0^2, from which we deduce that the fourth derivative (<ref>) cannot vanish at t_0, given that the oscillator is anisotropic (ν_x ≠ν_ξ): *A x^t_0^2 - ξ^t_0· A ξ^t_0 = ν_x^4 *x^t_0^2 - ν_ξ^2 *ξ^t_0^2 = ν_x^2 (ν_x^2 - ν_ξ^2) x^t_0^2 ≠ 0 . Therefore (<ref>) implies that *f'(t)≥f^(4)(t_0)12*t - t_0^3 in a neighborhood of t_0, that is to say the critical point t_0 is again isolated. To sum up, the above argument shows that there exists a neighborhood U of t_0 such that U ∩ S_γ = {t_0}, so S_γ is indeed a discrete set. Now fix I ⊂ a bounded interval. We have just shown that n = # (S_γ∩ I) is finite. To prove (<ref>), we observe that the complement of S_γ in I is a union of at most n + 1 open intervals in I, on which γ' does not vanish and has constant sign (use the intermediate value theorem). Therefore γ is one-to-one in each of these intervals. We infer that ∀ s ∈ , #t ∈ Iγ(t) = s≤ n + 1 + # (S_γ∩ I) = 2 n + 1 . This completes the proof of the lemma. Let us assume that κ_⋆≤Λ and fix R > 0. Recalling that κ_⋆ = κ_⋆(I ) = κ_⋆(I_R) from <ref> in Lemma <ref>, we know that there exists a sequence (r_n)_n ∈ tending to + ∞ along which 1r_n*I_R ∩ [κ_⋆ r_n, r_n]n →∞ 0 . According to Step 2, considering actions (A_1, A_2) = (√(1 - λ_0), √(λ_0)) = (√(p/p + q), √(q/p + q)) and initial angles (θ_1, θ_2) = (π/2p, 0), one obtains a trajectory of the Hamiltonian flow t ↦ (x^t, ξ^t) such that min_t ∈ [0, T]x^t = Λmax_t ∈ [0, T]x^t , that is to say a trajectory that attains the supremum (<ref>). Here T is any real number larger than the period of the flow T_0. In view of the homogeneity of degree 1 of the Hamiltonian flow, we know that t ↦ (c x^t, c ξ^t) is still a trajectory of the Hamiltonian flow, for any scalar c ∈. Note that (<ref>) above ensures that x^t is bounded from below by a positive constant for all times. Therefore, Lemma <ref> implies that the curve γ : (0, T) ∋ t ↦x^t is smooth. The corresponding set S_γ of Lemma <ref> is nothing but S_γ = {γ' ≠ 0}. A consequence of this lemma is that S_γ has vanishing measure. Thus we write (0, T) ∖ S_γ = ⋃_N ∈ B_N where B_N = t ∈ (0, T)*γ'(t)≥ 2^-N . Fix an arbitrary N ∈ and a scalar c > 0. Then we obtain ∫_B_N_I_R(c *x^t) t ≤2^Nc∫_B_N_I_R(c γ(t)) *c γ'(t) t = 2^Nc∫_c γ(B_N)_I_R(s) #t ∈ (0, T)γ(t) = sc s ≤2^N kc∫_c γ(B_N)_I_R(s) s ≤2^N kc∫_c Λmaxγ^c maxγ_I_R(s) s , where the equality results from the Area formula (Proposition <ref>) applied to E = I_R. The integer k is the one from Lemma <ref> (<ref>). The last inequality follows from the fact that x^t spans the interval [Λmaxγ, maxγ] by construction (recall (<ref>)). Thus taking c = c_n = r_n/ maxγ, with (r_n)_n ∈ the sequence from (<ref>), we obtain ∫_B_N_I_R(c_n γ(t)) t ≤ 2^N k (maxγ) ×1r_n*I_R ∩ [Λ r_n, r_n]n →∞ 0 , by (<ref>), since Λ≥κ_⋆. Now going back to (<ref>), since the set S_γ is negligible, monotone convergence ensures that B_N→ T as N →∞. 
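As an illustration of the lemma (not needed in the sequel), consider the diagonal case: in coordinates where A = diag(ν_1^2, ν_2^2), a trajectory with initial datum ρ_0 ≠ 0 can be written as

\begin{equation*}
x^t = \big(A_1 \sin(\nu_1 t + \theta_1),\ A_2 \sin(\nu_2 t + \theta_2)\big),
\qquad
\xi^t = \big(A_1 \nu_1 \cos(\nu_1 t + \theta_1),\ A_2 \nu_2 \cos(\nu_2 t + \theta_2)\big),
\end{equation*}

so that

\begin{equation*}
f(t) = \lvert x^t \rvert^2
= \frac{A_1^2}{2}\big(1 - \cos(2\nu_1 t + 2\theta_1)\big)
+ \frac{A_2^2}{2}\big(1 - \cos(2\nu_2 t + 2\theta_2)\big).
\end{equation*}

Since ν_1 ≠ ν_2 and (A_1, A_2) ≠ (0, 0), the function f is real-analytic and non-constant, so its critical points are isolated, in agreement with the conclusion of the lemma.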
We finally obtain that ∫_0^T _ω(I_R)(c_n x^t) t ≤*(0, T) ∖ B_N + ∫_B_N_I_R(c_n *x^t) t = T - *B_N + o(1) as n →∞. We let N →∞ to conclude that the dynamical condition (<ref>) is not fulfilled, namely lim inf_ρ→∞∫_0^T _ω(I)_R ×^d(ϕ^t(ρ)) t = 0 . The parameter R > 0 is arbitrary. Therefore the necessary condition of Theorem <ref> tells us that observability from ω(I) in time T does not hold, and T ≥ T_0 itself is arbitrary. We turn to the case where κ_⋆ > Λ. This time, we take T = T_0 to be the period of the Hamiltonian flow and check that the observability condition (<ref>) holds in ω(Ĩ). We pick κ∈ (Λ, κ_⋆). In virtue of Lemma <ref> <ref>, we have κ_⋆ = κ_⋆(I) = κ_⋆(Ĩ) so that ∃ c > 0, ∃ r_0 > 0 : ∀ r ≥ r_0 , 1r*Ĩ∩ [κ r, r]≥ c . Let (x^t, ξ^t) be a trajectory of the Hamiltonian flow with initial datum ρ_0. One can estimate r̃ = maxx^t from below as follows: since the time t_0 at which the maximum is reached is also a (local) maximum of x^t^2, the second derivative satisfies: ^2 t^2x^t^2_| t = t_0 = 2 ξ^t_0^2 - 2 x^t_0· A x^t_0≤ 0 . Thus it holds r̃^2 := *x^t_0^2 ≥ x^t_0·AA x^t_0≥1A( 12 x^t_0· A x^t_0 + 12ξ^t_0^2 ) = 1A p(ρ_0) . Provided ρ_0 is large enough so that p(ρ_0) ≥A r_0^2, we see in particular that r̃≥ r_0. Introduce γ : (0, T) ∋ t ↦x^t. We know from Lemma <ref> that γ is Lipschitz with constant √(2 p(ρ_0))≤√(2 A)r̃ (this inequality is a consequence of (<ref>) above). In particular, we have γ'(t)≤√(2 A)r̃ outside the set S_γ from Lemma <ref>. Thus we can apply again the Area formula (Proposition <ref>): ∫_0^T _ω(Ĩ)(x^t) t = ∫_0^T _Ĩ(x^t) t ≥ (2 A)^-1/21r̃∫_0^T _Ĩ(γ(t)) *γ'(t) t = (2 A)^-1/21r̃∫_γ((0, T))_Ĩ(s) #t ∈ (0, T)γ(t) = s s ≥ (2 A)^-1/21r̃∫_γ((0, T))_Ĩ(s) s . This time, one has γ((0, T)) ⊃ [Λr̃, r̃] ⊃ [κr̃, r̃] (by definition of Λ, see (<ref>)). This means that ∫_0^T _ω(Ĩ)(x^t) t ≥ (2 A)^-1/21r̃∫_κr̃^r̃_Ĩ(s) s ≥ (2 A)^-1/2 c , where the last inequality is due to (<ref>) (recall that r̃≥ r_0). Therefore the dynamical condition (<ref>) of Theorem <ref> is satisfied. In fact, the explicit expression of the Hamiltonian flow in action-angle coordinates (<ref>) shows that x^t^2 is T_0/2-periodic.[One can check that the projected trajectories of rational harmonic oscillators are invariant by point reflection with respect to the origin or axial symmetry with respect to some coordinate axis, depending on whether p and q have the same parity or not.] Therefore, setting c̃ := (2 A)^-1/2 c, the dynamical condition (<ref>) is equivalently satisfied in time T_0/2 - c̃/4: ∫_0^T_0/2 - c̃/4_ω(Ĩ)(x^t) t ≥12∫_0^T_0_ω(Ĩ)(x^t) t - c̃4≥ (2 A)^-1/2c4 . By Theorem <ref>, this implies that observability holds from ω(Ĩ)_R ∖ K for some R > 0 and any compact set K ⊂^d in any time > T_0/2 - c̃/4, which in turn implies observability from ω(I) in virtue of Lemma <ref> <ref>. Incidentally, the optimal observation time is strictly smaller than T_0/2. *Step 5 ­– Diophantine approximation in the irrational case. We assume that ν_2/ν_1∈∖ and denote by p_j/q_j the reduced fraction expression of its convergents (see Remark <ref>). We investigate the validity of the dynamical condition (<ref>) by approximating the trajectories of the “irrational" Hamiltonian flow by the trajectories of the “rational" Hamiltonian flow obtained by replacing ν_2/ν_1 with its convergent p_j/q_j. 
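For convenience, we record the classical property of continued-fraction convergents that is used repeatedly below: for every j,

\begin{equation*}
\Big\lvert \frac{\nu_2}{\nu_1} - \frac{p_j}{q_j} \Big\rvert
\;\le\; \frac{1}{q_j q_{j+1}} \;\le\; \frac{1}{q_j^2},
\qquad \text{equivalently} \qquad
\Big\lvert \nu_2 - \frac{p_j}{q_j}\, \nu_1 \Big\rvert \;\le\; \frac{\nu_1}{q_j^2}.
\end{equation*}

This is the Diophantine approximation estimate invoked in the comparison of trajectories below.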
For instance, a projected trajectory of the irrational harmonic oscillator of the form x_1^t = A_1 sin(ν_1 t + θ_1) , x_2^t = A_2 sin(ν_2 t + θ_2) , should be compared to x_j, 1^t = A_1 sin(ν_1 t + θ_1) , x_j, 2^t = A_2 sin(p_jq_jν_1 t + θ_2) , which is a trajectory of the Hamiltonian flow of the (rational) harmonic oscillator with characteristic frequencies ν_1 and p_j/q_jν_1, whose classical Hamiltonian is: p_j(x, ξ) = 12( ν_1^2 x_1^2 + p_j^2q_j^2ν_1^2 x_2^2) + 12( ξ_1^2 + ξ_2^2 ) . The distance between these two trajectories is *x^t - x_j^t = *x_2^t - x_j, 2^t≤ A_2 *ν_2 - p_jq_jν_1*t≤ A_2 ν_1 t/q_j^2 , owing to the fact that the sine function is 1-Lipschitz and to the Diophantine approximation result (<ref>). We already know from Step 2 that min_t ∈ [0, T_j]x_j^t≤Λ_j max_t ∈ [0, T_j]x_j^t , where T_j = 2 π/ν_1 q_j , Λ_j = Λ(p_jq_j) . The time T_j is the period of the flow of the rational harmonic oscillator with characteristic frequencies ν_1 and p_j/q_jν_1. Let us set m_j = min_t ∈x_j^t and M_j = max_t ∈x_j^t . Although the trajectory t ↦ x_j^t is T_j-periodic, it will be convenient to compare x_j^t and x^t on smaller times. Then in view of (<ref>), on the time interval [0, η T_j], where η∈ (0, 1], the norm x^t spans an interval J_j^η such that J_j^η⊂[m_j - A_2 η2 πq_j, M_j + A_2 η2 πq_j] , and if η = 1, since x_j^t attains m_j and M_j on the time interval [0, T_j], it holds [m_j + A_2 2 πq_j, M_j - A_2 2 πq_j] ⊂ J_j^1 . So now, according to the value of κ_⋆, we check whether the dynamical condition (<ref>) of Theorem <ref> is satisfied, using the Area formula. *Step 6 ­– Geometric condition of observability for rationally independent characteristic frequencies. Take η = 1, that is we consider a whole period of the rational Hamiltonian flow. We first establish a lower bound on the time spent by t ↦ x^t in ω(Ĩ). We consider κ_⋆ > 0 here. From Lemma <ref>, we know that γ : (0, T_j) ∋ t ↦x^t is Lipschitz with constant √(2 p(ρ_0)). Yet, similarly to (<ref>), it holds p(ρ_0) ≤AM̃_j^2 , where M̃_j = max_t ∈ [0, T_j]x^t , so that γ is Lipschitz with constant √(2 A)M̃_j. Applying the Area formula (Proposition <ref>), we obtain as in (<ref>) the lower bound: ∫_0^T_j_ω(Ĩ)(x^t) t ≥ (2 A)^-1/21M̃_j∫_J_j^1_Ĩ(s) s , and in view of (<ref>) and (<ref>) with η = 1, we deduce that ∫_0^T_j_ω(Ĩ)(x^t) t ≥(2 A)^-1/2M_j + A_2 2 π/q_j∫_m_j + A_2 2 π/q_j^M_j - A_2 2 π/q_j_Ĩ(s) s . Observing that A_2 ≤ M_j and that m_j ≤Λ_j M_j, we obtain ∫_0^T_j_ω(Ĩ)(x^t) t ≥(2 A)^-1/2M_j (1 + 2 π/q_j)∫_M_j (Λ_j + 2 π/q_j)^M_j(1 - 2 π/q_j)_Ĩ(s) s . Setting r = M_j (1 - 2 πq_j) and Λ̃_j = Λ_j + 2 π/q_j1 - 2 π/q_j , we can write the lower bound in (<ref>) under the form ∫_0^T_j_ω(Ĩ)(x^t) t ≥ (2 A)^-1/21 - 2 π/q_j1 + 2 π/q_j×1r∫_Λ̃_j r^r_Ĩ(s) s . We assume that q_j > 2 π so that r > 0, which is the case for j large enough since q_j →∞. The above estimate (<ref>) is valid for any trajectory of the (irrational) Hamiltonian flow with initial datum ρ_0 ≠ 0. In addition, we remark that M_j defined in (<ref>) tends to infinity as ρ_0 →∞, so that r defined in (<ref>) tends to +∞ as ρ_0 →∞ too. Thus (<ref>) leads to lim inf_ρ→∞∫_0^T_j_ω(Ĩ) ×^d(ϕ^t(ρ)) t ≥ (2 A)^-1/21 - 2 π/q_j1 + 2 π/q_j×lim inf_r → + ∞1r*Ĩ∩ [Λ̃_j r, r] . In order to deduce a positive lower bound, it suffices that Λ̃_j < κ_⋆ = κ_⋆(Ĩ) = κ_⋆(I). This is achieved provided q_j ≥ 6 π / κ_⋆≥ 6 π. 
Indeed, under this condition, we have on the one hand 1 - 2 π/q_j1 + 2 π/q_j≥1 - 2 π/6 π1 + 2 π/6 π = 12 , and on the other hand, recalling the definition of Λ̃_j in (<ref>), the formula for Λ_j (<ref>) shown in Step 2, and using that sin x ≤ x and tan x ≤4/π x for x ∈ [0, π/4], we obtain Λ̃_j ≤4/π×π/2/p_j + q_j + 2 π/q_j1 - 2 π/6 π≤ 3 1 + πq_j≤12(1π + 1) κ_⋆ < κ_⋆ . Now we turn to the upper bound on the time spent by projected trajectories of the (irrational) Hamiltonian flow in ω(I)_R, for a fixed R > 0. We consider κ_⋆∈ [0, 1] arbitrary now, with the convention 1/κ_⋆ = + ∞ if κ_⋆ = 0. We go back to η∈ (0, 1]. We select a curve t ↦ (x_j^t, ξ_j^t) of the rational flow that maximizes the ratio min_t x_j^t / max_t x_j^t, namely that satisfies m_j = min_t ∈ [0, T_j]x_j^t = Λ_j max_t ∈ [ 0, T_j]x_j^t = Λ_j M_j . This curve is of the form (<ref>) for well-chosen action and angle variables. We consider t ↦ (x^t, ξ^t) the corresponding trajectory of the irrational flow given by (<ref>), that is the integral curve obtained by substituting ν_2 for p_j/q_jν_1 in (<ref>). Notice that this trajectory depends on j. We still write γ(t) = x^t. By Lemma <ref>, we know that it is a Lipschitz map and that there exists an integer k_0 such that #γ^-1(s) ∩ [0, T_j] ≤ k_0 for all s ∈_+. Reproducing the computation (<ref>), we find: ∫_B_N_I_R(c *x^t) t ≤2^N k_0c∫_c Λ_j maxγ^c maxγ_I_R(s) s ≤2^N k_0c∫_c J_j^η_I_R(s) s , where we recall that the parameter c > 0 is an arbitrary scaling factor, and B_N is defined similarly to (<ref>) by B_N = t ∈ [0, η T_j]*γ'(t)≥ 2^-N . In view of (<ref>), this leads to ∫_B_N_I_R(c *x^t) t ≤2^N k_0c∫_c (m_j - A_2 η2 π/q_j)^c (M_j + A_2 η2 π/q_j)_I_R(s) s . As we did before in (<ref>), we use the fact that A_2 ≤ M_j, together with m_j = Λ_j M_j (the equality is important here) to obtain ∫_B_N_I_R(c *x^t) t ≤2^N k_0c∫_c M_j (Λ_j - η2 π/q_j)^c M_j (1 + η2 π/q_j)_I_R(s) s . Defining now r = M_j (1 + η2 πq_j) and Λ̃_j = Λ_j - η2 π/q_j1 + η2 π/q_j , we end up with ∫_B_N_I_R(c *x^t) t ≤ 2^N k_0 M_j (1 + η2 πq_j) 1c r∫_Λ̃_j c r^c r_I_R(s) s . We finally prove that this upper bound tends to zero along a well-chosen sequence of parameters c provided Λ̃_j ≥κ_⋆. This is fulfilled whenever q_j ≤δ / κ_⋆, for a small enough constant δ. To see this, we can use that tan x ≥ x and sin x ≥2/π x on [0, π/2] to control Λ_j from below by 1/p_j + q_j. Then (<ref>) leads to p_j/q_j≤ν_2/ν_1 + 1, which yields Λ_j ≥1p_j + q_j≥1q_j (2 + ν_2/ν_1) =: Cq_j . Assuming that η < C/2 π≤ 1, we obtain Λ̃_j ≥C - 2 πη/q_j1 + η2 π/q_j≥1q_j×C - 2 πη1 + 2 πη≥C - 2 πηδ (1 + 2 πη)κ_⋆ . This yields Λ̃_j ≥κ_⋆ if δ is small enough, so that by definition of κ_⋆, letting c → + ∞, we obtain lim inf_c → + ∞∫_0^η T_j_ω(I_R)(c x^t) t ≤*[0, η T_j] ∖ B_N + 2^N k_0 M_j (1 + η2 πq_j) lim inf_c → + ∞1c r∫_Λ̃_j c r^c r_I_R(s) s = η T_j - *B_N , which tends to zero as N →∞. The general conclusion is the following: if κ_⋆ > 0 and j ∈ is such that q_j ≥6 π/κ_⋆, we know by (<ref>) that Λ̃_j < κ_⋆, so that by definition of κ_⋆, the estimate (<ref>), together with (<ref>), proves that the dynamical condition (<ref>) of Theorem <ref> holds for ω(Ĩ) in time T_j = 2 π/ν_1 q_j. If on the contrary κ_⋆∈ [0, 1] and it holds q_j ≤δ/κ_⋆ for some δ > 0 depending only on ν_2/ν_1, then from (<ref>), the dynamical condition (<ref>) is violated in ω(I)_R for any R > 0 on the time interval [0, η T_j], where η > 0 depends only on ν_2/ν_1 again. Theorem <ref> then implies that the Schrödinger equation is observable from ω(I) if and only if κ_⋆ > 0. 
If indeed κ_⋆ > 0, then the optimal observation time T_⋆ = T_⋆(ω(I)) is controlled as follows: there exist constants C, c > 0 such that c q_j_1≤ T_⋆≤ C q_j_2 , where j_1 is the largest index such that q_j ≤δ/κ_⋆ and j_2 is the smallest index such that q_j ≥6 π/κ_⋆. To go from (<ref>) to the desired estimate (<ref>) in the case where ν_2/ν_1 is Diophantine, we use the fact that the irrationality exponent τ, defined in (<ref>), is related to the growth of the q_j's. This comes from the formula τ(μ) = 1 + lim sup_j →∞log q_j+1log q_j (see <cit.> or <cit.>). When τ is finite, we deduce in particular that for any > 0, we have for any j large enough log q_j + 1log q_j≤τ - 1 + , which leads to the existence of a constant C_ > 0 such that q_j + 1≤ C_q_j^τ - 1 + , ∀ j ∈ . We obtain by definition of the indices j_1 and j_2: δκ_⋆≤ q_j_1 + 1≤ C_q_j_1^τ - 1 + and q_j_2≤ C_q_j_2 - 1^τ - 1 + ≤ C_(6 πκ_⋆)^τ - 1 + . Plugging this into (<ref>), we finally deduce (<ref>). This concludes the proof of Proposition <ref>. § REDUCTION TO A WEAKER OBSERVABILITY INEQUALITY The following proposition shows that (ω, T) is equivalent to a similar inequality with a remainder involving a compact operator. The argument goes back to Bardos, Lebeau and Rauch <cit.>. This reformulation of the problem paves the way for the use of microlocal analysis: we are interested in the propagation of high-energy modes through the Schrödinger evolution, discarding anything that is microlocalized near a fixed energy sub-level {p ≤cst}. An alternative route could be to slice the phase space according to energy layers of the Hamiltonian p(x, ξ) = V(x) + 1/2ξ^2; see <cit.>. Suppose P is a self-adjoint operator with compact resolvent, and let B be a bounded operator on L^2(^d) satisfying the unique continuation property: for any eigenfunction u of P, B u = 0 ⟹ u = 0 . Let T_0 > 0 and assume there exists a compact self-adjoint operator K such that ∃ C_0 > 0 : ∀ u ∈ L^2(^d) , *u_L^2^2 ≤ C_0 ∫_0^T_0*B ^- t P u_L^2^2 t + *uK u_L^2 . Then for every T > T_0, there exists C > 0 such that ∀ u ∈ L^2(^d) , *u_L^2^2 ≤ C ∫_0^T *B ^- t P u_L^2^2 t . The operators of the form P = V(x) - 12 that we consider, with V subject to Assumption <ref>, satisfy the unique continuation property of the statement when B is the multiplication by the indicator function of a non-empty open set. See <cit.>. Let us introduce for any S ∈: A_S = ∫_0^S ^ t P B^∗ B ^- t P t , and denote by I_S its kernel (the space of so-called invisible solutions). One can check that I_S = ⋂_t ∈ [0, S] B ^- t P = u ∈ L^2(^d)∀ t ∈ [0, S] , B ^- t P u = 0 , using the fact that ^ t P B^∗ B ^- t P≥ 0 for all t ∈ as operators, and that the map t ↦ B ^- t P is strongly continuous. The space I_S is a closed linear subspace of L^2(^d), both for the strong and the weak topology (use for instance that A_S is a bounded operator). Moreover, one has the property that S_1 ≤ S_2 yields I_S_1⊃I_S_2. It implies that for any S, the set I_S^- = ⋃_S' > SI_S' is also a linear subspace, contained in I_S. Step 1 ­– I_T_0 is finite-dimensional. This assertion is a consequence of the fact that K is coercive on I_T_0, namely ∀ u ∈I_T_0 , *u_L^2≤*K u_L^2 , It follows directly from the assumption (<ref>) and the Cauchy-Schwarz inequality. Setting W = K_|I_T_0, we deduce that K : I_T_0→ W is one-to-one and its inverse K^-1 is bounded as an operator in L(W, I_T_0). Now denote by B̅_I_T_0 the closed unit ball of I_T_0. Since I_T_0 is strongly and weakly closed, the same holds for its closed unit ball as a subset of L^2(^d). 
We deduce that B̅_I_T_0 is weakly compact. The compactness of K implies that K(B̅_I_T_0) is (strongly) compact in L^2(^d). Since it is contained in W, it is compact in W. Therefore the fact that K^-1 : W →I_T_0 is bounded implies that B̅_I_T_0 = K^-1(K(B̅_I_T_0)) is compact. We deduce by the Riesz's Theorem that I_T_0 is finite-dimensional. Step 2 ­– I_T_0^- is stable by P. Let us check that I_T_0^- ⊂ P. Let u ∈I_T_0^- and set u_ϵ = ^- ϵ P u - uϵ , ∀ϵ≠ 0 . By definition of I_T_0^-, the function u belongs to I_T_0 + ϵ_0 for some ϵ_0 > 0, so that u_ϵ∈I_T_0^- for any ϵ∈ (0, ϵ_0). Recall from the previous step that I_T_0⊃I_T_0^- is finite dimensional. We observe that v ↦*(P - )^-1 v_L^2 is a norm on I_T_0, so it is equivalent to the L^2 norm. Yet we see that (P - )^-1 u_ϵ = ^- ϵ P (P - )^-1 u - (P - )^-1 uϵ , with (P - )^-1 u ∈ P, so that (P - )^-1 u_ϵ converges as ϵ→ 0 to some v ∈ P. Therefore we conclude that *u_ϵ - (P - ) v_L^2≤ C *(P - )^-1 u_ϵ - v_L^2ϵ→ 0 0 . The fact that u_ϵ converges shows that u ∈ P, hence I_T_0^- ⊂ P. It remains to see that lim_ϵ→ 0 u_ϵ = - P u belongs to I_T_0^-, which is a consequence of the fact that I_T_0^- is finite-dimensional, hence closed. Step 3 ­– I_T_0^- = {0}. This results from the unique continuation property (<ref>). Indeed, we can argue as follows: from the previous steps, I_T_0^- is a finite-dimensional linear subspace of L^2(^d) which is stable by the self-adjoint operator P. Therefore there exists a basis (u_1, u_2, …, u_n) of I_T_0^- made of eigenvectors of P. By definition of I_S, these eigenvectors satisfy in particular B u_j = 0. So by the unique continuation result (<ref>), we find that I_T_0^- must be trivial. Step 4 ­– Conclusion. Let T > T_0. We want to show that A_T ≥ c for some c > 0. To do this, it suffices to prove that A_T is invertible, because A_T is self-adjoint and A_T ≥ 0. The assumption (<ref>) implies that the self-adjoint operator A_T + K is invertible, meaning that zero does not belong to its spectrum. Since K is compact and self-adjoint, we classically know that A_T has the same essential spectrum as A_T + K, so in particular zero is not in the essential spectrum of A_T. It is not an eigenvalue neither since A_T ⊂I_T_0^- = {0}. Therefore A_T is invertible, and the conclusion follows. The following lemma is not related to the previous proposition. Still, it is worth stating it properly since we use it on several occasions throughout the article. Let ω⊂^d be measurable. Assume (ω, T) holds in some time T > 0 with a cost C > 0, namely ∀ u ∈ L^2(^d) , *u_L^2(^d)^2 ≤ C ∫_0^T *^- t P u_L^2(ω)^2 t . Then (ω, T - ) holds for any < 1/C. We use the fact that the propagator ^- t P is an isometry on L^2(^d) to get C ∫_T - ^T *^- t P u_L^2(ω)^2 t ≤ C ∫_T - ^T *^- t P u_L^2(^d)^2 t = C *u_L^2(^d)^2 . Thus we can absorb this term in the left-hand side of the observability inequality provided C < 1: (1 - C ) *u_L^2(^d)^2 ≤ C ∫_0^T - *^- t P u_L^2(ω)^2 t , namely (ω, T - ) holds with cost C (1 - C )^-1. § PSEUDODIFFERENTIAL OPERATORS We recall below basics of the theory of pseudodifferential operators (see the textbooks <cit.> for further details). We will also need a precise bound on the remainder of the pseudodifferential calculus and of the sharp Gårding inequality. This is why we reproduce the proofs of these results below. §.§ Weyl quantization Let a ∈(^2d). We define the operator a acting on the Schwartz class (^d) by [a u](x) = (2 π)^-d∫_^2d^ (x - y) ·ξ a(x + y2, ξ) u(y) y ξ , u ∈(^d), x ∈^d . 
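For orientation, this convention quantizes the coordinate symbols in the expected symmetric way: writing Op^w(a) for the operator defined above and D_j = -i ∂_{x_j}, one has

\begin{equation*}
\mathrm{Op}^{\mathrm{w}}(x_j) = x_j \,\cdot\,,
\qquad
\mathrm{Op}^{\mathrm{w}}(\xi_j) = D_j,
\qquad
\mathrm{Op}^{\mathrm{w}}(x_j \xi_j) = \tfrac{1}{2}\,\big(x_j D_j + D_j x_j\big).
\end{equation*}

The last identity illustrates the symmetric (Weyl) ordering of products of position and momentum variables.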
It is known that a : (^d) →(^d) is continuous. The quantization extends to tempered distributions: for any a ∈'(^2d), the operator a : (^d) →'(^d) is continuous. §.§ Symbol classes [Symbol classes] Let f be an order function.[A positive function f on the phase space is said to be an order function if ∃ C > 0, ∃ N > 0 : ∀ρ, ρ_0 ∈^2d , f(ρ) ≤ C ρ - ρ_0^N f(ρ_0) . ] Then the symbol class S(f) is the set of functions a ∈^∞(^2d) satisfying ∀α∈^2d, ∃ C_α > 0 : ∀ρ∈^2d , ∂^α a(ρ)≤ C_α f(ρ) . Collecting the best constants C_α for each α, the quantities a_S(f)^ℓ = max_α≤ℓ C_α , ℓ∈ , are seminorms that turn the vector space S(f) into a Fréchet space. Any a ∈ S(f) is a tempered distribution and yields a continuous linear operator a : (^d) →(^d). §.§ L^2-boundedness of pseudodifferential operators [Calderon-Vaillancourt] There exist constants C_d, k_d > 0 depending only on the dimension d such that the following holds: for any a ∈ S(1), the operator a can be extended to a bounded operator on L^2(^d) with the bound *a_L^2 → L^2≤ C_d a_S(1)^k_d . §.§ Refined estimate in the pseudodifferential calculus Let a_1, a_2 be two symbols. We have seen previously that the composition a_1a_2 makes sense as an operator on the Schwartz space. This operator is also a pseudodifferential operator, whose symbol is denoted by a_1 a_2, called the Moyal product of a_1 and a_2, and satisfies a_1a_2 = a_1 a_2 . More generally, one can define the h-Moyal product, depending on a parameter h ∈ (0, 1], as (a_1 _h a_2)(ρ) = ^- h/2(∂_ρ_1, ∂_ρ_2) a_1(ρ_1) a_2(ρ_2)_|ρ_1 = ρ_2 = ρ , where is the canonical symplectic form on ^2d. Taking h = 1, one gets a formula for the Moyal product in (<ref>) above. The h-Moyal product is known to be a bilinear continuous map between symbol classes; see <cit.> or <cit.> for instance. Let f_1, f_2 be two order functions. Then the map S(f_1) × S(f_2) → S(f_1 f_2) (a_1, a_2) ↦ a_1 _h a_2 is bilinear continuous, with constants independent of h ∈ (0, 1]. More precisely, for any ℓ∈, there exist k ∈ and C_ℓ > 0 such that *a_1 _h a_2_S(f_1 f_2)^ℓ≤ C_ℓ*a_1_S(f_1)^k *a_2_S(f_2)^k , ∀ h ∈ (0, 1], ∀ (a_1, a_2) ∈ S(f_1) × S(f_2) . A stationary phase argument leads to an asymptotic expansion of the Moyal product a_1 a_2 ∼∑_j (- /2)^jj!(∂_ρ_1, ∂_ρ_2)^j a_1(ρ_1) a_2(ρ_2)_|ρ = ρ_1 = ρ_2 . In the sequel, we denote by R_j_0(a_1, a_2) the remainder of order j_0 in this asymptotic expansion, namely R_j_0(a_1, a_2)(ρ) = a_1 a_2 - ∑_j = 0^j_0 - 1(- /2)^jj!(∂_ρ_1, ∂_ρ_2)^j a_1(ρ_1) a_2(ρ_2)_|ρ_1 = ρ_2 = ρ . Estimates on this remainder term are usually stated as follows. Let f_1, f_2 be two order functions. Then for any integer j_0 ≥ 1, the map S(f_1) × S(f_2) → S(f_1 f_2) (a_1, a_2) ↦R_j_0(a_1, a_2) is bilinear continuous. In our study, it will be convenient to have a slightly more precise statement. Actually, the explicit formula for the remainder allows to prove that its seminorms are controlled not only by the seminorms of a_1 and a_2 but more precisely by the seminorms of the derivatives ^j_0 a_1 and ^j_0 a_2. Let f_1, f_2 be two order functions. Then for any j_0 ≥ 1, it holds ∀ℓ∈, ∃ k ∈, ∃ C_ℓ > 0 : *R_j_0(a_1, a_2)_S(f_1 f_2)^ℓ≤ C_ℓ*^j_0 a_1_S(f_1)^k *^j_0 a_2_S(f_2)^k , for all (a_1, a_2) ∈ S(f_1) × S(f_2). We outline the arguments of the proof, which are classical, trying to keep track of constants carefully. 
The starting point of this result is the explicit expression of the remainder (see <cit.> for instance): R_j_0(a_1, a_2)(ρ) = (- 2)^j_0∫_0^1 (1 - t)^j_0 - 1(j_0 - 1)!^-t/2(∂_ρ_1, ∂_ρ_2)(∂_ρ_1, ∂_ρ_2)^j_0 a_1(ρ_1) a_2(ρ_2)_|ρ_1 = ρ_2 = ρ t . The binomial expansion of (∂_ρ_1, ∂_ρ_2)^j_0 exhibits a particular structure: we observe that the integrand of the integral over t can be written as a sum of terms of the form ^-t/2(∂_ρ_1, ∂_ρ_2) (∂^α_1 a_1)(ρ_1) (∂^α_2 a_2)(ρ_2)_|ρ_1 = ρ_2 = ρ with α_1 = α_2 = j_0, which corresponds exactly to ∂^α_1 a_1 _t ∂^α_2 a_2. By Proposition <ref>, we know that the Moyal product is a bilinear continuous map S(f_1) × S(f_2) → S(f_1 f_2) with respect to the Fréchet space topology, with seminorm estimates independent of t ∈ (0, 1]. This yields *R_j_0(a_1, a_2)_S(f_1 f_2)^0 ≤ C_0 *^j_0 a_1_S(f_1)^k *^j_0 a_2_S(f_2)^k . In order to handle seminorms of order ℓ≥ 0, we use the Leibniz formula: ∂R_j_0(a_1, a_2) = R_j_0(∂ a_1, a_2) + R_j_0(a_1, ∂ a_2) , and we apply (<ref>). The result follows. §.§ Positivity Heuristically, the quantization of a non-negative symbol is an almost-non-negative operator. The formal statement, known as the Gårding inequality, says that the negative part of the operator is controlled in terms of the Planck parameter in semiclassical analysis, or exhibits some decay at infinity in the phase space in microlocal analysis. In the main part of the article, we need to apply the Gårding inequality to a symbol in S(1) whose derivatives, of any order, behave like 1/R, where R is a large parameter. Unfortunately, such a symbol does not fit in the semiclassical framework, in which derivatives of order j behave like 1/R^j. Thus we provide in this paragraph a refined statement of the sharp Gårding inequality that keeps track of the dependence of the remainder term on the seminorms of the derivatives of the symbol. There exists a constant c_d > 0 and an integer k_d ≥ 0 depending only on the dimension d such that the following holds. For any real-valued symbol a ∈ S(1) satisfying a ≥ 0, one has a≥ -c_d * a_S(1)^k_d𝕀 . We redo the usual proof (see for instance <cit.>) using the refined estimate on the remainder in the pseudodifferential calculus (Proposition <ref>). Let us prove that for z sufficiently negative, the operator a - z is invertible, which in turn shows that it is non-negative by classical arguments. Step 1 ­– Estimate of the derivatives of (a - z)^-1. Using the assumption that a ≥ 0, we classically have ∇ a(ρ)≤√(2 a_∞ a(ρ)) , ∀ρ∈^2d (see <cit.> for instance). Besides the Faà di Bruno Formula tells us that for any nonzero α∈^2d, the partial derivative ∂^α (a - z)^-1 can be computed as a sum of terms of the form 1(a - z)^1 + ℓ∏_j = 1^ℓ∂^α_j a , with 1 ≤ℓ≤α, ∑_j = 1^ℓα_j = α, α_j≠ 0, ∀ j. Denote by ℓ' the number indices j such that α_j = 1. We apply (<ref>) to the ℓ' factors of the form ∂^α_j a corresponding to these indices, and we bound the ℓ - ℓ' other ones by seminorms of the Hessian of a (recall that α_j≥ 2 for those remaining indices). We obtain *1(a - z)^1 + ℓ∏_j = 1^ℓ∂^α_j a≤1a - z^1 + ℓ(2 * a_∞ a(ρ))^ℓ'/2(* a_S(1)^α)^ℓ - ℓ' . We deduce that *1(a - z)^1 + ℓ∏_j = 1^ℓ∂^α_j a ≤1a - z^1 + ℓ 2^ℓ'/2*a(ρ)^ℓ'/2(* a_S(1)^α)^ℓ - ℓ'/2 ≤1a - z^1 + ℓ 2^ℓ'(*a - z^ℓ'/2 + z^ℓ'/2) (* a_S(1)^α)^ℓ - ℓ'/2 . Putting together all the terms in the Faà di Bruno Formula, and using that a - z ≥z (since z ≤ 0), we finally get that there exists a constant C > 0 (depending on α) such that *∂^α1a - z≤Czmax_1 ≤ℓ≤α 0 ≤ℓ' ≤ℓ( a_S(1)^αz)^ℓ - ℓ'/2 . 
Assuming that z≥ a_S(1)^α, we arrive at *∂^α1a - z≤Cz√( a_S(1)^αz) . Step 2 ­– Invertibility of a - z. From the previous step, we know that a - z and (a - z)^-1 are in S(1) with explicit seminorm estimates, provided z is large enough. We perform the pseudodifferential calculus: a - z1a - z = 𝕀 + 0 + R_2 , keeping in mind that the second term in the asymptotic expansion vanishes because both symbols are functions of the same symbol. According to the Calderón-Vaillancourt Theorem (Theorem <ref>), our refined estimate on the remainder (Proposition <ref>), and finally to (<ref>), we obtain *R_2_L^2 → L^2≤ C_d *R_2_S(1)^k_d≤ C_d * a_S(1)^k_1'* (a - z)^-1_S(1)^k_2'≤ C ( a_S(1)^kz)^3/2 , for some constant C and some integer k independent of a and z, and provided z is negative enough. Actually when z ≤ - (2 C)^2/3 a_S(1)^k, we obtain that R_2≤ 1/2, so that 𝕀 + R_2 is invertible by Neumann series. This leads classically to the invertibility of a - z, which concludes the proof. plain
http://arxiv.org/abs/2307.01918v3
20230704210418
Computational Reproducibility in Computational Social Science
[ "David Schoch", "Chung-hong Chan", "Claudia Wagner", "Arnim Bleier" ]
cs.CY
[ "cs.CY" ]
In the last decade, replication and reproducibility crises have shaken the scientific landscape. As potential solutions, open science practices were heavily discussed and have been implemented with varying success in different disciplines. We argue, however, that the binary definition of reproducibility, specifically for computational-X disciplines such as computational social science, is insufficient since it is not explicit about the agents and conditions under which results can be reproduced. We expand the definition to avoid "open washing", the practice of fabricating theoretical reproducibility but not supporting practical or verified reproducibility, and introduce a tier system of computational reproducibility based on the concept of verifiability. We identify common barriers to verifiable computational reproducibility, specifically in the field of computational social science, and provide suggestions on how to circumvent common access and computational barriers. § A CRISIS, A RENAISSANCE, AND A REVOLUTION We begin this paper by reiterating three important scientific events in the 2010s. The first was a series of related crises. As <cit.> put it, the 2010s were considered psychology's decade of crisis in terms of replicability <cit.>. Social scientists, however, have recognized that this crisis is not an isolated phenomenon in psychology <cit.>, but a disease that wreaks havoc in multiple fields, and various subdisciplines of the social sciences have identified their own crises, e.g. political science <cit.> and communication science <cit.>. A symptom of this disease is the widespread problem of questionable research practices such as p-hacking, HARKing, and downright data falsification, which led to many high-profile retraction scandals in the 2010s (e.g. the cases of Diederik Stapel and Brian Wansink). The second important event was a renaissance. A reaction to the crisis (the first event) was the call for improvement of scientific practices. Researchers also trace this back to the early 2010s <cit.>. This growing movement in the scientific community to improve research practices and increase transparency in reporting scientific results <cit.> (usually discussed with reference to Robert Merton's mid-1940s model of science, with norms such as organized skepticism and communalism) promotes practices such as pre-registering studies to reduce bias, using larger sample sizes, making data and code openly available, and increasing replication studies. Many of the proposed enablers of replicability can be summarized under the term "Open Science", including the sharing of data (Open Data), code and other research material (Open Material), methods (Open Methodology), and Open Access <cit.>. Calls for more Open Science practices and efforts for more transparency in research have since resonated through major social science disciplines such as psychological <cit.>, political <cit.>, and communication <cit.> science. The third important event was a revolution. The 2010s are also remembered as the decade in which computational methods began to be applied to the study of social phenomena. Some researchers <cit.> select 2009 as the emblematic starting year of computational social science (CSS) because of the now widely-cited paper in Science by <cit.>. CSS research covers almost every topic related to human behavior.
It is thus not a discipline centered around a single branch of knowledge, like the brain in neuroscience. Rather, it is characterized by data, and computational and statistical methods used to provide evidence-driven answers to questions that emerge in other social scientific disciplines. This data and method focus, however, is what makes CSS susceptible for the symptoms of the described crisis. Not in terms of replicability, but reproducibility. In this paper, we argue that the binary definition of reproducibility is insufficient for computational social science and potentially other computational-X disciplines. We propose a conceptual expansion of the term by introducing two additional dimensions that are important for assessing the reproducibility of any type of research that uses data and computational methods: who is the reproducibility agent (i.e. who conducted the reproducibility check) and what is the reproducibility environment (i.e., where was the reproducibility check executed)? We further argue that the use of computational methods does not only generate new reproducibility issues  <cit.>, but also offers new opportunities via documented and automated research pipelines that can be executed in various computational environments by different agents. The remainder of the paper is structured as follows. In the first part, we thoroughly define the term computational reproducibility, ironing out conceptual ambiguities from the literature. We continue by discussing the practical aspects of our proposed definition of computational reproducibility. Lastly, we describe common barriers to achieving computational reproducibility and how to resolve these barriers. § A TIER SYSTEM OF COMPUTATIONAL REPRODUCIBILITY There has been a great deal of confusion about the meanings of “replicability” and “reproducibility”. The two terms are sometimes even used interchangeably and different fields have different understandings of both <cit.>. In this paper, we initialize our discussion with the definitions provided by the American Statistical Association (ASA). ASA defines replication as “the act of repeating an entire study, independently of the original investigator without the use of original data (but generally using the same methods).” Reproducibility, however, is defined by the ASA as “you can take the original data and the computer code used to analyze the data and reproduce the numerical findings from the study.” We deliberately call this computational reproducibility to avoid further confusion. The distinction between replicability and reproducibility, together with robustness and generalizability, is commonly presented as a two-by-two grid like in Table <ref>. A drawback of this two-by-two presentation is that it does not present the criteria of a successful event. A definition including such criteria is given by <cit.>: “A result is reproducible when the same analysis steps performed on the same dataset consistently produces the same answer.” Note that this definition is compatible with our guiding ASA definition and the definition by <cit.>. 
Therefore, a better summary of (computational) reproducibility should be “same data, same analysis, same (consistent) answer.” However, we argue that this definition is still not sufficient to describe computational reproducibility unambiguously since it neglects two dimensions that are important for its assessment: (i) “who is the agent that is able to conduct the reproducibility check?” and (ii) “what is the computational environment in which the reproducibility check can be conducted?” Existing definitions for reproducibility lack clarity about those dimensions. The Turing Way definition does not specify the agent, whereas the definition by the ASA uses an ambiguous “you”.[An interesting sidenote to this is that <cit.> stated in the early 90s that in the future (i.e. now), the judgement of computational reproducibility does not require an expert: “a clerk can do it.” Theoretically, we could also go as inclusive as <cit.> to make this “by” question to include all parties, e.g. an indifferent clerk. But making this statement exceedingly inclusive also opens up room for people without any proper knowledge in the subject matter to criticize the computational reproducibility of a scientific work.] Without this ambiguity being clarified, one can claim a result to be reproducible when the result is reproduced on a specific laptop used by any original investigator (the first party). To be meaningful, we maintain that the “you” in the original ASA definition should actually mean all stakeholders: The original investigators who made the claim (first party), and other researchers (third party). A potential third group of stakeholders is a “trusted” third party. For instance, they can be people assigned by the publication outlet or the regulator to conduct the reproducibility check. This group of trusted third-party agents should have full access to the original data and perhaps also the original computational environment. The Turing Way definition and the ASA definition do not consider the computational environment at all, despite its importance for computational reproducibility. We argue that the accessibility of suitable computational environments should be treated as equally important as the accessibility of data and methods and should therefore become an integral part of the definition for computational reproducibility. For computational environments, we differentiate between the local computational environment that can only be used by the original authors, a restrictive environment for trusted third parties and nonrestrictive environments. External researchers do not have access to the local computational environment used by the original authors. Instead, external researchers should have no restriction to (re)construct a compatible computational environment on their own where the computer code is executable. We call this “nonrestrictive computational environment.” However, there is great variation in a computational environment that can prevent the shared computer code from being executable or consistently producing the same answer. We will discuss the details in a subsequent section. We define the basic level of computational reproducibility as “A result is reproducible when the same analysis steps performed on the same dataset consistently produce the same answer. 
This reproducibility of the result can be checked by the original investigators and other researchers within a nonrestrictive computational environment.” To ease the discussion in the rest of this paper, we call this first-order computational reproducibility (1°CR) which allows us to define a tier system of computational reproducibility [This tier system is partly inspired by the classification of software introduced by <cit.>: re-runnable (R^1), repeatable (R^2), reproducible (R^3), reusable (R^4), and replicable (R^5).]. This basic level of computational reproducibility is not verified externally. However, the original authors have given all the materials, including data, code, and a nonrestrictive computational environment, for third-party agents for its verification. A result can be said to be externally verified computational reproducible, or having third-order computational reproducibility (3°CR) when the same analysis steps performed on the same dataset by external researchers consistently produce the same answer. This involves the execution of the shared computer code with the shared data by external researchers. For practical reasons, we also define the state of second-order computational reproducibility (2°CR) where only trusted third-party agents can confirm the computational reproducibility. This limited state is sometimes useful when a finding is based on the analysis of highly sensitive raw data or with highly specialized equipment. This state is constrained by the restrictive access to materials such as data or computational resources. The complete tier system of computational reproducibility is summarized in Table <ref>. An implication of this tier system is that a standalone paper does not provide any evidence of computational reproducibility (the upper left cell). Papers should thus be considered secondary output and only serve three purposes: (1) to aid reviewers and editors to assess the validity of the claims, (2) to work as a record to facilitate citation, the so-called “one-dimensional credit system” of academia <cit.>, and (3) to advertise the actual CSS work, i.e. code and data. This is by no means a novel suggestion. <cit.> already called for such a model of publication: "An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures." § PRACTICAL ASPECTS OF VERIFIABILITY The verifiability of computational reproducibility can be explained using <cit.>'s reproducibility spectrum. Any result that is obtained using computational analysis steps and made public only as a traditional paper publication should be deemed as “not reproducible”. Computational reproducibility can only be supported by his proposed “Publication +” model. The provision of data and code supporting the analysis that came up with the result is the minimum standard, as stated in our definition of 1°CR above. Based on this standard, a claim without providing code and data is not verifiable by anyone else than the original author and thus should be classified as “not reproducible”. With this demarcation line drawn (provision of data and computer code), we can conclude that most of published CSS findings so far are not reproducible. 
Although the situation has changed in recent years <cit.>, sharing of code and data is still not a quid pro quo for the publication of CSS papers in scientific journals. Important outlets of CSS, for instance Social Media & Society, do not mention anything on data and computer code sharing in their instructions to authors in 2023. The sharing of data and computer code is only encouraged by some other outlets, e.g. Social Science Computer Review, New Media & Society and Information, Communication & Society. Researchers should be highly skeptical about findings that do not have any supporting data and computer code. Many papers might have the claim that data and computer code are available upon request. However, we should still consider findings with these claims to be not reproducible. Several previous studies have shown that researchers do not necessarily respond to these requests <cit.>, or they themselves do not have access to their own data and code as time goes by <cit.>. <cit.> report that the availability rapidly decreases with age of the paper. For instance, only 26% of data could still be obtained for six year old papers. However, even if journals have stricter mandates on data and code sharing, they seemed to be ignored in the past <cit.>. In contrast, if papers come with a data (and code) availability statement, 80% of data and code remain available even over a longer period of time <cit.>. There must be strong reasons provided to justify the non-sharing of data. We agree that the provision of data could be a sensitive issue. Sharing of personal data, even when anonymized, can have unintended social consequences. With enough computational power and effort, it is not too difficult to re-identify the individuals from the data <cit.>. Sharing of data about vulnerable groups might expose them to harassment <cit.>. Some data, such as media content data, are copyrighted and redistributing those data in verbatim is copyright infringement <cit.>. The protection of personal identifying information, vulnerable groups, and the researchers themselves always trumps the need for opening up the raw data to the public. Even with the data protection concerns researchers can still share some data. Processed datasets (e.g. aggregated data) are frequently shared <cit.>. In those cases, the code used for processing the raw data should also be provided. There are other technical reasons and design choices that can render some data (and code) not shareable, mostly associated with the applications of proprietary analytic solutions and data sources. But those cases are controllable and can be mitigated and will be discussed later. In summary, researchers together with their publication outlets should make their findings at least 2°CR. Researchers should eliminate any access barrier to their code and data and make the computational reproducibility of their findings verifiable. In exceptional situations, researchers can settle with 2°CR and data sharing can be waived. However, under such circumstances, researchers should consider partnering with trusted third-party agents, appointed by publication outlets or other relevant entities like regulators, to guarantee the computational reproducibility of their results. An example is <cit.>'s experience of accessing confidential data for the purposes of checking the reproducibility (in our language, 2°CR) of works submitted to Journal of Econometrics. 
§ INCENTIVES FOR REPRODUCIBILITY AND AGAINST OPEN-WASHING Similar to concerns such as greenwashing and pinkwashing, open-washing – having an appearance of being open for marketing purposes, while in reality being proprietary – has not only been a concern in the business world but also in science <cit.>, specifically when claims of openness are being made (“material available on request”) without being verified. As we argued above, any published study in the traditional model is by default not reproducible. It should be the responsibility of the first- and second-parties, i.e. the original authors who make those openness claims and journals which reward openness, e.g. in form of badges to at least ensure 1°CR. The tier system lends itself nicely to implement a fact-based audit system. This audit system should not be opt-in, but the computational reproducibility of any work should be objectively audited for its placement on the tier system. We are aware that implementing such a rigorous scheme to check for openness is a complex matter. A positive turn of events is that Political Communication, a journal already issuing a form of Open Science badges, has a funded data editor to check these Open Science claims <cit.>. In our opinion, all journals publishing CSS research should consider appointing a data editor to increase the accountability of any openness claim and implement a audit system along the tier system of computational reproducibility, with the ultimate goal for any study to attain 3°CR. Establishing such a scheme may also help tackle one of the biggest enemies for reproducibility: the complete lack of incentives, except perhaps moral responsibilities. Academic recognition or professional advancement are still largely based on quantifiable research assessment indicators such as amount of grant money, number of citations, h-index, and impact factor of the publishing outlets. This lack of incentives can only be resolved by changes in publication policies <cit.> and evaluation practices in academia. For example, a journal announces in its instructions to authors its required minimum order of computational reproducibility, e.g. 2°CR. Upon the conditional acceptance of a manuscript based on the traditional form of peer review, the data editor audits the computational reproducibility of the work and reports its placement in the tier system. The ultimate decision by the editor could then be contingent on whether or not the work can attain the minimum order of computational reproducibility the journal required. Implementing an audit system based on the tier system can be a step toward establishing quantifiable measures of computational reproducibility. But even when researcher try to achieve 3°CR, there are still many barriers to breakthrough before reaching the pinnacle of reproducibility. These barriers prevent either the successful execution of the code and / or to obtain the same consistent result. In the subsequent sections, we will discuss the manifestations of such barriers and how to resolve them. § BARRIERS IMPOSED BY EXTERNAL DEPENDENCIES External dependencies are parts in the research pipeline which some external stakeholders have complete control, but the researchers who create or use the research pipeline does not. Computational social science, compared with other social science disciplines, is very reliant on the generosity of big tech companies which can affect all steps in the research pipeline. For instance, collecting data from their APIs or using data offers (e.g. 
Twitter API or Social Science One), data analysis APIs (e.g. Google Perspective API for checking online incivility or ChatGPT), and other free or paid services. In any scenario, the most common barrier is changing access policies for the offered services. Following the Cambridge Analytica scandal in 2018, many social media companies started limiting academic access to data collection via their APIs. The most recent example is academic research access to the Twitter API, which went from being heralded as a good example of platform collaboration in 2021 to the abrupt deprecation of free access in February 2023. This left many Twitter researchers in a dire situation and poses threats to the computational reproducibility of existing research: shared Twitter datasets in the format permitted by X Corp. (formerly Twitter Inc.) consist of Tweet IDs only. Without access to the academic API, these shared Tweet IDs are now meaningless because retrieving a tweet's complete information from its ID is no longer possible for free. Similar to APIs, some other proprietary data sources, such as Social Science One, have strong restrictions on how the data can be shared. The data used in such research are technically not owned by the researchers. Scholars have been debating whether this restrictive access is fair <cit.> or not <cit.>, but the non-universal access to proprietary data surely manifests itself as a barrier to reproducibility. The ups and downs of API access have left CSS flip-flopping between a “Golden Age of Data” <cit.> and a “Post-API age” <cit.>. However, even without restrictions in place, using such data can pose barriers for reproducibility efforts. APIs have always been proprietary black boxes, which were never really intended for academic use <cit.>, and we rarely know what type or quality of data we are analyzing <cit.>. If data explicitly needs to be gathered again to reproduce results, then we cannot guarantee that all relevant data is still available. Moreover, even when the data is still available, the terms of use of data gathered from the API may have changed and make certain research impossible. For instance, Reddit recently decided to no longer allow the training of machine learning models with their data <cit.>. These issues render original results potentially irreproducible without access to the raw data. Besides data access, external dependencies on big tech companies also threaten other aspects of the research pipeline. Many services heavily used by researchers to foster computational reproducibility, for instance Google Colaboratory and GitHub, are owned by big tech companies. As with API access, it is imprudent to assume that these companies will maintain the provision of these services without restrictions. OpenAI, an organization that originally started as a non-profit, does not share its latest models, such as GPT-3, Codex, and GPT-4, as open source software. Google also has a history of discontinuing popular research services such as Google Fusion Table and Google Refine. This potentially renders all research studies using these products obsolete along with them. Therefore, one should consider such services as having an expiration date, namely the moment the generosity of these big tech companies runs out.
A last barrier relating to external dependencies concerns the use of proprietary commercial software such as Stata and Tableau, which imposes a preventable economic barrier to reproducibility: external researchers are obliged to purchase licenses just for the purpose of verifying the reproducibility of a result generated with these proprietary applications. This issue also manifests itself in another way when proprietary services are used in the form of RESTful APIs, e.g. the Google Perspective API, the Google Translation API, ChatGPT, or Botometer <cit.>. Scholars have argued that these "blackbox" services are intrinsically irreproducible because the algorithms running on the service provider side are constantly changing <cit.>. Similarly, the aforementioned economic barrier applies if these services (e.g. the Google Translation API, ChatGPT) carry a cost. § RESOLVING EXTERNAL DEPENDENCIES VIA ALTERNATIVE DATA SOURCES AND OPEN SOURCE SOFTWARE The dependencies on external stakeholders are a tremendous threat to computational reproducibility, and the scientific community should think about alternative ways to study the technology-infused social reality. On the data front, a possible alternative is to shift from relying on the generosity of big tech to the generosity of users, via digital data donations <cit.>. To prevent computational barriers imposed by external dependencies, we advise against the release of academic research software in the form of a RESTful API, as in the example set by the widely used Botometer <cit.>. Maintaining these APIs requires an enormous amount of resources, and the reproducibility of downstream research using them depends on the sustained allocation of these resources. In our opinion, this kind of effort is not only wasteful, but potentially also hinders researchers from meeting the requirements for 1°CR. Researchers are still calling for Botometer to open source its algorithm <cit.>. [In other fields, research software is also made available as RESTful APIs. The most important example is the web BLAST search tool for bioinformatics <cit.>. However, this tool is drastically different from the case in point discussed here: 1) the web tool enjoys sustained support from a major US government research agency and 2) the underlying computer code of BLAST is available in the public domain.] Research software should be released as open source software. In general, researchers should consider not using commercial API services such as the Google Translate API and ChatGPT in their CSS research since they introduce opacity into the entire analytical pipeline. When free and open source alternatives are available, those alternatives should be considered first. For example, machine translation can be replaced by locally deployed open source software packages such as OPUS-MT <cit.>, whereas prompt-based LLMs such as ChatGPT can be replaced with Alpaca <cit.>; a minimal sketch of such a local replacement for a commercial translation API is given below. Such locally deployed software might not offer state-of-the-art performance. But that performance impact, which is usually not important enough to affect the analysis and can be statistically adjusted for <cit.>, is a reasonable price to pay to improve reproducibility. One often ignored aspect of using these commercial API services is that private or sensitive data get transferred to a commercial entity (e.g. Google or OpenAI), a transfer to which the participants of the study probably did not consent. In the same vein, researchers should use open source software as much as possible.
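As an illustration, the following minimal sketch shows how a commercial translation API can be swapped for a locally deployed OPUS-MT model through the transformers interface. The checkpoint name and revision used here are placeholders for whichever model a project actually relies on; the point is that both can and should be pinned and recorded, so that no text leaves the researcher's machine and the exact model weights become part of the documented computational environment.

# Minimal sketch: replacing a commercial translation API with a locally
# deployed OPUS-MT model. Assumes the "transformers" and "sentencepiece"
# packages are installed; the checkpoint name and revision below are
# illustrative placeholders and should be pinned to the exact values used.
from transformers import pipeline

MODEL_NAME = "Helsinki-NLP/opus-mt-de-en"  # German-to-English OPUS-MT checkpoint
MODEL_REVISION = "main"                    # pin a specific commit hash in real use

translator = pipeline("translation", model=MODEL_NAME, revision=MODEL_REVISION)

def translate(texts):
    # Runs entirely locally: no private or copyrighted text is sent to a third party.
    return [result["translation_text"] for result in translator(texts)]

if __name__ == "__main__":
    print(translate(["Offene Wissenschaft verbessert die Reproduzierbarkeit."]))

Pinning the model revision matters for the same reason as pinning package versions: if the checkpoint is silently updated, the translations, and therefore any downstream coding or classification of the material, may change.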
For CSS research, this is relatively easy due to the de facto duopoly of R and Python as the most used programming languages <cit.>. In a broader sense, this also includes the use of open source operating systems as computational environment. § BARRIERS IMPOSED BY COMPUTATIONAL ASPECTS Computational barriers for reproducibility can be divided into human and technical components. The human induced barriers include the still high unwillingness to actually share <cit.>, and knowledge gaps in programming practices, where the former is partially a consequence of the latter since researchers are afraid that their “bad” programming skills are exposed when sharing code <cit.>. Other often cited reasons for not sharing code are time constraints for making the code camera-ready. For the purposes of this paper, code should be correct and reproducible. All other criteria of code quality are less important. But in order to achieve the two, programming knowledge is needed and knowledge gaps among CSS researchers clearly exist, already due to the high interdisciplinary of the field <cit.>. Beyond disciplinary boundaries, however, the gap also stems from the fact that the vast majority of researchers, even those who develop research software, are primarily self-taught and not equipped with any formal training in software development <cit.>. Researcher can probably write executable code but have varying skills in standard software development practices such as using unit tests and continuous integration <cit.> to show correctness. Code shared by researchers is usually written to “do the job once” and not with longevity or computational reproducibility in mind. The (technical) computational barrier that overshadows the human components is the diversity of computational environments used by researchers. Previous studies showed that most researchers run their analyses on their desktop or laptop computers rather than standardardized computing environments such as High Performance Computing or Cloud environments <cit.>. This creates a great source of diversity in computational environments and the details of the computational environment used in the analysis is usually underdescribed. In order to rerun the shared computer code to check for 3°CR, an external researcher must run the computer code in a compatible computational environment to the one used by the original investigator. And there are many variations in the computational environments that can prevent the code from running. A computational environment can have many moving parts and those moving parts can either stop the code from running or give different results. As a simple model and considering only the software layer, <cit.> list out four components that can vary from one computational environment to another: (A) operating system, (B) system components, (C) the exact version of the programming language, and (D) what and which version of the software libraries. It points to the fact that computational reproducibility is tacitly dependent on the computational environment the computer code is running on. To give an example: The popular R package topicmodels <cit.> does not work on Linux if the underlying system component (Component B) of GNU Scientific Library is not available. In some cases, the output depends on the exact version of a particular software library (Component D). 
For instance, many code snippets published in the software paper of the popular text analysis package quanteda <cit.> are not executable with recent releases due to a major rewrite of the software; one must use a release prior to April 2021 to run those code snippets. As the definition of computational reproducibility calls for the same code and data to consistently produce the same result, one can attain this either by (1) making sure the code and data produce the same result in all computational environments, or by (2) making sure that there is a way to consistently generate a compatible computational environment for the code and data to run on. The former is difficult, while the latter is relatively easy. A simple solution is to produce a reproducible computational environment where at least the four aforementioned components are clearly documented and can be automatically regenerated. § RESOLVING COMPUTATIONAL BARRIERS VIA PROACTIVE REPRODUCIBILITY Thus far, we have mainly addressed reproducibility from a retroactive point of view, i.e. as a problem that occurs after a study is published. In this section, we draw a distinction between proactive and retroactive computational reproducibility. If computational reproducibility is not an afterthought but a built-in feature from the beginning, this kind of proactive computational reproducibility does not require any code cleanup to make the computer code shareable. Instead, proactive computational reproducibility leaves considerable room for automating the usual chores. Due to the knowledge gap, not all CSS researchers have the same skill set to foster computational reproducibility. Luckily, there is an existing literature on how to write reproducible and testable code <cit.>. Best practices for writing reproducible code should be promoted. Inclusive educational initiatives such as Software Carpentry <cit.> and The Turing Way <cit.> should be promoted and supported. Also, CSS courses should focus more on software engineering fundamentals such as software testing. One way proactive computational reproducibility can help is to organize the computer code and data of a research project as a reproducible research compendium from the beginning. A research compendium encourages researchers to organize computer code and data separately into a sensible structure. Computer code is documented, and Literate Programming <cit.> techniques can also be used to make sure the reporting in the manuscript is perfectly aligned with the data analysis. Tools have been developed to fully automate the process of authoring a reproducible research compendium <cit.>. Virtualization systems, such as Docker and Apptainer, have been recommended to increase research reproducibility in other fields <cit.>. The declarative description of a computational environment, such as a Dockerfile or an Apptainer Definition File, is a plain text file and can be shared inside a research compendium. However, it has also been acknowledged that writing these declarative descriptions requires skills that not many researchers have <cit.>, but tutorials such as <cit.> are available. The procedure for writing a Dockerfile can be automated, and tools such as containerit <cit.>, repo2docker <cit.>, and rang <cit.> are available. Figure <ref> shows an example of an automatically generated declarative description that pins down all four software components; a simplified, hand-written sketch of such a description is shown below.
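For illustration only, a minimal hand-written Dockerfile that pins down the four components might look like the following sketch. The base image, the library versions, and the analysis script name are assumptions chosen for this example; in practice they would reflect the versions actually used in the project or be generated automatically by the tools mentioned above.

# (A) operating system and (C) exact version of the programming language,
# pinned via a versioned base image (Debian-based, R 4.2.2)
FROM rocker/r-ver:4.2.2

# (B) system components, e.g. the GNU Scientific Library needed by topicmodels
RUN apt-get update && apt-get install -y --no-install-recommends libgsl-dev \
    && rm -rf /var/lib/apt/lists/*

# (D) software libraries, pinned to exact (illustrative) versions
RUN Rscript -e 'install.packages("remotes"); remotes::install_version("topicmodels", version = "0.2-12")'

# Copy the research compendium into the image and declare the analysis entry point
COPY . /compendium
WORKDIR /compendium
CMD ["Rscript", "run_analysis.R"]

Because each of the four components is fixed to an explicit version in a plain text file, an external researcher can rebuild a functionally equivalent environment long after publication without guessing what the original machine looked like.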
We strongly recommend using Linux or Linux-based virtualization systems for CSS research, owing to their maturity, stability, compatibility with containerization, and open licensing, all of which enhance reproducibility. Popular Linux distributions such as Debian, Fedora, and Ubuntu are good enough for reproducibility because their software repositories maintain good archives of old software packages. Our recommendations to enhance reproducibility depending on the sharability of code and data are shown in Table <ref>. In the optimal case where both data and code can be shared, 3°CR can be achieved by following best practice guides to write reproducible code, using a research compendium, and providing a declarative description of the computational environment. If data cannot be shared, e.g. for privacy reasons, researchers should still follow all mentioned practices for establishing a reproducible pipeline and either choose a journal that allows for a reproducibility check for 2°CR or, alternatively, deposit their data with a trusted third party which allows for secure access to sensitive data <cit.>. In terms of code, we argued that there is no convincing reason not to share the code that produced the final results of a paper. Hence, if researchers are not willing to share their code, independent of the availability of data, the paper should not be published, since it cannot achieve even the minimal level of computational reproducibility, 1°CR. § CONCLUSION In this paper, we introduced a definition of computational reproducibility that goes beyond the usual binary version of the concept. Incorporating the agent and the computational environment as separate dimensions allowed us to define a tier system that covers various levels of verifiable computational reproducibility based on who reproduces the results and where. We argued that most CSS research cannot be considered reproducible, given that the minimum requirement, the sharing of code and data, is still not mandated by many journals and researchers rarely share their material voluntarily. But even if researchers are willing to do so, there are many barriers to break through in order to make a piece of research truly reproducible. For many such barriers, we offered alternative approaches that facilitate the process of making research verifiably reproducible. However, there are still some barriers that are harder to eliminate. For the computational environment, we restricted our discussion to the software layer. Increasingly often, modern CSS research demands heavy computational resources such as massive GPU clusters. Researchers who do not have access to this special equipment might not be able to reproduce the work, which may thus render CSS research “irreproducible”. Even with this, the computational environment dimension in our definition is still useful for making such research reproducible. The original researcher can either allow trusted third parties restricted access to the special equipment to achieve 2°CR, or try to use some off-the-shelf equipment instead (or at least equipment common in public high-performance computing clusters) to make the computational environment nonrestrictive and achieve 3°CR. Implementing our suggestions at a larger scale is a challenging and complex task and would require change on many institutional levels. The most obvious approach is to design a system of incentives that fosters proactive reproducibility either through rewards or penalties.
On the one hand, making a study reproducible should not be a burden but rather an achievement comparable to publishing a paper itself. The number of papers classified as 3°CR should be an indicator that search committees recognize and that helps scholars advance their careers. On the other hand, journals could set a minimum required order of reproducibility to be eligible for publication. Both suggestions require large structural changes, either by rethinking the “one-dimensional credit system” of academia or by restructuring the publication process and hiring dedicated personnel who conduct the reproducibility checks for journals. [Nosek et al.(2022)Nosek, Hardwicke, Moshontz, Allard, Corker, Dreber, Fidler, Hilgard, Kline Struhl, Nuijten, Rohrer, Romero, Scheel, Scherer, Schönbrodt, and Vazire]nosek:2022:RRR Brian A. Nosek, Tom E. Hardwicke, Hannah Moshontz, Aurélien Allard, Katherine S. Corker, Anna Dreber, Fiona Fidler, Joe Hilgard, Melissa Kline Struhl, Michèle B. Nuijten, Julia M. Rohrer, Felipe Romero, Anne M. Scheel, Laura D. Scherer, Felix D. Schönbrodt, and Simine Vazire. Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology, 730 (1):0 719–748, 2022. ISSN 1545-2085. 10.1146/annurev-psych-020821-114157. URL <http://dx.doi.org/10.1146/annurev-psych-020821-114157>. [Shrout and Rodgers(2018)]shrout2018PSK Patrick E. Shrout and Joseph L. Rodgers. Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 690 (1):0 487–510, 2018. 10.1146/annurev-psych-122216-011845. URL <https://doi.org/10.1146/annurev-psych-122216-011845>. [Open Science Collaboration(2015)]OSC2015:E Open Science Collaboration. Estimating the reproducibility of psychological science. Science, 3490 (6251), 2015. ISSN 1095-9203. 10.1126/science.aac4716. URL <http://dx.doi.org/10.1126/science.aac4716>. [Wuttke(2018)]wuttke:2018:WTM Alexander Wuttke. Why too many political science findings cannot be trusted and what we can do about it: A review of meta-scientific research and a call for academic reform. Politische Vierteljahresschrift, 600 (1):0 1–19, 2018. ISSN 1862-2860. 10.1007/s11615-018-0131-7. URL <http://dx.doi.org/10.1007/s11615-018-0131-7>. [Dienlin et al.(2020)Dienlin, Johannes, Bowman, Masur, Engesser, Kümpel, Lukito, Bier, Zhang, Johnson, and et al.]dienlin2020AOS Tobias Dienlin, Niklas Johannes, Nicholas David Bowman, Philipp K Masur, Sven Engesser, Anna Sophie Kümpel, Josephine Lukito, Lindsey M Bier, Renwen Zhang, Benjamin K Johnson, and et al. An agenda for open science in communication. Journal of Communication, 710 (1):0 1–26, 2020. ISSN 1460-2466. 10.1093/joc/jqz052. URL <http://dx.doi.org/10.1093/joc/jqz052>. [Song et al.(2022)Song, Markowitz, and Taylor]song:2022:T Hyunjin Song, David M Markowitz, and Samuel Hardman Taylor. Trusting on the shoulders of open giants? open science increases trust in science for the public and academics. Journal of Communication, 720 (4):0 497–510, 2022. ISSN 1460-2466. 10.1093/joc/jqac017. URL <http://dx.doi.org/10.1093/joc/jqac017>. [Munafò et al.(2017)Munafò, Nosek, Bishop, Button, Chambers, Percie du Sert, Simonsohn, Wagenmakers, Ware, and Ioannidis]mnbbcpswwi-mrs-17 Marcus R. Munafò, Brian A. Nosek, Dorothy V. M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J. Ware, and John P. A. Ioannidis. A manifesto for reproducible science.
Nature Human Behaviour, 10 (1):0 1–9, 2017. ISSN 2397-3374. 10.1038/s41562-016-0021. [Vicente-Saez and Martinez-Fuentes(2018)]vm-osnslrid-18 Ruben Vicente-Saez and Clara Martinez-Fuentes. Open Science now: A systematic literature review for an integrated definition. Journal of Business Research, 88:0 428–436, 2018. ISSN 0148-2963. 10.1016/j.jbusres.2017.12.043. [Chen et al.(2023a)Chen, Peng, Kim, and Choi]chen:2023:WWC Yingying Chen, Zhao Peng, Sei-Hill Kim, and Chang Won Choi. What we can do and cannot do with topic modeling: A systematic review. Communication Methods and Measures, page 1–20, 2023a. ISSN 1931-2466. 10.1080/19312458.2023.2167965. URL <http://dx.doi.org/10.1080/19312458.2023.2167965>. [Lazer et al.(2009)Lazer, Pentland, Adamic, Aral, Barabási, Brewer, Christakis, Contractor, Fowler, Gutmann, Jebara, King, Macy, Roy, and Alstyne]lazer2009 David Lazer, Alex Pentland, Lada Adamic, Sinan Aral, Albert-László Barabási, Devon Brewer, Nicholas Christakis, Noshir Contractor, James Fowler, Myron Gutmann, Tony Jebara, Gary King, Michael Macy, Deb Roy, and Marshall Van Alstyne. Computational social science. Science, 3230 (5915):0 721–723, 2009. 10.1126/science.1167742. URL <https://www.science.org/doi/abs/10.1126/science.1167742>. [Hutson(2018)]hutson:2018:A Matthew Hutson. Artificial intelligence faces reproducibility crisis. Science, 3590 (6377):0 725–726, 2018. ISSN 1095-9203. 10.1126/science.359.6377.725. URL <http://dx.doi.org/10.1126/science.359.6377.725>. [Fokkens et al.(2013)Fokkens, van Erp, Postma, Pedersen, Vossen, and Freire]fokkens-etal-2013-offspring Antske Fokkens, Marieke van Erp, Marten Postma, Ted Pedersen, Piek Vossen, and Nuno Freire. Offspring from reproduction problems: What replication failure teaches us. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1691–1701, Sofia, Bulgaria, 2013. Association for Computational Linguistics. URL <https://aclanthology.org/P13-1166>. [Pedersen(2008)]pedersen-2008-last Ted Pedersen. Last words: Empiricism is not a matter of faith. Computational Linguistics, 340 (3):0 465–470, 2008. 10.1162/coli.2008.34.3.465. URL <https://aclanthology.org/J08-3010>. [Barba(2018)]barba2018terminologies Lorena A Barba. Terminologies for reproducible research. arXiv preprint arXiv:1802.03311, 2018. [The Turing Way Community(2022)]The_Turing_Way:2022 The Turing Way Community. The Turing Way: A handbook for reproducible, ethical and collaborative research, 2022. [Rougier et al.(2017)Rougier, Hinsen, Alexandre, Arildsen, Barba, Benureau, Brown, De Buyl, Caglayan, Davison, et al.]rougier2017sustainable Nicolas P Rougier, Konrad Hinsen, Frédéric Alexandre, Thomas Arildsen, Lorena A Barba, Fabien CY Benureau, C Titus Brown, Pierre De Buyl, Ozan Caglayan, Andrew P Davison, et al. Sustainable computational science: the rescience initiative. PeerJ Computer Science, 3:0 e142, 2017. [Claerbout and Karrenbach(1992)]claerbout1992electronic Jon F Claerbout and Martin Karrenbach. Electronic documents give reproducible research a new meaning. In SEG technical program expanded abstracts 1992, pages 601–604. Society of Exploration Geophysicists, 1992. [Benureau and Rougier(2018)]benureau:2018:RRR Fabien C. Y. Benureau and Nicolas P. Rougier. Re-run, repeat, reproduce, reuse, replicate: Transforming code into scientific contributions. Frontiers in Neuroinformatics, 11, 2018. ISSN 1662-5196. 10.3389/fninf.2017.00069. URL <http://dx.doi.org/10.3389/fninf.2017.00069>. 
[Niemeyer et al.(2016)Niemeyer, Smith, and Katz]niemeyer:2016:CPS Kyle E. Niemeyer, Arfon M. Smith, and Daniel S. Katz. The challenge and promise of software citation for credit, identification, discovery, and reuse. Journal of Data and Information Quality, 70 (4):0 1–5, 2016. ISSN 1936-1963. 10.1145/2968452. URL <http://dx.doi.org/10.1145/2968452>. [Smith et al.(2018)Smith, Niemeyer, Katz, Barba, Githinji, Gymrek, Huff, Madan, Cabunoc Mayes, Moerman, Prins, Ram, Rokem, Teal, Valls Guimera, and Vanderplas]smith:2018:JOS Arfon M. Smith, Kyle E. Niemeyer, Daniel S. Katz, Lorena A. Barba, George Githinji, Melissa Gymrek, Kathryn D. Huff, Christopher R. Madan, Abigail Cabunoc Mayes, Kevin M. Moerman, Pjotr Prins, Karthik Ram, Ariel Rokem, Tracy K. Teal, Roman Valls Guimera, and Jacob T. Vanderplas. Journal of Open Source Software (JOSS): design and first-year review. PeerJ Computer Science, 4:0 e147, 2018. ISSN 2376-5992. 10.7717/peerj-cs.147. URL <http://dx.doi.org/10.7717/peerj-cs.147>. [Buckheit and Donoho(1995)]buckheit1995wavelab Jonathan B Buckheit and David L Donoho. Wavelab and reproducible research. Springer, 1995. [Peng(2011)]peng:2011:RRC Roger D. Peng. Reproducible research in computational science. Science, 3340 (6060):0 1226–1227, 2011. ISSN 1095-9203. 10.1126/science.1213847. URL <http://dx.doi.org/10.1126/science.1213847>. [Stodden et al.(2018)Stodden, Seiler, and Ma]stodden:2018 Victoria Stodden, Jennifer Seiler, and Zhaokun Ma. An empirical analysis of journal policy effectiveness for computational reproducibility. Proceedings of the National Academy of Sciences, 1150 (11):0 2584–2589, 2018. ISSN 1091-6490. 10.1073/pnas.1708290115. URL <http://dx.doi.org/10.1073/pnas.1708290115>. [Krawczyk and Reuben(2012)]kr-arferwssm-12 Michal Krawczyk and Ernesto Reuben. (Un)Available upon Request: Field Experiment on Researchers' Willingness to Share Supplementary Materials. Accountability in Research, 190 (3):0 175–186, 2012. ISSN 0898-9621. 10.1080/08989621.2012.678688. [Vines et al.(2014)Vines, Albert, Andrew, Débarre, Bock, Franklin, Gilbert, Moore, Renaut, and Rennison]vaadbfgmrr-arddraa-14 Timothy H. Vines, Arianne Y. K. Albert, Rose L. Andrew, Florence Débarre, Dan G. Bock, Michelle T. Franklin, Kimberly J. Gilbert, Jean-Sébastien Moore, Sébastien Renaut, and Diana J. Rennison. The Availability of Research Data Declines Rapidly with Article Age. Current Biology, 240 (1):0 94–97, 2014. ISSN 0960-9822. 10.1016/j.cub.2013.11.014. [Tedersoo et al.(2021)Tedersoo, Küngas, Oras, Köster, Eenmaa, Leijen, Pedaste, Raju, Astapova, Lukner, Kogermann, and Sepp]tedersoo:2021:D Leho Tedersoo, Rainer Küngas, Ester Oras, Kajar Köster, Helen Eenmaa, Äli Leijen, Margus Pedaste, Marju Raju, Anastasiya Astapova, Heli Lukner, Karin Kogermann, and Tuul Sepp. Data sharing practices and data availability upon request differ across scientific disciplines. Scientific Data, 80 (1), 2021. ISSN 2052-4463. 10.1038/s41597-021-00981-0. URL <http://dx.doi.org/10.1038/s41597-021-00981-0>. [Vines et al.(2013)Vines, Andrew, Bock, Franklin, Gilbert, Kane, Moore, Moyers, Renaut, Rennison, Veen, and Yeaman]vabfgkmmrrvy-mdagiard-13 Timothy H. Vines, Rose L. Andrew, Dan G. Bock, Michelle T. Franklin, Kimberly J. Gilbert, Nolan C. Kane, Jean-Sébastien Moore, Brook T. Moyers, Sébastien Renaut, Diana J. Rennison, Thor Veen, and Sam Yeaman. Mandated data archiving greatly improves access to research data. The FASEB Journal, 270 (4):0 1304–1308, 2013. ISSN 0892-6638, 1530-6860. 10.1096/fj.12-218164. 
[Begley and Ioannidis(2015)]bi-rs-15 C. Glenn Begley and John P.A. Ioannidis. Reproducibility in Science. Circulation Research, 1160 (1):0 116–126, 2015. 10.1161/CIRCRESAHA.114.303819. [Federer(2022)]f-ladaapo-22 Lisa M. Federer. Long-term availability of data associated with articles in PLOS ONE. PLOS ONE, 170 (8):0 e0272845, 2022. ISSN 1932-6203. 10.1371/journal.pone.0272845. [de Montjoye et al.(2013)de Montjoye, Hidalgo, Verleysen, and Blondel]montjoye:2013:UC Yves-Alexandre de Montjoye, César A. Hidalgo, Michel Verleysen, and Vincent D. Blondel. Unique in the crowd: The privacy bounds of human mobility. Scientific Reports, 30 (1), 2013. ISSN 2045-2322. 10.1038/srep01376. URL <http://dx.doi.org/10.1038/srep01376>. [Fox et al.(2021)Fox, Pearce, Massanari, Riles, Szulc, Ranjit, Trevisan, Soriano, Vitak, Arora, Ahn, Alper, Gambino, Gonzalez, Lynch, Williamson, and L Gonzales]fox:2021:OSC Jesse Fox, Katy E Pearce, Adrienne L Massanari, Julius Matthew Riles, Łukasz Szulc, Yerina S Ranjit, Filippo Trevisan, Cheryll Ruth R Soriano, Jessica Vitak, Payal Arora, Sun Joo (Grace) Ahn, Meryl Alper, Andrew Gambino, Carmen Gonzalez, Teresa Lynch, Lillie D Williamson, and Amy L Gonzales. Open science, closed doors? countering marginalization through an agenda for ethical, inclusive research in communication. Journal of Communication, 2021. ISSN 1460-2466. 10.1093/joc/jqab029. URL <http://dx.doi.org/10.1093/joc/jqab029>. [Van Atteveldt et al.(2020)Van Atteveldt, Althaus, and Wessler]vanatteveldt:2020:TSY Wouter Van Atteveldt, Scott Althaus, and Hartmut Wessler. The trouble with sharing your privates: Pursuing ethical open science and collaborative research across national jurisdictions using sensitive data. Political Communication, 380 (1-2):0 192–198, 2020. ISSN 1091-7675. 10.1080/10584609.2020.1744780. URL <http://dx.doi.org/10.1080/10584609.2020.1744780>. [Crüwell et al.(2023)Crüwell, Apthorp, Baker, Colling, Elson, Geiger, Lobentanzer, Monéger, Patterson, Schwarzkopf, Zaneva, and Brown]cruewell:2023:WB Sophia Crüwell, Deborah Apthorp, Bradley J. Baker, Lincoln Colling, Malte Elson, Sandra J. Geiger, Sebastian Lobentanzer, Jean Monéger, Alex Patterson, D. Samuel Schwarzkopf, Mirela Zaneva, and Nicholas J. L. Brown. What’s in a badge? a computational reproducibility investigation of the open data badge policy in one issue of psychological science. Psychological Science, page 095679762211408, 2023. ISSN 1467-9280. 10.1177/09567976221140828. URL <http://dx.doi.org/10.1177/09567976221140828>. [Vilhuber(2023)]vilhuber2023reproducibility Lars Vilhuber. Reproducibility and transparency versus privacy and confidentiality: Reflections from a data editor. Journal of Econometrics, 2023. [Lawrence(2022)]lawrence:2022:EN Regina G. Lawrence. Editor’s note. Political Communication, 400 (1):0 1–3, 2022. ISSN 1091-7675. 10.1080/10584609.2022.2155758. URL <http://dx.doi.org/10.1080/10584609.2022.2155758>. [Cadwallader and Hrynaszkiewicz(2022)]ch-srcscrpainp-22 Lauren Cadwallader and Iain Hrynaszkiewicz. A survey of researchers' code sharing and code reuse practices, and assessment of interactive notebook prototypes. PeerJ, 10:0 e13933, 2022. ISSN 2167-8359. 10.7717/peerj.13933. [Hardwicke et al.(2018)Hardwicke, Mathur, MacDonald, Nilsonne, Banks, Kidwell, Hofelich Mohr, Clayton, Yoon, Henry Tessler, Lenne, Altman, Long, and Frank]hardwicke:2018:D Tom E. Hardwicke, Maya B. Mathur, Kyle MacDonald, Gustav Nilsonne, George C. Banks, Mallory C. Kidwell, Alicia Hofelich Mohr, Elizabeth Clayton, Erica J. 
Yoon, Michael Henry Tessler, Richie L. Lenne, Sara Altman, Bria Long, and Michael C. Frank. Data availability, reusability, and analytic reproducibility: evaluating the impact of a mandatory open data policy at the journal cognition. Royal Society Open Science, 50 (8):0 180448, 2018. ISSN 2054-5703. 10.1098/rsos.180448. URL <http://dx.doi.org/10.1098/rsos.180448>. [Puschmann(2019)]puschmann:2019 Cornelius Puschmann. An end to the wild west of social media research: a response to Axel Bruns. Information, Communication & Society, 220 (11):0 1582–1589, 2019. ISSN 1468-4462. 10.1080/1369118x.2019.1646300. URL <http://dx.doi.org/10.1080/1369118X.2019.1646300>. [Bruns(2019)]bruns:2019:AA Axel Bruns. After the ‘APIcalypse’: social media platforms and their fight against critical scholarly research. Information, Communication & Society, 220 (11):0 1544–1566, 2019. ISSN 1468-4462. 10.1080/1369118x.2019.1637447. URL <http://dx.doi.org/10.1080/1369118X.2019.1637447>. [Grady(2019)]grady2019golden Don Grady. The golden age of data: media analytics in study & practice. Routledge, 2019. [Freelon(2018)]freelon:2018:CRP Deen Freelon. Computational research in the post-api age. Political Communication, 350 (4):0 665–668, 2018. ISSN 1091-7675. 10.1080/10584609.2018.1477506. URL <http://dx.doi.org/10.1080/10584609.2018.1477506>. [Tromble(2021)]t-whadgcradrpa-21 Rebekah Tromble. Where Have All the Data Gone? A Critical Reflection on Academic Digital Research in the Post-API Age. Social Media + Society, 70 (1):0 2056305121988929, 2021. ISSN 2056-3051. 10.1177/2056305121988929. [Morstatter et al.(2013)Morstatter, Pfeffer, Liu, and Carley]mplc-sgecdtsatf-13 Fred Morstatter, Jürgen Pfeffer, Huan Liu, and Kathleen Carley. Is the sample good enough? comparing data from twitter's streaming api with twitter's firehose. In Proceedings of the International AAAI Conference on Web and Social Media, volume 7, pages 400–408, 2013. [Davidson et al.(2023)Davidson, Wischerath, Racek, Parry, Godwin, Hinds, van der Linden, Roscoe, and Ayravainen]davidson2023 Brittany I Davidson, Darja Wischerath, Daniel Racek, Douglas A Parry, Emily Godwin, Joanne Hinds, Dirk van der Linden, Jonathan F Roscoe, and Laura E M Ayravainen. Social media apis: A quiet threat to the advancement of science, 2023. URL <psyarxiv.com/ps32z>. [Davis et al.(2016)Davis, Varol, Ferrara, Flammini, and Menczer]Clayton2016 Clayton Allen Davis, Onur Varol, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer. Botornot: A system to evaluate social bots. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW '16 Companion, page 273–274, Republic and Canton of Geneva, CHE, 2016. International World Wide Web Conferences Steering Committee. ISBN 9781450341448. 10.1145/2872518.2889302. URL <https://doi.org/10.1145/2872518.2889302>. [Yang et al.(2022)Yang, Ferrara, and Menczer]yang2022botometer Kai-Cheng Yang, Emilio Ferrara, and Filippo Menczer. Botometer 101: Social bot practicum for computational social scientists. Journal of Computational Social Science, pages 1–18, 2022. [Rauchfleisch and Kaiser(2020)]rauchfleisch:2020:F Adrian Rauchfleisch and Jonas Kaiser. The false positive problem of automatic bot detection in social science research. PLOS ONE, 150 (10):0 e0241045, 2020. ISSN 1932-6203. 10.1371/journal.pone.0241045. URL <http://dx.doi.org/10.1371/journal.pone.0241045>. 
[Chan et al.(2020)Chan, Zeng, Wessler, Jungblut, Welbers, Bajjalieh, van Atteveldt, and Althaus]chan:2020:REC Chung-hong Chan, Jing Zeng, Hartmut Wessler, Marc Jungblut, Kasper Welbers, Joseph W Bajjalieh, Wouter van Atteveldt, and Scott L. Althaus. Reproducible extraction of cross-lingual topics (rectr). Communication Methods and Measures, page 1–21, 2020. ISSN 1931-2466. 10.1080/19312458.2020.1812555. URL <http://dx.doi.org/10.1080/19312458.2020.1812555>. [Chen et al.(2023b)Chen, Zaharia, and Zou]chen2023chatgpt Lingjiao Chen, Matei Zaharia, and James Zou. How is chatgpt's behavior changing over time? arXiv preprint arXiv:2307.09009, 2023b. [Ohme et al.(2023)Ohme, Araujo, Boeschoten, Freelon, Ram, Reeves, and Robinson]ohme:2023:DTD Jakob Ohme, Theo Araujo, Laura Boeschoten, Deen Freelon, Nilam Ram, Byron B. Reeves, and Thomas N. Robinson. Digital trace data collection for social media effects research: APIs, data donation, and (screen) tracking. Communication Methods and Measures, page 1–18, 2023. ISSN 1931-2466. 10.1080/19312458.2023.2181319. URL <http://dx.doi.org/10.1080/19312458.2023.2181319>. [Altschul et al.(1990)Altschul, Gish, Miller, Myers, and Lipman]altschul1990basic Stephen F Altschul, Warren Gish, Webb Miller, Eugene W Myers, and David J Lipman. Basic local alignment search tool. Journal of molecular biology, 2150 (3):0 403–410, 1990. [Tiedemann and Thottingal(2020)]tiedemann2020opus Jörg Tiedemann and Santhosh Thottingal. Opus-mt–building open translation services for the world. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. European Association for Machine Translation, 2020. [Taori et al.(2023)Taori, Gulrajani, Zhang, Dubois, Li, Guestrin, Liang, and Hashimoto]alpaca Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. <https://github.com/tatsu-lab/stanford_alpaca>, 2023. [Fong and Tyler(2021)]fong_tyler_2021 Christian Fong and Matthew Tyler. Machine learning predictions as regression covariates. Political Analysis, 290 (4):0 467–484, 2021. 10.1017/pan.2020.38. [TeBlunthius et al.(2022)TeBlunthius, Hase, and Chan]TeBlunthius_2023 Nathan TeBlunthius, Valarie Hase, and Chung-hong Chan. How to stop ignoring classification errors. Text as Data Conference (TADA 2022), New York City, (Remote presentation) Oct 6th, 2022., 2022. [Metzler et al.(2016)Metzler, Kim, Allum, and Denman]metzler2016doing Katie Metzler, David A Kim, Nick Allum, and Angella Denman. Who is doing computational social science? trends in big data research. 2016. URL <https://repository.essex.ac.uk/17679/1/compsocsci.pdf>. [LeVeque et al.(2012)LeVeque, Mitchell, and Stodden]leveque2012reproducible Randall J LeVeque, Ian M Mitchell, and Victoria Stodden. Reproducible research for scientific computing: Tools and strategies for changing the culture. Computing in Science & Engineering, 140 (04):0 13–17, 2012. [Sutherland(2018)]s-csshais-18 Mary Elizabeth Sutherland. Computational social science heralds the age of interdisciplinary science. <https://socialsciences.nature.com/posts/54262-computational-social-science-heralds-the-age-of-interdisciplinary-science/>, 2018. [Online; accessed 05-May-2023]. [Hannay et al.(2009)Hannay, MacLeod, Singer, Langtangen, Pfahl, and Wilson]hannay:2009:H Jo Erskine Hannay, Carolyn MacLeod, Janice Singer, Hans Petter Langtangen, Dietmar Pfahl, and Greg Wilson. 
How do scientists develop and use scientific software? 2009 ICSE Workshop on Software Engineering for Computational Science and Engineering, 2009. 10.1109/secse.2009.5069155. URL <http://dx.doi.org/10.1109/SECSE.2009.5069155>. [Prabhu et al.(2011)Prabhu, Jablin, Raman, Zhang, Huang, Kim, Johnson, Liu, Ghosh, Beard, Oh, Zoufaly, Walker, and August]pjrzhkjlgbozwa-spcs-11 Prakash Prabhu, Thomas B. Jablin, Arun Raman, Yun Zhang, Jialu Huang, Hanjun Kim, Nick P. Johnson, Feng Liu, Soumyadeep Ghosh, Stephen Beard, Taewook Oh, Matthew Zoufaly, David Walker, and David I. August. A survey of the practice of computational science. In State of the Practice Reports, SC '11, pages 1–12, New York, NY, USA, 2011. Association for Computing Machinery. 10.1145/2063348.2063374. [Widder et al.(2019)Widder, Sunshine, and Fickas]wsf-brsp-19 David Gray Widder, Joshua Sunshine, and Stephen Fickas. Barriers to Reproducible Scientific Programming. In 2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pages 217–221, 2019. 10.1109/VLHCC.2019.8818907. [Pinto et al.(2018)Pinto, Wiese, and Dias]pinto:2018:H Gustavo Pinto, Igor Wiese, and Luiz Felipe Dias. How do scientists develop scientific software? an external replication. 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER), 2018. 10.1109/saner.2018.8330263. URL <http://dx.doi.org/10.1109/SANER.2018.8330263>. [Wilson et al.(2014)Wilson, Aruliah, Brown, Hong, Davis, Guy, Haddock, Huff, Mitchell, Plumbley, Waugh, White, and Wilson]wabhdghhmpwww-bpsc-14 Greg Wilson, D. A. Aruliah, C. Titus Brown, Neil P. Chue Hong, Matt Davis, Richard T. Guy, Steven H. D. Haddock, Kathryn D. Huff, Ian M. Mitchell, Mark D. Plumbley, Ben Waugh, Ethan P. White, and Paul Wilson. Best Practices for Scientific Computing. PLOS Biology, 120 (1):0 e1001745, 2014. ISSN 1545-7885. 10.1371/journal.pbio.1001745. [Chan and Schoch(2023)]chan:2023 Chung-hong Chan and David Schoch. rang: Reconstructing reproducible r computational environments. PLOS ONE, 180 (6):0 e0286761, 2023. ISSN 1932-6203. 10.1371/journal.pone.0286761. URL <http://dx.doi.org/10.1371/journal.pone.0286761>. [Grün and Hornik(2011)]topicmodels:2011 Bettina Grün and Kurt Hornik. topicmodels: An R package for fitting topic models. Journal of Statistical Software, 400 (13):0 1–30, 2011. 10.18637/jss.v040.i13. [Benoit et al.(2018)Benoit, Watanabe, Wang, Nulty, Obeng, Müller, and Matsuo]benoit:2018 Kenneth Benoit, Kohei Watanabe, Haiyan Wang, Paul Nulty, Adam Obeng, Stefan Müller, and Akitaka Matsuo. quanteda: An R package for the quantitative analysis of textual data. Journal of Open Source Software, 30 (30):0 774, 2018. ISSN 2475-9066. 10.21105/joss.00774. URL <http://dx.doi.org/10.21105/joss.00774>. [Sandve et al.(2013)Sandve, Nekrutenko, Taylor, and Hovig]sandve:2013:TSR Geir Kjetil Sandve, Anton Nekrutenko, James Taylor, and Eivind Hovig. Ten simple rules for reproducible computational research. PLoS Computational Biology, 90 (10):0 e1003285, 2013. ISSN 1553-7358. 10.1371/journal.pcbi.1003285. URL <http://dx.doi.org/10.1371/journal.pcbi.1003285>. [Trisovic et al.(2022)Trisovic, Lau, Pasquier, and Crosas]trisovic:2022 Ana Trisovic, Matthew K. Lau, Thomas Pasquier, and Mercè Crosas. A large-scale study on research code quality and execution. Scientific Data, 90 (1), 2022. ISSN 2052-4463. 10.1038/s41597-022-01143-6. URL <http://dx.doi.org/10.1038/s41597-022-01143-6>. [Wilson(2006)]wilson:2006 Greg Wilson. 
Software carpentry: Getting scientists to write better code by making them more productive. Computing in Science & Engineering, 2006. Summarizes the what and why of Version 3 of the course. [Knuth(1984)]knuth:1984 D. E. Knuth. Literate Programming. The Computer Journal, 270 (2):0 97–111, 1984. ISSN 0010-4620. 10.1093/comjnl/27.2.97. URL <https://doi.org/10.1093/comjnl/27.2.97>. [Schulte et al.(2012)Schulte, Davison, Dye, and Dominik]schulte:2012:MLC Eric Schulte, Dan Davison, Thomas Dye, and Carsten Dominik. A multi-language computing environment for literate programming and reproducible research. Journal of Statistical Software, 460 (3), 2012. ISSN 1548-7660. 10.18637/jss.v046.i03. URL <http://dx.doi.org/10.18637/jss.v046.i03>. [Weingart et al.(2020)Weingart, Burton, Lavin, and Otis]wblo-dtrnusp-20 Scott B. Weingart, Matt Burton, Matthew J. Lavin, and Jessica Otis. Digits: Two Reports on New Units of Scholarly Publication. The Journal of Electronic Publishing, 220 (1), 2020. ISSN 1080-2711. 10.3998/3336451.0022.105. [Kim et al.(2018)Kim, Poline, and Dumas]kpd-ercsrb-18 Yang-Min Kim, Jean-Baptiste Poline, and Guillaume Dumas. Experimenting with reproducibility: A case study of robustness in bioinformatics. GigaScience, 70 (7):0 giy077, 2018. ISSN 2047-217X. 10.1093/gigascience/giy077. [Nüst et al.(2020)Nüst, Sochat, Marwick, Eglen, Head, Hirst, and Evans]nuest:2020:TD Daniel Nüst, Vanessa Sochat, Ben Marwick, Stephen J. Eglen, Tim Head, Tony Hirst, and Benjamin D. Evans. Ten simple rules for writing dockerfiles for reproducible data science. PLOS Computational Biology, 160 (11):0 e1008316, 2020. ISSN 1553-7358. 10.1371/journal.pcbi.1008316. URL <http://dx.doi.org/10.1371/journal.pcbi.1008316>. [Nüst and Hinz(2019)]nuest:2019 Daniel Nüst and Matthias Hinz. containerit: Generating dockerfiles for reproducible research with r. Journal of Open Source Software, 40 (40):0 1603, 2019. ISSN 2475-9066. 10.21105/joss.01603. URL <http://dx.doi.org/10.21105/joss.01603>. [Ragan-Kelley et al.(2018)Ragan-Kelley, Willing, Akici, Lippa, Niederhut, and Pacer]ragan:2018 Benjamin Ragan-Kelley, Carol Willing, F Akici, D Lippa, D Niederhut, and M Pacer. Binder 2.0-reproducible, interactive, sharable environments for science at scale. In Proceedings of the 17th python in science conference, pages 113–120. F. Akici, D. Lippa, D. Niederhut, and M. Pacer, eds., 2018. [Arenas et al.(2019)Arenas, Atkins, Austin, Beavan, Egea, Carlysle-Davies, Carter, Clarke, Cunningham, Doel, et al.]arenas2019design Diego Arenas, Jon Atkins, Claire Austin, David Beavan, Alvaro Cabrejas Egea, Steven Carlysle-Davies, Ian Carter, Rob Clarke, James Cunningham, Tom Doel, et al. Design choices for productive, secure, data-intensive research at scale in the cloud. arXiv preprint arXiv:1908.08737, 2019. [Recker et al.(2015)Recker, Müller, Trixa, and Schumann]recker2015paving Astrid Recker, Stefan Müller, Jessica Trixa, and Natascha Schumann. Paving the way for data-centric, open science: An example from the social sciences. Journal of Librarianship and Scholarly Communication, 30 (2), 2015.
http://arxiv.org/abs/2307.00676v1
20230702220824
Pay Attention to the Atlas: Atlas-Guided Test-Time Adaptation Method for Robust 3D Medical Image Segmentation
[ "Jingjie Guo", "Weitong Zhang", "Matthew Sinclair", "Daniel Rueckert", "Chen Chen" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Jingjie Guo (Department of Computer Science, Technical University of Munich; [email protected]), Weitong Zhang (Department of Computing, Imperial College London), Matthew Sinclair (HeartFlow, USA; Department of Computing, Imperial College London), Daniel Rueckert (Department of Computer Science and Klinikum rechts der Isar, Technical University of Munich; Department of Computing, Imperial College London), Chen Chen (Department of Computing, Imperial College London; Department of Engineering Science, University of Oxford). This work was partly done at HeartFlow, Inc. Convolutional neural networks (CNNs) often suffer from poor performance when tested on target data that differs from the training (source) data distribution, particularly in medical imaging applications where variations in imaging protocols across different clinical sites and scanners lead to different imaging appearances. However, re-accessing source training data for unsupervised domain adaptation or labeling additional test data for model fine-tuning can be difficult due to privacy issues and high labeling costs, respectively. To solve this problem, we propose a novel atlas-guided test-time adaptation (TTA) method for robust 3D medical image segmentation, called AdaAtlas. AdaAtlas only takes one single unlabeled test sample as input and adapts the segmentation network by minimizing an atlas-based loss. Specifically, the network is adapted so that its prediction after registration is aligned with the learned atlas in the atlas space, which helps to reduce anatomical segmentation errors at test time. In addition, different from most existing TTA methods, which restrict the adaptation to batch normalization blocks in the segmentation network only, we further exploit the use of channel and spatial attention blocks for improved adaptability at test time. Extensive experiments on multiple datasets from different sites show that AdaAtlas with attention blocks adapted (AdaAtlas-Attention) achieves superior performance improvements, greatly outperforming other competitive TTA methods. § INTRODUCTION Convolutional neural networks have shown good results in medical image segmentation <cit.>. However, domain shifts between the training and test data often lead to significant performance degradation, which is a common problem for medical image segmentation since medical images can be acquired by devices from different companies using different scanning protocols in different medical centers. These differences can lead to distribution shifts of the data, causing the model to perform poorly on images from unseen target domains. Directly labeling target domain data for model fine-tuning is not feasible due to high labeling costs. To mitigate this issue, researchers have proposed various approaches such as unsupervised domain adaptation <cit.> and domain generalization <cit.>. However, most unsupervised domain adaptation methods require access to data from target domains at training time, which can be hard to obtain due to privacy issues. On the other hand, most domain generalization methods require multi-domain source data, which can be challenging since sharing medical images is often undesired. Also, the learned `generalized' solution can still be sub-optimal for a specific target domain with unique characteristics.
Different from them, test-time adaptation (TTA) methods aim to solve this problem by adapting and optimizing the trained model to each specific domain, with access to target domains only at test time. Since there is no label information available at test time, different surrogate losses have been proposed to guide the model adaptation <cit.>. Many of them focus on utilizing pixel-level information <cit.>, e.g. entropy loss <cit.>. Yet, for segmentation tasks, such information can be limited as it does not consider global shape information. Also, we argue that entropy loss-based TTA such as TENT can be unstable, as deep neural networks are known to be over-confident. Another common limitation of existing approaches is that they limit the adaptation to normalization blocks, i.e., batch normalization, in order to stabilize the adaptation process. The adaptability can, however, be restricted, as these blocks only allow for the rescaling and shifting of the features in the channel dimension. To solve the above limitations, we propose a novel TTA method called AdaAtlas, which uses an atlas, a high-level shape prior, for more reliable adaptation at test time. We further consider incorporating attention blocks for higher model adaptability at test time (AdaAtlas-Attention), which allows adapting features in both a channel-wise and a spatial-wise manner. Of note, while prior works have developed different attention blocks to improve model representation learning capacity, their effectiveness has mostly been evaluated on the source domain, whereas their capacity for test-time adaptation on unseen target domains remains under-explored. Our contributions can be listed as follows: a) We propose a novel atlas-based method for medical image segmentation, which considers 3D shape prior information for reliable test-time adaptation (Sec. <ref>); b) To the best of our knowledge, we are the first to consider employing attention blocks for improved adaptability at test time (Sec. <ref>). Our results show that adapting attention blocks rather than normalization blocks can lead to better performance, especially when large distribution shifts are present; c) We perform extensive experiments on multiple datasets from different sites, and show that our method can substantially increase the performance on target domains and outperforms other competitive TTA methods by a large margin (Sec. <ref>). Related work. a) TTA for image segmentation: TTA adapts a source-domain pre-trained model at test time to a specific target domain by minimizing certain loss objectives for improved performance without any ground truth provided. Many existing works employ unsupervised losses, such as entropy-based losses <cit.>, or self-supervised losses with pre-defined auxiliary tasks <cit.>. Yet, most of them operate at the pixel level, which can be too limited to guide high-level image segmentation models. To address the above limitation, methods that distill higher-order information from the source domain for better supervision have been proposed, including a class-ratio prior <cit.>, a shape dictionary <cit.>, a shape moment-based loss <cit.>, or pseudo anatomical labels generated by a denoising autoencoder (DAE) <cit.>. In particular, the DAE-based approach <cit.> needs to train a separate DAE to produce a `corrected' segmentation that provides supervision, and it trains and adapts an additional image normalization network to adjust the input test image so that the fixed segmentation network can produce an anatomically plausible segmentation on the adapted test image.
Similarly, our work utilizes a source-domain learned atlas as a high-level shape prior but computes the loss in the atlas space. Compared to DAE, we adapt the segmentation network, which does not alter the information in the test image. b) Atlas-based segmentation: Recent efforts have been made to boost segmentation performance by combining existing deep segmentation networks with an atlas registration network via joint training, and performing atlas-to-subject registration to get a warped atlas as the final prediction for a subject <cit.>. Yet, we found that these methods can easily fail when the target segmentation for a subject has many false positives or totally missed predictions. Thus, in this work, we focus on directly improving the quality of the predicted segmentation via test-time adaptation, where the atlas is fixed and treated as a reliable shape prior to guide the adaptation of the segmentation network at test time. § METHOD §.§ Atlas-guided test-time adaptation framework (AdaAtlas) As shown in Fig. <ref>, our proposed method first uses a 3D segmentation network S_θ, which takes an image x_i ∈ ℝ^{1×H×W×D} as input and predicts a class-wise probabilistic segmentation map p_i ∈ ℝ^{C×H×W×D}, with C being the number of predicted classes. A 3D atlas registration (atlas reg.) network R_ϕ then takes the predicted segmentation p_i and a learned 3D atlas A ∈ ℝ^{C×H×W×D} as input and predicts a deformation field Φ ∈ ℝ^{3×H×W×D} (affine + non-rigid transformation) that registers the predicted map to the atlas. Of note, the segmentation network and the atlas registration network are pre-trained with data from the source domain <cit.>. The 3D atlas A is constructed and refined iteratively during training. Specifically, the atlas is first initialized as an average of the training samples' labels. It is then updated by warping training labels to the atlas space using the registration network and averaging again across all samples using an exponential moving average at the end of each training epoch. The two networks are jointly optimized by performing segmentation, atlas-to-segmentation registration, and segmentation-to-atlas registration tasks simultaneously. By continuously feeding network predictions to the registration network across all training subjects to learn mappings between subjects and the probabilistic atlas, the registration network is enhanced to capture a wide range of anatomical variations and their correspondence to the atlas in the unified atlas space. As the main focus of this paper is test-time adaptation, we refer interested readers to <cit.> for more training details. At test time, given a test image x_i from an unseen target domain, our goal is to adapt the segmentation network S_θ so that the predicted segmentation is improved. Since there is no ground truth to finetune the model on the target domain, we utilize the atlas as a high-level shape prior to guide the adaptation. As the atlas represents a standardized view of the anatomical structure, we believe that it can be used as a domain-invariant concept to enable reliable adaptation at test time. Specifically, given the prediction p_i from the pre-trained segmentation network for the test image x_i and the learned atlas A, we feed both into the pre-trained atlas registration network R_ϕ (frozen at test time) to estimate the deformation Φ = R_ϕ(p_i, A) that transforms the prediction from the subject space to the atlas space: p_{i→a} = p_i ∘ Φ.
We then measure the difference between the registered prediction p_{i→a} and the reliable atlas A in the same space and use it as a loss to adapt the segmentation network. The loss L_atlas used to optimize S_θ is defined as: L_atlas(p_{i→a}, A) = 1 - (1/(H×W×D)) ∑_{h=1}^{H} ∑_{w=1}^{W} ∑_{d=1}^{D} cos(p_{i→a}^{h,w,d}, A^{h,w,d}). Here cos(·, ·) is the commonly used cosine function for similarity measurement. §.§ Where to adapt: adaptation blocks in the segmentation network Instead of optimizing all the parameters in S_θ to minimize L_atlas, it is more common to adapt only part of the segmentation network for improved efficiency and effectiveness <cit.>. A commonly adopted method is adapting the scaling and shifting parameters in the batch normalization blocks <cit.>. Yet, this type of adaptation only allows channel-wise feature calibration. Inspired by the success of attention-based works <cit.>, we design a dual attention block that can calibrate the features in both a channel-wise and a spatial-wise fashion, with the goal of improving the adaptation power. These attention blocks can be inserted into any CNN architecture. As shown in Fig. <ref>, given a feature map F, a dual attention block first computes a channel-wise attention score a_ch ∈ ℝ^{c}, which recalibrates F to F^c via channel-wise multiplication. The intermediate output F^c is then sent to the spatial attention module to compute a spatial-wise attention score a_sp ∈ ℝ^{1×h×w×d}, which recalibrates the features via spatial-wise multiplication. In practice, we found that sequential dual attention works better than the existing concurrent dual attention in <cit.>. At a high level, it is defined as: F' = a_sp ⋆ (a_ch × F). Here × represents channel-wise multiplication and ⋆ represents spatial-wise multiplication. To compute a_ch for calibrating F, we first flatten the feature via average pooling and then employ two fully connected layers (FC1, FC2) and a sigmoid function to get a channel-wise score in [0,1]. To compute a_sp, we employ a 3D 1×1×1 convolutional layer (Conv) followed by a sigmoid function to process the intermediate channel-wise calibrated feature F^c into the final adapted feature F'. Different from prior work, which fixes the kernels of attention blocks at test time, we revolutionize their use for TTA. At test time, the parameters in the dual attention blocks (incl. the weights W_1 ∈ ℝ^{c/r×c}, W_2 ∈ ℝ^{c×c/r}, W_conv ∈ ℝ^{1×1×1×c} and biases b_1 ∈ ℝ^{c/r}, b_2 ∈ ℝ^{c}, b_conv ∈ ℝ in the two FC layers and the Conv layer, respectively) are updated in the direction of minimizing L_atlas.
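For readers who prefer code, the dual attention block described above can be sketched in PyTorch roughly as follows. This is an illustrative reconstruction from the description in the text, not the authors' released implementation: the reduction ratio r, the nonlinearity between the two fully connected layers, and the tiny stand-in network used at the end (only there to make the snippet self-contained) are assumptions.

import torch
import torch.nn as nn

class DualAttention3D(nn.Module):
    # Sequential channel-then-spatial attention for a 3D feature map F
    # of shape (batch, c, h, w, d), following the description in the text.
    def __init__(self, c, r=2):
        super().__init__()
        self.fc1 = nn.Linear(c, c // r)              # W_1, b_1
        self.fc2 = nn.Linear(c // r, c)              # W_2, b_2
        self.conv = nn.Conv3d(c, 1, kernel_size=1)   # W_conv, b_conv

    def forward(self, f):
        b, c = f.shape[:2]
        # Channel attention: average pooling -> FC1 -> FC2 -> sigmoid
        a_ch = torch.sigmoid(self.fc2(torch.relu(self.fc1(f.mean(dim=(2, 3, 4))))))
        f_c = f * a_ch.view(b, c, 1, 1, 1)           # channel-wise recalibration
        # Spatial attention: 1x1x1 convolution -> sigmoid
        a_sp = torch.sigmoid(self.conv(f_c))         # shape (b, 1, h, w, d)
        return f_c * a_sp                            # spatial-wise recalibration

# Stand-in for the segmentation network with the blocks inserted.
segmentation_net = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), DualAttention3D(16))

# At test time only the attention parameters are updated to minimize L_atlas.
attention_params = [p for m in segmentation_net.modules()
                    if isinstance(m, DualAttention3D) for p in m.parameters()]
optimizer = torch.optim.Adam(attention_params, lr=1e-3)

Restricting the optimizer to these parameters, rather than to the batch normalization parameters, is what distinguishes AdaAtlas-Attention from AdaAtlas-Norm in the experiments below.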
§ EXPERIMENTS AND RESULTS Data: prostate multi-site segmentation datasets. To verify the effectiveness of our algorithm, we evaluate our method on the prostate segmentation task from T2-weighted MRI. Specifically, we use multi-site prostate segmentation datasets collated from four public datasets: MSD (http://medicaldecathlon.com/) <cit.>, NCI-ISBI13 (https://wiki.cancerimagingarchive.net/display/Public/NCI-ISBI+2013+Challenge+-+Automated+Segmentation+of+Prostate+Structures) <cit.>, I2CVB (https://i2cvb.github.io/) <cit.> and PROMISE12 (https://promise12.grand-challenge.org/) <cit.>. They consist of 148 images from seven different clinical sites. All subjects are uniformly resized to 64 × 64 × 64 and normalized to have zero mean and unit variance in intensity values. We use the single-source MSD dataset (G) as the source domain (22 subjects for training), while the remaining images from the other six unseen clinical sites (A-F: 30/30/19/13/12/12 images) are used for testing, see Fig. <ref>. Implementation and baselines. We conducted experiments using U-Net <cit.> and its variant with dual attention blocks additionally inserted after each convolution block in the segmentation network except for the last layer, as suggested by <cit.>. We used the framework described in <cit.> to train the segmentation and the atlas registration network jointly and to construct the atlas on the source domain. The training loss consists of a standard supervised segmentation loss and a bidirectional registration loss L_bireg, as well as a regularization term to encourage the smoothness of the predicted deformation field. The registration loss computes the dissimilarity between the atlas and the predicted segmentation in both the subject space and the atlas space <cit.>: L_bireg = L_reg(p_i, A_{a→i}) + L_reg(A, p_{i→a}). We trained the networks for 800 epochs with a batch size of 8. The Adam optimizer was used with an initial learning rate of 0.001 followed by an exponential decay with a half-life of 400 epochs. At test time, we used the pre-trained model as initialization and employed the Adam optimizer (learning rate = 0.001) and L_atlas to update the adaptation blocks (norm/dual-attention) for 50 iterations for each test subject, which took about 20 seconds. Of note, we use `Baseline' to denote the source-domain trained models without any adaptation. For fairness and ease of comparison, if not specified otherwise, we used the same pre-trained segmentation model (U-Net or U-Net with dual attention blocks) described above as model initialization when comparing different TTA methods. Specifically, we compared our method to popular post-hoc TTA methods: TENT <cit.>, which uses an entropy loss to optimize the norm blocks, and three TENT-enhanced variants with modified losses for stronger supervision (EATA using a confidence threshold-based entropy loss <cit.>, AdaMI with a class-ratio prior <cit.>, and TTAS with a shape-moment-based loss <cit.>). We further compared our method to three TTA methods that require additional training modifications: a) TTT <cit.>, which requires attaching a self-supervised image reconstruction decoder to the segmentation network at training and test time to construct a self-supervision signal for updating the shared encoder in the segmentation network; b) DAE <cit.>, which requires training an additional denoising autoencoder to generate `corrected' pseudo labels, and training and adapting an additional image normalization network to adjust the test image instead of the segmentation network at test time; c) TTR <cit.>, which shares the same source-domain training setting as ours but refines the registration network at test time (and not the segmentation network) to get an optimized deformed atlas as the final prediction. All methods were implemented in PyTorch with their recommended TTA setups. Results. Quantitative and qualitative results of prostate segmentation using U-Net or U-Net with dual attention blocks across the six unseen test domains are shown in Table <ref> and Fig. <ref>. We can see that models adapted to target domains using different TTA methods consistently outperform the Baseline model without TTA. Among all competitive methods, ours performs the best (∼20% improvement) in both scenarios, indicating the superiority of using the high-level atlas-based shape prior to guide TTA, which effectively cleans noisy predictions that are counter-intuitive in terms of anatomical structure, see Fig. <ref>. Interestingly, we found that TTR can completely fail in some cases, see the top row in Fig. <ref>.
This can be because TTR depends heavily on the quality of the predicted segmentation from S_θ for the refinement of the registration network, which can mislead the optimization process when the reference prediction is poor. In contrast, we choose to use the reliable, domain-invariant atlas as supervision to refine the segmentation network instead, which proves to be more robust. When comparing our AdaAtlas variants, AdaAtlas adapting the dual attention blocks (AdaAtlas-Attention) achieves larger improvements at test time than adapting the normalization blocks (AdaAtlas-Norm), especially on the challenging domain C. This verifies the benefits of the increased flexibility of dual attention (channel + spatial) at test time. Although incorporating dual attention into U-Net already brings a small target-domain performance increase before adaptation (mean Dice score: 0.6610→0.6770), adapting these blocks using our TTA method brings much larger performance gains (0.6770→0.8205). Besides prostate segmentation, we also conducted experiments on the atrial segmentation task on public multi-site MRI datasets and obtained similar findings, where our method works the best. Please see the supplementary material for more details. Ablation study. We further performed two ablation studies to systematically analyze the contribution of two key components in our best-performing method (AdaAtlas-Attention): by replacing L_atlas with other TTA losses (incl. the entropy loss L_ent used in <cit.>, the shape-moment-based loss L_shape used in the most competitive method TTAS <cit.>, and the shape correction loss L_DAE used in DAE <cit.>), or by restricting the adaptation blocks to be normalization only, channel attention only, or spatial attention only; see Table <ref>. Among all the variants, ours works the best. The success comes from the more detailed, reliable shape knowledge encoded in the atlas compared to the vague, abstract shape statistics (moments) in L_shape <cit.>. While DAE-based shape correction <cit.> provides an estimated surrogate segmentation label for guidance, unlike ours, its supervision signal is conditioned on the predicted segmentation and can be very unstable and unreliable when the initial prediction is very poor and uninformative (e.g., a totally missed segmentation) during optimization. § DISCUSSION AND CONCLUSION We propose a novel, effective atlas-based TTA method (AdaAtlas) to distill high-level shape knowledge to better guide the adaptation process in out-of-domain scenarios, which is particularly useful for anatomical structure segmentation. We also investigate and compare different adaptation blocks (i.e., normalization and more flexible dual attention blocks) for effective adaptation and find that adding dual attention blocks for adaptation (AdaAtlas-Attention) works the best. An interesting finding is that when using the less reliable entropy-based loss L_ent to adapt attention blocks, the model performance drops significantly (0.8205→0.6835, Table <ref>), which is even lower than using L_ent to adapt normalization blocks (0.6933 (TENT), Table <ref>). Our study uncovers the importance of building flexibility into the design of a network for TTA and of the reliability of the guiding signal for robust TTA. One limitation of AdaAtlas is that it needs to train an additional atlas registration network at training time. This type of training modification requirement can also be found in other powerful TTA methods <cit.>.
This training dependency can potentially be removed by utilizing public atlases and existing powerful registration toolkits, making AdaAtlas a fully post-hoc method that can adapt any existing segmentation network. It would also be interesting to explore bi-level optimization of the segmentation and registration networks, as well as dynamic learning rates <cit.>, and to extend the method to a multi-atlas solution for further improvements.
http://arxiv.org/abs/2307.02860v4
20230706085228
Scaling Package Queries to a Billion Tuples via Hierarchical Partitioning and Customized Optimization
[ "Anh L. Mai", "Pengyu Wang", "Azza Abouzied", "Matteo Brucato", "Peter J. Haas", "Alexandra Meliou" ]
cs.DB
[ "cs.DB" ]
A package query returns a package—a multiset of tuples—that maximizes or minimizes a linear objective function subject to linear constraints, thereby enabling in-database decision support. Prior work has established the equivalence of package queries to Integer Linear Programs (ILPs) and developed the algorithm for package query processing. While this algorithm was an important first step toward supporting prescriptive analytics scalably inside a relational database, it struggles when the data size grows beyond a few hundred million tuples or when the constraints become very tight. In this paper, we present , a novel algorithm for processing package queries that can scale efficiently to billions of tuples and gracefully handle tight constraints. solves a sequence of optimization problems over a hierarchy of relations, each resulting from an ever-finer partitioning of the original tuples into homogeneous groups until the original relation is obtained. This strategy avoids the premature discarding of high-quality tuples that can occur with . Our novel partitioning scheme, , can handle very large relations with multiple attributes and can dynamically adapt to both concentrated and spread-out sets of attribute values, provably outperforming traditional partitioning schemes such as . We further optimize our system by replacing our off-the-shelf optimization software with customized ILP and LP solvers, called and respectively, that are highly accurate and orders of magnitude faster. § INTRODUCTION Package queries <cit.> extend traditional relational database queries to handle constraints that are defined over a multiset of tuples called a “package.” A package has to satisfy two types of constraints: * Local predicates: traditional selection predicates, i.e., constraints that each tuple in the package has to satisfy individually. * Global predicates: constraints that all the tuples within the package have to satisfy collectively. There can be many such feasible packages. A package query selects a feasible package that maximizes or minimizes a linear objective. For example, consider the following package query: An astrophysicist needs to find a certain number of rectangular regions of the night sky that may contain unseen quasars. These regions should have average brightness above a certain threshold and their overall red shift should lie between specified values. Among those regions, the one with the maximum combined likelihood of containing a quasar is preferred <cit.>.
This package query can be expressed declaratively using , an SQL-based query language <cit.>:

SELECT PACKAGE(*) AS P
FROM Regions R REPEAT 0
WHERE R.explored = 'false'
SUCH THAT COUNT(P.*) = 10
          AVG(P.brightness) ≥ θ
          SUM(P.redshift) BETWEEN γ_1 AND γ_2
MAXIMIZE SUM(P.quasar)

In this example, the number of rectangular regions of the night sky can become very large if the surveying resolution is high and/or the surveying volume of the night sky is large. For such a query, the relation size typically ranges from millions to billions of regions while the number of constraints is constant. Every package query corresponds to an Integer Linear Program (ILP) <cit.>. For a relation containing n tuples, there are n decision variables, with the ith decision variable x_i representing the multiplicity (possibly 0) of the ith tuple in the package. Thus, black-box ILP solvers like <cit.> or <cit.> can be used to compute the optimal package for any package query. When n grows beyond several million, however, the foregoing solvers typically do not scale because they employ ILP techniques that have O(exp(n)) worst-case running time. The approximate package-query processing algorithm introduced in <cit.> works well up to tens of millions of decision variables but, beyond this scale, its performance deteriorates in both running time and optimality ratio as shown by our experimental results in Section <ref>.
Overview of and its limitations. partitions relations to scalably approximate package queries. Each partition contains similar tuples that are averaged to construct a representative tuple. In , a “sketch” is a package query solution over the representative tuples only. Representative tuples included in the sketch indicate that their groups may have tuples that can be part of the optimal package. The sketch is “refined” by searching through these groups, iteratively replacing each representative tuple with the group's tuples, and re-solving the package-query ILP until a feasible package is constructed from the actual tuples. uses partitioning <cit.> with a fixed number of groups, regardless of the relation size. The number of groups is usually small (e.g., up to 1000 groups for a relation size of tens of millions), resulting in a large number of tuples in each group. While this approach allows aggressive pruning of the relation in the sketch phase, it has three drawbacks. First, the representative tuples may not accurately represent their groups, especially if the underlying distribution of tuples has a high variance. This can lead to false infeasibility, where no solution is found during sketching even though a feasible package does exist. Our experimental results in Section <ref> show that the prevalence of false infeasibility increases significantly as the query constraints become tighter, i.e., as the feasible region shrinks. Second, the aggressive pruning of entire groups might eliminate potential tuples at the periphery of these groups from consideration, i.e., groups that were not selected in the sketch can contain outlying tuples that can improve the overall objective value. Without these tuples, can produce packages with suboptimal objective values. Third, when the relation size increases, the size of the refine queries, which essentially equals the group size, increases. It is challenging at best, and often impossible in practice, to decide exactly how fine the partitions should be.
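Returning to the ILP correspondence noted above, the following minimal sketch (illustrative only, not code from the paper) builds and solves the ILP for the example query with one binary decision variable per region. It assumes Gurobi's C++ API as one possible black-box solver; the attribute arrays brightness, redshift, and quasar, and the function name solvePackageILP, are hypothetical.

#include "gurobi_c++.h"
#include <vector>

// ILP for the example query: pick exactly 10 regions, average brightness
// at least theta, total redshift within [gamma1, gamma2], and maximize the
// total quasar likelihood.  One binary variable x_i per tuple.
std::vector<int> solvePackageILP(const std::vector<double>& brightness,
                                 const std::vector<double>& redshift,
                                 const std::vector<double>& quasar,
                                 double theta, double gamma1, double gamma2) {
    const size_t n = brightness.size();
    GRBEnv env;
    GRBModel model(env);

    std::vector<GRBVar> x(n);
    GRBLinExpr count = 0, bright = 0, shift = 0, obj = 0;
    for (size_t i = 0; i < n; ++i) {
        x[i] = model.addVar(0.0, 1.0, 0.0, GRB_BINARY);
        count  += x[i];
        bright += brightness[i] * x[i];
        shift  += redshift[i]   * x[i];
        obj    += quasar[i]     * x[i];
    }
    model.addConstr(count == 10);           // COUNT(P.*) = 10
    model.addConstr(bright >= theta * 10);  // AVG(P.brightness) >= theta, given COUNT = 10
    model.addConstr(shift >= gamma1);       // SUM(P.redshift) BETWEEN gamma1 ...
    model.addConstr(shift <= gamma2);       // ... AND gamma2
    model.setObjective(obj, GRB_MAXIMIZE);
    model.optimize();

    std::vector<int> package;
    for (size_t i = 0; i < n; ++i)
        if (x[i].get(GRB_DoubleAttr_X) > 0.5) package.push_back(static_cast<int>(i));
    return package;
}

With n in the billions, this direct formulation is exactly what fails to fit in memory or solve in reasonable time, which motivates the hierarchical approach developed in this paper.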
Creating too many groups will result in high computational costs for the partitioning algorithm and for solving the sketch query. Creating too few groups will degrade the accuracy of the sketch query and possibly cause false-infeasibility problems, as well as rendering the solution of each refine query hugely expensive. thus fails to scale to relations on the order of 100M tuples or more. Our new approach. In this work, we introduce , a novel approach for approximately solving package queries over extremely large relations which overcomes the above limitations. Our method relies on a hierarchy of relations comprisingL+1layers of increasingly aggregated representative tuples. Layer0consists of the original tuples from the relation; each layerl>0comprises representative tuples (representing groups) obtained after partitioning the tuples in layerl-1. Each layer comprises a large number of groups—the size of each group is small so that each representative tuple in layerlaccurately summarizes the attribute values of its corresponding tuples in layerl-1. The search for optimal packages starts in layerLby solving a linear program (LP) over all of its representative tuples under the original constraints and objective but with the integrality requirement on the decision variables removed. The chosen tuples found in the LP are augmented with additional nearby "promising" representative tuples to help prevent premature discarding of potentially valuable tuples in layerL-1. Then all of the chosen tuples in layerLare expanded into their corresponding groups in layerL-1. This so-called procedure of augmenting and expanding representative tuples from the current LP solution is executed at each successive level of the hierarchy until we reach layer0, at which point we solve a final ILP to produce the solution package (Section <ref>). Intuitively, the differences between and the iterative procedure of can be described via an analogy to resolution-mapping techniques in fields like geographic or demographic analysis <cit.>. Figure <ref> shows a hierarchy of resolution from low to high of a terrain-height map that is analogous to the hierarchy of relations in . An efficient approach like would start from the lowest resolution and iterate toward the highest resolution. In each iteration, it starts with the 16 highest squares in the current resolution and expands those squares into the next resolution, and, among those, selects the 16 highest squares. This approach diversifies the final result in the highest resolution and thus, captures the irregularities of the terrain. On the other hand, is analogous to simply looking at the highest square (the blue square) in the lowest resolution and analyzing its height in the highest resolution. The former resolution-mapping approach only works when each resolution is not downscaled too drastically to the next lower resolution—otherwise, the search within each higher-resolution square takes too long. For example, going from 64x64 to 4x4 resolution has a downscale factor of 256 since 5376 pixels are partitioned into 16 squares. On the other hand, going from 64x64 to 32x32 resolution has a downscale factor of only 4. Therefore, analogously, the hierarchy of relations in requires a partitioning algorithm that: * Efficiently produces a large number of partitions/groups, e.g., typically about 0.1%-10% of the number of tuples (so a downscale factor between 1000 and 10). 
A typical partition used in <cit.> has a downscale factor of n/g where n is the relation size and g is the fixed number of groups (at most 1000), which explains why is not particularly suitable for when n is large; and * Supports fast group-membership determination for arbitrary tuple values (not necessarily appearing in the relation), as needed for efficient execution of . We, therefore, provide a novel partitioning algorithm, (), that satisfies these requirements. The advantages of over standard partitioning algorithms such ask-means <cit.>, hierarchical clustering <cit.>, andk-dimensional quad-trees <cit.> are (1) its ability to run under limited memory, (2) its cache-friendliness, and (3) its high parallelizability. Importantly, is a dynamic scheme, which allows it to refine its partitions in response to outliers, i.e., to the shape of the distribution of the tuple's attributes: in our stylized example, a partition on the 64x64 resolution can isolate high peaks into their own groups to maintain low variance within groups. minimizes attribute variance to implicitly ensure that similar tuples are grouped together. At the end of , we end up with an in-memory ILP of a package query with tuples from the original relation. This ILP typically has at least hundreds of thousands of variables. Black-box ILP solvers would require a large amount of time to produce an optimal solution (especially when the underlying ILP is hard to solve) and hence, are unsuitable for . We, therefore, develop , a new heuristic algorithm that can solve a package query over millions of tuples in less than a second with close-to-optimal objective values. It achieves this by first solving an LP that essentially removes the integrality constraints of the ILP and then formulates a second LP with restricted conditions to help prune tuples whose corresponding decision variables likely will not appear in the ILP solution. It effectively shrinks the original ILP into a very small sub-ILP that can be efficiently handled by black-box ILP solvers. As with any heuristic ILP solver, false infeasibility can occur in when the pruning is too aggressive. We handle this issue by gradually reducing the degree of pruning until we end up solving the original ILP using a black-box ILP solver. As seen in Figure <ref>, extensively uses an LP solver for its intermediate layers and an ILP solver for layer 0. To further boost performance, we replace black-box LP solvers with our highly accurate and much faster implementation by exploiting the special structure of the ILPs that arise when solving package queries (Section <ref>). Contributions. In summary, we significantly expand the applicability of package-query technology to handle very large problems with potentially tight constraints via the following contributions: * A novel hierarchical strategy, called , for finding high-quality package tuples that avoids the pitfalls of the approach (Section <ref>). * An effective and efficient partitioning scheme, , for creating the hierarchical data partitions needed by and handling outlying data well, together with an analytical comparison to that verifies 's superior behavior (Section <ref>). * A novel heuristic, , for a very fast approximate solution of the final ILP encountered in that uses a simple pruning strategy and a mechanism to guarantee solvability (Section <ref>), along with an optimized and highly parallelized LP solver, , for accurate solution of LPs encountered in (Section <ref>). 
* A thorough experimental study showing that, unlike , is scalable beyond hundreds of millions of tuples and, even for smaller relations, it can solve “hard” package queries for which suffers from false infeasibility. When both algorithms can produce feasible packages, is faster and the solution packages have better objective values (Section <ref>). Notably, as part of this study, we define a novel hardness metric and provide a way to generate queries of a specific hardness, thereby providing a means and a benchmark to systematically evaluate package-query solvers (Section <ref>). § The key challenge with directly solving the large ILPs that arise from package queries over large relations is that current solvers require that the corresponding LPs (where the integrality constraints are removed) fit into memory. Both and algorithms avoid this problem by partitioning the large relation into smaller groups that fit in memory and then formulating small ILPs based on the representative tuples corresponding to these groups, thereby obtaining an approximate solution to the original ILP. These two algorithms, however, use very different strategies to obtain these small ILPs. While “refines” the sketch solution by iteratively replacing each chosen representative tuple with the group's tuples, first augments the sketch solution with additional “promising” representative tuples and then replaces all the chosen representative tuples with their group's tuples at once. In doing so, tries to make each intermediate LP as large as possible by augmentation (via ); this improves the quality of the ILP solution of the original tuples relative to by not eliminating potentially high-quality tuples from consideration too early. More specifically, the algorithm tries to always solve an LP or ILP where the number of variables is close to, but does not exceed, an upper bound . We call the augdefaugmenting size; it is chosen so that an LP with variables fits in memory and can be solved relatively fast, e.g., within 1 second for interactive performance <cit.>. Hierarchy of Relations. relies on a hierarchy of relations ofL+1layers where layer0is the original relation and each layerl ≥1is the relation comprised ofr_lrepresentative tuples obtained after grouping then_l-1tuples in layerl-1. That is, layerl-1is partitioned intor_lgroups with= n_l-1/r_l. So the dfdefdownscale factor is, on average, the number of tuples represented by each group. Given , the depthLof the hierarchy is the smallest number of layers such that the final layerLhas a size at most . That is, for a relation havingntuples and a [Online readers can navigate to the definition of the terms by clicking on them], the final layerLhas size approximatelyn/()^L ≤, so that the minimal number of layers isL=⌈log_(n/)⌉. See Figure <ref> for an example. A group in layerl∈[0..L]is defined by intervals[a_j,b_j]where-∞≤a_j < b_j ≤∞for each attributejsuch that all groups are non-overlapping. A tupletbelongs to the group if and only ift.j ∈[a_j, b_j]for allj, wheret.jis the attributejof tuplet. Supporting fast group-membership queries is of particular importance for the performance of ; we will describe how supports such queries in sub-linear time complexity in Section <ref>. To compute the hierarchy of relations, we apply our partitioning algorithm, (Section <ref>), iteratively from layer0to layerL-1with a . overview. A high-level view of is presented in Algorithm <ref>. 
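As a small numerical illustration of the hierarchy just described (a sketch for exposition, not part of the paper's algorithm), the number of layers is L = ⌈log_df(n/α)⌉, so layer sizes shrink geometrically with the downscale factor; the values n = 10^9, α = 100,000, and df = 100 below are example settings.

#include <cmath>
#include <cstdio>
#include <vector>

// Layer 0 is the original relation; every higher layer shrinks by roughly df.
// The hierarchy stops at the first layer whose size is at most alpha.
std::vector<long long> layerSizes(long long n, long long alpha, long long df) {
    std::vector<long long> sizes{n};                    // layer 0
    while (sizes.back() > alpha)
        sizes.push_back((sizes.back() + df - 1) / df);  // ceiling division
    return sizes;
}

int main() {
    auto sizes = layerSizes(1'000'000'000LL, 100'000LL, 100LL);
    std::printf("L = %zu layers above layer 0\n", sizes.size() - 1);
    for (size_t l = 0; l < sizes.size(); ++l)
        std::printf("layer %zu: ~%lld tuples\n", l, sizes[l]);
    return 0;
}

For one billion tuples this gives a two-layer hierarchy above the base relation (10^9 → 10^7 → 10^5), matching L = ⌈log_100(10^9 / 10^5)⌉ = 2.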
Given the hierarchy of relations, along with the , processes a package query by starting with the setS_Lof all potential candidates—i.e., the set of all representative tuples—in layerL(line 1) and then iterates through the hierarchy down to layer0using (Algorithm <ref>) to return a setS_l-1of at most potential candidates from layerl-1given the set of potential candidatesS_lfrom layerl(line 4). At layer 0, produces the final solution package from the package queryQ[S_0]using (Section <ref>), our heuristic ILP solver specifically designed to be efficient when solving ILPs arising from package queries (line 7). We describe the various components of in the following subsections. §.§ starts by formulating a package queryQ[S_l]from the tuples inS_l, which leads to an ILP. The algorithm then formulates an LP by removing the integrality conditions of the ILP (line 1). It then solves the LP using (Section <ref>) — our efficient LP solver specifically designed to exploit the fact that package queries have a very low number of constraintsm(line 2). The LP solutionx^*serves only to seed the initial setS'_lof potential candidates, i.e.,S'_lcomprises tuples with positive coefficients inx^*(line 3). The final step is to augment and expand the representative tuples inS'_ltoS_l-1(line 4) via the algorithm (Section <ref>). A potential concern is that expanding the representative tuples inS'_lmight generate an excessive number of candidate tuples at levell-1; that is, the expected number|S'_l|of level-(l-1)tuples will exceed and removal of tuples, rather than augmentation up to size , will be required. This scenario is unlikely, though, because (1)  is typically small (Section <ref>), and (2) for package queries, the number|S'_l|of positive coefficients inx^*is typically small in that|S'_l| ≤⌈m+‖x^* ‖_1 ⌉≪wheremis the number of constraints and‖·‖_1is the L1 norm (see Section <ref> for a proof). Mini-Experiment. We observed that replacing the LP solution in line 2 with the ILP solution makes no difference in terms of the optimality or solvability of . As a result, the LP formulation is preferred in Algorithm <ref> since LPs can be solved much faster than ILPs. §.§ Given the solution tuplesS'_lat layerl, in line 4 of Algorithm <ref> selects tuplesS_l-1from layerl-1. First, it replaces each tuple inS'_lwith the tuples of the group it represents in layerl-1via (line 2 of Algorithm <ref>). It then augments this set with tuples from neighboring groups. Figure <ref> shows a typical representation of 2D groups, demarcated by horizontal and vertical lines. In general, a group can have more than two attributes, with[a_j,b_j]specifying the group boundaries along attributej. Each group is represented by its average tuple (the orange dot in Figure <ref>). Suppose the blue circle represents a "good" region of tuples that are likely to be found in the optimal package. G5's representative tuple (orange dot) lies within this good region and is selected in the candidate solution setS'_l. If we only select G5's tuples for the next iteration, we would miss out on tuples in G1 that lie within the good region. These are hidden outliers— tuples that are potentially in the final solution but are hidden as their groups' representative tuples are far from the “good” region. We want an algorithm that can add tuples from these neighboring groups which are identified by some measure of “closeness” to the selected group G5. 
Given the selected groupg(line 5), which is defined by a set of intervalsA_g={[a_j, b_j], j=1,...,k}(line 8), we construct a neighboring tupletthat is “just” right outside groupgby setting each attributet.jequal toa_j-ϵ,b_j+ϵ, or(a_j+b_j)/2whereϵis the smallest positive distance between any two tuples in layerlover some attribute (line 1). We letTbe the set of all such tuples (line 9); note that|T|=3^k, wherekis the number of attributes. We now find the groupg'((l,t), line 11), which contains the constructed tuplet, and add its representative tuple toS'_land all its constituent tuples toS_l-1(line 13,14). The efficiency of critically relies on the efficiency of(l,t). A naive implementation of(l,t)would be to linearly scan all the groups to find where tupletbelongs. We will show in Section <ref> that can achieve sub-linear time complexity for the function(l,t). This sampling of neighboring tuples continues as long as|S_l-1| < α(line 4). Mini-Experiment.Does replacing with a random sampling of representative tuples impact the overall performance of ? We ran the queries from described in Table <ref> with query-hardness levels ∈{1,3,5,7,9,11,13,15} (Section <ref>). For each , we randomly sampled 5 sub-relations of size 10 million representing 5 queries for a total of 40 queries. We compared the results between two variants: one with and one where is replaced by a random sampling of tuples. Of the 40 package queries, both variants solved the same 35 queries. On average, the solvable queries showed a 7.6x improvement in the objective value when using . §.§ A key ingredient in is the LP solver. Typical LP sizes range from hundreds of thousands to tens of millions of variables. The standard dual simplex algorithms in commercial systems such as or are sequential and make no assumptions on the number of variables versus the number of constraints. In this generic setting, prior works <cit.> have tried to efficiently parallelize dual simplex for up to 8 processing cores. Specifically, in <cit.>, the authors observed a 2.34x speedup at 8 cores, with 65% of the execution effectively parallelized. We introduce a novel algorithm, , that achieves superior speedup by exploiting the special structure of the ILPs that arise when solving package queries. In textbook ILPs (such as set cover <cit.>, unit commitment <cit.>, knapsack sharing <cit.>, and traveling salesman <cit.>), the numbermof constraints is a polynomial function of the numbernof variables. In contrast, a package query ILP has a constant number of constraintsmthat is much smaller thann. By exploiting this structural difference, we greatly simplify our dual simplex implementation and are also able to efficiently parallelize most of the dual simplex sub-procedures. Roughly speaking, in dual simplex, we quickly move from one solution to another better one by selecting a good direction via pivoting<cit.>. Moving between solutions involves multiplications of ann×mmatrix by anm-vector, which can be parallelized overn. Furthermore, the search for a good direction is a sequential operation but can be parallelized efficiently as we observed in our experiments assuming that we have a few constraintsmand a huge number of variablesn. The technical details can be found in Appendices B and C. Mini-Experiment. We found that our algorithm can scale up to at least 80 cores, attaining a 5.25x speedup, with 82% of the execution effectively parallelized—a significant improvement over generic parallel dual simplex implementations. 
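To see why the pivot computation parallelizes so cleanly in this setting, note that it is dominated by the product A_N^T p, where A_N has only m rows but a huge number n of columns. The sketch below is a simplification for illustration (it assumes a dense, column-major copy of the matrix, unlike a production implementation) and splits the column loop across threads with OpenMP.

#include <omp.h>
#include <vector>

// Compute alpha = A_N^T p for an m x n matrix A stored column-major.
// m (the number of package-query constraints) is tiny while n is huge,
// so the loop over columns parallelizes almost perfectly.
std::vector<double> pivotRow(const std::vector<double>& A,  // size m * n, column-major
                             const std::vector<double>& p,  // size m
                             long long n, int m) {
    std::vector<double> alpha(n, 0.0);
    #pragma omp parallel for schedule(static)
    for (long long j = 0; j < n; ++j) {
        double s = 0.0;
        for (int i = 0; i < m; ++i)
            s += A[j * m + i] * p[i];  // dot product of column j with p
        alpha[j] = s;
    }
    return alpha;
}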
§.§ is a novel heuristic (Algorithm <ref>) for efficiently and approximately solving the final ILP encountered in (line 7 of Algorithm <ref>). It is a type of Relaxation Enforced Neighborhood Search (RENS) heuristic <cit.>. RENS are characterized by constructing a sub-ILP (to be solved by a black-box ILP solver) where most of the zero decision variables in the LP relaxationx^*are hard-fixed to 0. Number of positive coefficients in the LP solution. initially computes the LP solutionx^*(line 1-2). Forx^*, the theory of the simplex method <cit.> asserts that the number of basic variables that can take fractional values is at most the number of constraintsm. Assuming that the upper bound of each variable is 1, the number of non-basic variables, which can either be 0 or 1, is thereforen-m, wherenis the number of variables. LettingE=∑_i=1^n x^*_i(line 3) be the sum of all decision variables ofx^*, i.e. the L1 norm ofx^*, we see that the number of variables that are 0 is at least⌊n-m-E ⌋. Note that most of the decision variables are 0 inx^*sincen ≫m+Eand only a few variables are positive, i.e., at most⌈m+E ⌉of them. We can now usex^*to construct a reduced-size sub-ILP from the positive variables. Let q be the size of this sub-ILP. Configuring q. Ifq ≈E, i.e. we pruned out all zero decision variables, we may end up with false infeasibilty (the sub-ILP is infeasible but the ILP itself is feasible). Ifqis too large, then we may incur unnecessary and significant computational costs. The right value ofqshould be small enough to allow the sub-ILP to be solved within interactive performance by an off-the-shelf black-box ILP solver, (i.e. sub-second time), yet large enough to comfortably contain the typical solution sizes for package queries. E.g., package queries in our benchmark (Section <ref>) typically have solutions with 10 to 1000 tuples (E ≈[10-1000]), and settingq=5000achieves the right balance of interactive performance and feasibility. Sub-ILP. Fromx^*andq, constructs an auxiliary LPP'such that its solution has approximatelyqpositive variables (lines 4-5). We observe that the sum of all decision variables ofx^*isEwhen the upper bound of each variable is 1. Hence, by limiting the upper bound of each variable toE/q, we hope to have at leastqpositive variables. This simple modification effectively forces the LP solver to distribute its choices evenly across the tuples to produceqpositive decision variables iny^*. now formulates and solves the sub-ILP using tuples with positive coefficients in both the initial and the auxiliary LP solutions,x^*andy^*(lines 6-8). Fallback mechanism. Unlike other RENS heuristics, has a graceful fallback mechanism to handle false infeasibility ifqis insufficiently large (line 9). doublesqand randomly samples more tuples to include in the sub-ILP from the original relation (lines 11-12) until it includes the full relation. In practice, we observed that many of the difficult queries could be solved after one or two fallback iterations, i.e., doubling or quadrupling the initial sub-ILP size, without falling all the way back to the original relation. Mini-Experiment.Does replacing the Auxilary LP with a random sampling of tuples from S to formulate a sub-ILP of size q impact the overall performance of ? We ran the queries from described in Table <ref> with query-hardness levels ∈{1,3,5,7,9,11,13,15} (Section <ref>). For each , we randomly sampled 10 sub-relations of size 1 million representing 10 queries for a total of 80 queries. 
We compared the results between two variants: one with the Auxiliary LP P' and one with a random sampling, i.e., replacing line 6 of Algorithm <ref> with S' {i | x_i^*>0 ∨ u_i<q/n} where u_i ∼𝒰(0,1). Out of 80 queries, with Auxiliary LP solved 78 while with random sampling solved 54. For queries solved by both variants, we observed an improvement in the objective value by 1.108x on average when using with Auxiliary LP. § PARTITIONING ALGORITHM () is a novel partitioning algorithm that works with multidimensional tuples (Section <ref>) and very large relations (Section <ref>). The algorithm relies on the () subroutine that iteratively selects and partitions a relation one attribute at a time. is unlike traditional partitioning algorithms such as : in one iteration, it partitions an attribute usingp≥2flexible intervals instead of just two intervals separated by the attribute's mean. Given a set S of tuples, and a vector d=(d_0,d_1,…,d_p) where p ≥ 1 and -∞ = d_0 < d_1 < … < d_p-1 < d_p = ∞, the p-partition_d(S, j) of the set S over an attribute j is the disjoint partition {P_1,P_2,…, P_p} of S such that P_i={t ∈ S:d_i-1≤ t.j < d_i} for 1≤ i≤ p, where t.j is the attribute j of tuple t. §.§ The core idea of is to minimize the variance of each subsetP_iin ap-partition by dynamically allocating moreP_i's to partition a spread-out set of attribute values and fewerP_i's to partition a concentrated one. The procedure is given as Algorithm <ref>. Given a specified valueβ> 0called the bounding variance, iterates through the attribute values in increasing order (line 1). The algorithm keeps track of a running variance of the values grouped so far (line 9). Once this variance exceedsβ(line 5), it places a delimiter between the current tuple and the previous one and resets the running variance (lines 6-7). Configuring d_f. Recall that a smaller in Section <ref> yields a smaller expected number of tuples represented by each group. A representative tuple more accurately represents its group's tuples if it has fewer, more concentrated tuples. We also augment the solution packageS'_lat every iteration with neighboring representative tuples (Section <ref>). If we have smaller, and hence more, neighboring groups, we can add more representative tuples toS'_lduring Neighbor Sampling up to the and thus better capture hidden outliers. However, the smaller the , the higher the computational cost of as the depth of the hierarchy of relations increases. We observed that≈[10-1000]achieves the right balance between accuracy and computation cost. Configuring β. In , given a , one wishes to find a bounding varianceβsuch that thep-partition produced by hasp ≈n/wherenis the relation size. However, with a single bounding varianceβcan fail to achieve certain small target values for , especially when the variance of the distribution is low; see Figure <ref>. This is an issue since requires to be very small (≈[10-1000]). overcomes this issue by using multiple bounding variances on multiple attributes and as a result extends to multidimensional settings. §.§ is displayed as Algorithm <ref>. It is a divisive hierarchical clustering algorithm <cit.> where all tuples start in one cluster (line 2) and splits are performed recursively until we reach≈|S|/clusters (line 3) where|S|is the number of tuples. The splitting always prioritizes the clusterP^*with the maximum highest total variance using a max priority queue (line 4) whereσ^2(P,j)is the variance of the attributejof tuples inP. 
For clusterP^*, we partition on the attributejhaving the highest variance (line 5). As discussed below, the bounding varianceβis set in such a way that thep-partition for the clusterP^*produced by has approximatelysubsets (lines 6-7). Intuitively, is analogous to iteratively partitioning the terrain squares in order of highest squares first in our stylized example (Figure <ref>) where each iteration corresponds to partitioning a square into=4smaller squares. Hence, the first heuristic is to come up with a bounding varianceβ(line 6) so that in each iteration, partitionsP^*into approximatelysubsets (line 7). Letσ^2be the variance of the partitioning attribute ofP^*. We observed that the appropriate form forβiscσ^2 / ^2for a constantc>0sinceP^*is partitioned into approximatelysubsets so the variance of each subset is expected to decrease by a factor of^2. Moreover, the value ofcdepends on the distribution ofP^*and we can accurately find suchc, by simply binary searchingβfor eachP^*assuming that the number of partitioning subsets ofP^*is a decreasing function ofβ. However, this approach requires us to do multiple executions of overP^*in each iteration and hence is slow in practice. To avoid this, we can approximatec_jfor each attributejbefore the iterations via the function (line 1) which essentially samples the attribute values and then does a binary search. The technical details of the function can be found in Appendix D. For our datasets, we foundc=13.5to work well. The second heuristic is to choose a ranking that best captures the variability of a multi-dimensional subsetP ∈(line 4). For each subset of tuples, one can compute the variance or the total variance (i.e., variance times set size) for each attribute and take the maximum over all the attributes. We observed empirically that using the total variance would produce much better solutions compared to using the variance. There are several advantages to using : * It partitions on multiple attributes and produces partitions for any given number of partitioning subsets. * The actual partitioning operation is usually executed on a partition much smaller than S. This allows sorting algorithms on these smaller subsets to be much faster and cache-friendly. * The average number of passes through the relation is 𝒪(log_ n/) where n is the relation size and is the . §.§ in Large Relations When designing algorithms for large relations, three important desiderata are: (1) ability to run on limited in-memory storage, (2) cache-friendliness, and (3) high degree of parallelization. In , it is required to produce ap-partition wherep ≈n/. Algorithm <ref> only satisfies requirement (2) and cannot run on limited in-memory storage since it needs to store at leastpsets of partitioning information in a max priority queue. We can extend to satisfy the three desiderata via a bucketing scheme <cit.>. Specifically, supposing thatrtuples can fit into memory, we choose an attribute with the highest variance and partition its range into a number of equally-spaced buckets such that each bucket has at mostrtuples. Then for each bucket, we can reliably execute Algorithm <ref> as usual. The extended algorithm is still cache-friendly since the majority of the work is via Algorithm <ref>. It can be run on limited in-memory storage by choosing an appropriate value ofr. Because bucketing is highly parallelizable, we only have to parallelize Algorithm <ref>. The simplest solution is to provide a locking mechanism for modifying the max-priority queue. 
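To make the bounding-variance pass concrete, here is a minimal, self-contained sketch of the single-attribute delimiting loop described earlier in this section (a simplification for illustration, not the paper's exact algorithm): values are scanned in sorted order with Welford-style running statistics, and a delimiter is emitted whenever adding the next value would push the group variance above β.

#include <algorithm>
#include <vector>

// One bounding-variance pass over a single attribute.  Returns delimiters
// d_1 < ... < d_{p-1}; group i collects the values in [d_{i-1}, d_i).
std::vector<double> boundedVariancePass(std::vector<double> vals, double beta) {
    std::sort(vals.begin(), vals.end());
    std::vector<double> delims;
    double mean = 0.0, m2 = 0.0;   // Welford running mean and sum of squared deviations
    long long cnt = 0;
    for (size_t i = 0; i < vals.size(); ++i) {
        // Tentatively add vals[i] to the current group.
        double d = vals[i] - mean;
        double newMean = mean + d / (cnt + 1);
        double newM2   = m2 + d * (vals[i] - newMean);
        double var     = newM2 / (cnt + 1);
        if (cnt > 0 && var > beta) {
            // The variance would exceed beta: delimit before vals[i] and reset.
            delims.push_back((vals[i - 1] + vals[i]) / 2.0);  // midpoint chosen as the cut
            mean = vals[i]; m2 = 0.0; cnt = 1;
        } else {
            mean = newMean; m2 = newM2; ++cnt;
        }
    }
    return delims;
}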
also requires fast group membership queries for arbitrary tuples. Specifically, each partitioning subset—i.e., each group in the relation—needs to store information about the ranges of each of its attributes to quickly determine if a tuple belongs within the group. Our implementation stores partitioning information such as the mean, the variance, and the range of each attribute in a table <cit.>. Ranges are represented using 's built-in range data type to enable the efficient execution of range queries, i.e., determining if a point is contained in an interval. also has a Generalized Search Tree (GiST) index <cit.> which further accelerates these range queries, achieving sub-linear running time. Our creates a multi-column GiST index on all the columns in order of highest variance column first. A membership query on the GiST index of a table of size 10 million runs in 0.5ms on average. This implementation choice allows our to be stored compactly without needing a full representation of a tree-like structure as with . §.§ Comparison to Partitioning score. Partitioning of a relation groups similar tuples together. A representative tuple can then be computed as an average over the similar tuples in the group. This similarity can be quantified by the tuples' distances to the representative. If we take the average of the squared distances between these tuples and the representative then this measure corresponds to the variance of the tuples' attributes. Therefore, the variance of the tuples' attributes reflects, on average, how spread out or clustered the group's tuple attribute values are and thus the similarity of tuples within the group. A good partitioning algorithm will create groups with more tightly clustered tuples, i.e., low within-group variance. Consequently, a useful measure of how well a partitioning algorithm performs will reflect the changes in within-group variance before and after partitioning. We define the Ratio Score as such a measure. For simplicity, we will restrict our analysis to partitions over one-dimensional tuples: For the p-partition _d(S) of the set S of one-dimensional tuples, let σ_i^2 be the variance of the tuple values in partition P_i (1≤ i≤ p) and σ^2>0 be the variance of tuple values in the unpartitioned set S. The ratio score z(_d(S)) is ∑_i=1^p σ_i^2 / σ^2. Intuitively, the ratio score is the ratio between the sum of the subsets' variance and the set's variance. Hence, the lower the score, the better the partitioning algorithm. The lowest score is 0 when all the subsets have a variance of 0. On the other hand, if all of the subsets are empty except oneP_i'thenσ_i^2=0fori ≠i'whileσ^2_i'=σ^2. Hence, the score is 1 for such a trivial partition. Ratio scores that exceed 1 are possible, as shown in Theorem <ref> below. versus . is also a divisive hierarchical clustering algorithm <cit.> where a cluster is always split into two smaller clusters using the mean value. In <cit.>, a clusterP_iis considered for splitting if it satisfies one of the two conditions: (1) its size|P_i|is more than size thresholdτ≥1; and (2) its radiusr_iis more than radius limitω≥0wherer_i=max_x ∈P_i |x-μ(P_i)|andμ(P_i)is the mean ofP_i. The following result shows that on some data sets the clustering performance of degrades totally while attains almost perfect clustering. For any radius limit ω>0, there exists a sequence {S_n} of sets of one-dimensional tuples whose variances converge to 0 such that for any size threshold τ≥ 2, 's ratio score tends to ∞ as n →∞. 
On the other hand, using with a bounding variance β=24σ^2(S_n)/|S_n|^2, the ratio score converges to 0. Let S_n be a set of n+2 tuples which has 1 value of -ω, 1 value of ω and n values of ω+ϵ where ϵ=3ω/n. Denote μ(S_n) and σ^2(S_n) be the mean and the variance of S_n respectively. Then μ(S_n)=n(ω+ϵ)/|S_n|=n+3/n+2ω>ω Hence μ(S_n) is between ω and ω+ϵ. Thus, first splits S_n into P_1 and P_2 using μ(S_n) where P_1 contains two tuples of values -ω and ω and P_2=S_n ∖ P_1. The radius of P_1 is ω and hence P_1 is not considered for splitting. Moreover, since P_2 contains tuples whose values are ω+ϵ, all the subsequent splits from P_2 have subsets of variance 0. Therefore, for the p-partition produced by , ∑_i=1^p σ^2(P_i)=σ^2(P_1)=ω^2. To compute the ratio score, we first compute the variance of S_n: σ^2(S_n) =𝔼[S_n^2]-μ(S_n)^2 =2ω^2+n(ω+ϵ)^2/|S_n|-n^2(ω+ϵ)^2/|S_n|^2 =2ω^2[1/n+2+(n+3)^2/n(n+2)^2] Observe that lim_n→∞σ^2(S_n) = 0. Hence, z(_d(S_n))=∑_i=1^p σ^2(P_i)/σ^2(S_n)=ω^2/σ^2(S_n) goes to infinity as n →∞. This proves the first assertion. To prove the second assertion, consider the p-partition _d(S_n) using with a bounding variance β=24σ^2(S_n)/|S_n|^2=48ω^2[1/(n+2)^3+(n+3)^2/n(n+2)^4] Since σ^2({-ω,ω})=ω^2>β for n ≥ 4, we have P_1 contains only one tuple of value -ω. In addition, since σ^2({ω,ω+ϵ})=1/4ϵ^2=9/4ω^2/n^2 > β for n ≥ 39, it follows that P_2 contains only one tuple of value ω. Hence, for , the first two values -ω and ω are isolated from each other so that all the variances of P_i are zero. Therefore, for n ≥ 39, the ratio score by is 0 and the result follows. We include the full proofs of our theoretical results in Appendix A. To prove Theorem <ref>, we construct a sequence{S_n}of sets of one-dimensional tuples as in Figure <ref>. We force to group two very dissimilar values by exploiting the fact that the splitting intervals of are fixed as long as the mean of the values does not change. , on the other hand, with an appropriate choice of bounding variance, overcomes this issue. Indeed, we now show that has a low ratio score for virtually any large relation. Specifically, for any set ofnone-dimensional tuples, , using the above bounding variance, achieves anO(1/n)ratio score. Moreover, the corresponding partitioning is nontrivial in that there exist partitions with at least two tuples, i.e.,p<n. [Universal bounded ratio score] Let S be a set of one-dimensional tuples of size n≥ 2 with variance σ^2 > 0. Then with a bounding variance β=24σ^2/n^2 will produce a p-partition _d(S) where p ≤ (3/4)n+1/2—so that the partitioning is nontrivial—and z(_d(S)) ≤ 24/n. Let S be a set of n one-dimensional tuples whose values are x_1 ≤ x_2 ≤ ... ≤ x_n. Consider n-1 intervals [x_i, x_i+1] where i=1,2,...,n-1. The length of each interval is simply | x_i-x_i+1|=x_i+1-x_i. Let s ∈ [0,n-1] be the number of intervals whose length is greater than 2√(β)=4√(6)σ/n. We call these critical intervals. We first claim that s is at most n/2. To prove our claim, we assume without loss of generality that s ≥ 2; if s<2 then we are done. Denote by μ the mean of the values in S and observe that each of the critical intervals either lies on the left of μ or the right of μ or contains μ. Let l,r be the number of critical intervals that lies completely on the left and the right of μ respectively. Observe that there are at least l endpoints such that distance from the ith endpoint to μ is ≥ 2i√(β) for i∈[1..l]. 
Let g(x)=(2x(x+1)(2x+1)β)/3 and A_L be the sum of squares of those distances so that A_L ≥∑_i=1^l (2i√(β))^2=4β∑_i=1^l i^2=g(l). We analogously define A_R where A_R ≥ g(r). Because the variance σ^2 of S is the mean of the squared distances between the x_i's and μ, we have σ^2 ≥1/n(A_R+A_L) ≥1/n(g(r)+g(l)) ≥2/n g(r+l/2) by Jensen's inequality since g(x) is a convex function when x ≥ 0. For the case l+r=s, i.e., μ does not lie in any of these critical intervals, then σ^2 ≥2/ng(s/2)=β/3ns(s+1)(s+2) > β/3ns^3. For the case where l+r=s-1, i.e., where μ lies in one of these critical intervals, let a_L,a_R be the distance from μ to the left and right endpoints of the critical interval that contains μ and observe that a_L+a_R > 2√(β). Also set h(x,y)=xy^2 and note that h is a convex function when x,y ≥ 0 and an increasing function in x and y respectively. Observe that the l critical intervals that lie completely on the left of μ are now essentially shifted to the left by a distance a_L compared to the first case. Hence A_L ≥∑_i=1^l (2i√(β)+a_L)^2 ≥∑_i=1^l (2i√(β))^2 + a_L^2 =g(l)+la_L^2 =g(l)+h(l,a_L). Similar reasoning shows that A_R ≥ g(r)+h(r,a_R). Putting these results together, we find that σ^2 ≥1/n(A_L+A_R) ≥1/n(g(l)+g(r)+h(l,a_L)+h(r,a_R)) ≥1/n(2g(l+r/2)+2h(l+r/2, a_L+a_R/2)) > 1/n(2g(s-1/2)+2h(s-1/2,√(β))) =1/n(β/3(s^3-s)+(s-1)β) =1/n(β/3s^3+2β/3s-β) >β/3ns^3 since s ≥ 2. Hence, in both cases, we have proved that σ^2 > β/3ns^3=8σ^2/n^3s^3, which implies that s^3 < n^3/8 and hence s < n/2 as claimed. Now consider the execution of DLV with bounding variable β. Denote by E_i the event in which the current running variance exceeds β and forces DLV to delimit between the current element x_i and the previous element x_i-1. Event E_1 happens as we start. We claim that after the partitioning is done, if P_i={x_j} for some j<n then [x_j,x_j+1] is a critical interval. To see this, observe that since x_j is not the last element (j<n) then both E_j and E_j+1 have happened. Once E_j happens, we know that P_i={x_j}. However, since E_j+1 has also happened, the variance of {x_j,x_j+1} must have been greater than β which implies that the distance between x_j+1 and x_j must have been greater than 2√(β), hence implying that [x_j,x_j+1] is a critical interval. Assume that we have more than s+1 subsets P_i such that |P_i|=1, then from x_1,...,x_n-1 (excluding that last element), we would have more than s subsets P_i such that |P_i|=1. This implies that we would have more than 2s endpoints (repetitions counted) that belong to some critical intervals. However, given only s critical intervals, one cannot have more than 2s such endpoints. Therefore, we can have at most s+1 subsets P_i such that |P_i|=1. Then at least n-s-1 points belong to some subset P_i where |P_i| ≥ 2. Hence the number p of subsets P_i satisfies p ≤ s + 1 + ⌊n-s-1/2⌋≤n+s+1/2 < n+n/2+1/2=3/4n+1/2. By construction, ensures that σ^2_i≤β for all i and thus we know that z(_d(S)) ≤pβ/σ^2≤(3/4n+1/2) 24 σ^2/n^2/σ^2≤ 24(3/4n+1/4n)=24/n and the desired result follows. At a high level, our proof proceeds by estimating the number of so-called critical intervals, which are intervals of consecutive values in increasing order such that the two endpoints of any such intervals cannot be in one partitioning subset as it would violate the above bounding varianceβ. This, in turn, allows us to upper-bound the number of partitioning subsets containing a single value, i.e., not all partitioning subsets will contain a single value and thus upper-boundpas well. 
As a result, the ratio score is bounded by24/nwherenis the number of tuples. in practice. Figure <ref> shows the ratio scorezof various algorithms using the samepartitioning on a normal distribution𝒩(0,1)with10^5samples. Mini-Experiment.How efficient is compared to when producing a large number of groups? We ran and the implementation as in <cit.> to partition the dataset described in Section <ref>. partitioned a relation of 10^8 tuples in 155s using 80 cores to produce approximately 10^6 groups while executed in 290s to produce approximately 10^3 groups. ( is not well-suited to produce as many groups as due to efficiency issues as well as the inability to directly control the number of groups produced). For a relation of 10^9 tuples, took 1650s to produce approximately 10^7 groups while ran out of memory. § EVALUATION In this section, we demonstrate experimentally that is very effective at overcoming the false infeasibility issues of the prior art (i.e., failing to derive a solution for feasible queries) while achieving superior scalability. We first describe the setup of our evaluation, including datasets, queries, and metrics, and then proceed to showcase our results. §.§ Experimental Setup Software and platform. We use v14.7 for our experiments to use built-in features such as range types to store 's partitioning information and GiST indexes over these ranges types, as described in Section <ref>. The main algorithms are implemented in C++17, which uses the libpq library as an efficient API to communicate with and the eigen library for efficient vector/matrix operations. For parallel implementation, we use C++ OpenMP for multi-processing computation <cit.>. For solving a sub-ILP in , we use v9.5.2 as our black-box ILP solver <cit.>. We run all experiments on a server with Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz with 377GB of RAM and 80 physical cores, running on Ubuntu 20.04.4 LTS. Our implementation is available at <cit.>. Datasets. We demonstrate the performance of our algorithms using both real-world and benchmark data. The real-world dataset consists of 180 million tuples extracted from infrared spectra (APOGEE/APOGEE-2) of the Sloan Digital Sky Survey () <cit.>. For the benchmark dataset, we use the LINEITEM table from V3 <cit.> with a scale factor of 300; the table contains 1.8 billion tuples. In order to make results comparable across the two datasets, we use the same query structure with constraint bounds that are set to achieve a specific query hardness level given the mean and standard deviation of the attributes in the dataset. Queries. Given a dataset, we have developed a novel method for systematically generating queries of varying hardness, rather than generating queries in an ad hoc manner. This approach allows comprehensive benchmarking and yields a better empirical assessment of the generalizability of a technique to other data sets or package queries, which can be arbitrarily easy or hard. Specifically, we use a package query template and systematically vary the constraint bounds to expand or shrink its feasibility region. As a simple example, a package query with the constraint∑_j x_j < bis trivially feasible forb = ∞and infeasible forb < 1(since thex_i's are integer). The complexity of finding a solution also depends on the objective function and the shape of the feasible region. Query Hardness. To precisely define query hardness, letℰbe the expected package size, i.e., the expected number of tuples in the solution to a package query. 
Without loss of generality, consider a constraint C_i of the form ∑_j a_ij x_j < b_i. Suppose attribute A_i is a random variable with mean μ and variance σ^2. Then, with a large enough ℰ and by the central limit theorem, ℰ^-1 ∑_j=1^ℰ A_i follows a normal distribution 𝒩(μ, σ^2 ℰ^-1). Constraint C_i can be reformulated as ℰ^-1 ∑_j=1^ℰ A_i < b_i/ℰ. The probability, P(C_i), that a random sample of ℰ tuples satisfies C_i is simply given by the cumulative distribution function (CDF) of the normal distribution 𝒩(μ, σ^2 ℰ^-1) evaluated at b_i/ℰ. With m constraints, C_1, ..., C_m, and assuming the attributes are independent (a strong assumption that does not hold in practice, but one that is useful for the purpose of generating queries for benchmarking), the probability that a random sample of ℰ tuples satisfies all the constraints is P(C_1, C_2, ..., C_m) = ∏_i=1^m P(C_i). Since the chances of satisfying a harder query's constraints with a random sample of tuples are much lower, we can define hardness as follows:

Hardness := -log_10 ∏_i=1^m P(C_i)

Given a template package query with constraints C_1, …, C_m, their bounds b_1, …, b_m as parameters, and the expected package size ℰ, we can now instantiate a specific query of a specified hardness by setting the bounds accordingly. In particular, we can set P(C_1) = P(C_2) = ⋯ = P(C_m) = 10^(-Hardness/m) and invert the CDF to derive the bound b_i for which P(C_i) = 10^(-Hardness/m). Table <ref> provides information on the underlying data distribution statistics of both data sets, the package query templates, and the bounds set for query instances of a particular hardness level, where Hardness ∈ {1, 3, 5, 7}.
Approaches. Our evaluation contrasts three approaches:
* ILP solver <cit.>: This is a state-of-the-art solver that computes solutions to the ILP problem directly, without any considerations of partitioning. It provides the gold standard with respect to accuracy but struggles to scale to large data sizes.
* <cit.>: The prior state-of-the-art in package query evaluation employs a data partitioning and divide-and-conquer strategy to achieve scalability.
* : Our approach employs a multi-layer partitioning strategy that smartly augments the size of ILP subproblems to avoid false infeasibility, and a novel mechanism to parallelize and reduce solving time.
Metrics. We evaluate the efficiency and effectiveness of all methods. The first metric is running time. We measure the wall-clock time to generate a solution for each method. This includes the time taken to read data from and the time taken for the method to produce the solution. In particular, for , the running time is computed when running 80 cores in parallel, while and run sequentially. We limit the maximum running time of any method to 30 minutes. If a method fails to produce a solution within this time limit, it is registered as a failed run, i.e., no solution found.
The second metric is the integrality gap. Recall that the solution of the LP relaxation of an ILP is readily available because we can efficiently solve the LP problem using the simplex algorithm <cit.>. Hence, we use the LP objective value as the upper bound for an ILP solution in a maximization problem, and as the lower bound in a minimization problem. The integrality gap for maximization is then defined as the ratio of the ILP objective over the LP objective, (Obj_ILP + ϵ)/(Obj_LP + ϵ), where ϵ = 0.1 is required to avoid numerical instability when |Obj_LP| is too small. For minimization, we simply invert the ratio. Therefore, the integrality gap is always at least 1, assuming the objective is always positive.
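To illustrate the bound inversion just described, the sketch below (for exposition only; the attribute statistics, package size, and two-constraint setting are hypothetical) derives the bound b_i for a "sum below b_i" constraint at a target hardness H by inverting the normal CDF.

#include <cmath>
#include <cstdio>

// Standard normal CDF and its inverse (simple bisection; adequate for benchmarking).
static double normCdf(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }

static double normInv(double p) {
    double lo = -40.0, hi = 40.0;
    for (int it = 0; it < 200; ++it) {
        double mid = 0.5 * (lo + hi);
        (normCdf(mid) < p ? lo : hi) = mid;
    }
    return 0.5 * (lo + hi);
}

// Bound b_i for a "sum < b_i" constraint so that a random package of expected
// size E satisfies it with probability 10^(-H/m): by the CLT the package mean
// is approximately Normal(mu, sigma^2 / E).
double boundForHardness(double mu, double sigma, double E, double H, int m) {
    double p = std::pow(10.0, -H / m);
    return E * (mu + sigma / std::sqrt(E) * normInv(p));
}

int main() {
    // Hypothetical attribute statistics: mu = 50, sigma = 20, package size E = 100, m = 2.
    for (double H : {1.0, 3.0, 5.0, 7.0})
        std::printf("hardness %.0f -> b_i = %.2f\n", H, boundForHardness(50, 20, 100, H, 2));
    return 0;
}

Hyperparameters.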
We set hyperparameters as follows. * 's MIP gap: We keep the default value of 0.1%. will terminate when the gap of the lower and upper bound of the optimal objective value is less than 0.1% of the incumbent objective value. * 's partitioning size threshold: We find that the default setting proposed by  <cit.> (10% of the relation size, or ≈ 10 partitions) results in infeasibility in the sketch phase for all queries with hardness >2 in our benchmark. We instead set the threshold to 0.1%. This increases the number of partitions (≈ 1000) allowing for smaller groups with more similar tuples and better representatives leading to a higher solve rate, without degrading the performance of the index. * 's and : Using grid search, we find =100,000 and =100 to be optimal. Lower would cause the partitioning time to be much higher while higher would cause the partitioning groups to be less accurate since each group would contain more tuples. Higher significantly increases the query time with marginal gains to solution quality. Lower results in a significant drop in solved queries. §.§ Results Query performance as relation size increases.-1 Figure <ref> demonstrates the performance of each method as the relation size increases, as well as the effect of increasing hardness on the running time and the integrality gap. The set of hardness values that we experimented with encompasses easy to average difficulty (∈{1,3,5,7}). We generate five relation instances for each relation size by sampling independent sub-relations from the original dataset. In both datasets, only scales up to a size of one million tuples, and the running time grows exponentially with the relation size. scales relatively well up to ten million tuples, but cannot sustain scaling beyond that, since the size of refined queries increases linearly with the relation size. In addition, fails to find any solutions for hardness 3 or higher for and hardness 7 for . In contrast, always finds solutions in both datasets and achieves well below 5s running time even for one billion tuples. In terms of the integrality gap, achieves close-to-optimal solutions for and as seen by its integrality gap curve staying as close to that of . , on the other hand, produces solutions with 20% worse objective in . For , the extremely high value of integrality gap for is due to the fact that the objective column tmass_prox in has many zero values. This produces an LP solution with an objective value of 0. If only finds an ILP solution with a positive objective value, this value will be divided byϵ=0.1, i.e., will be scaled by a factor of 10. False infeasibility as hardness increases. We next examine the occurrence of false infeasibility in and as the hardness level becomes very high, shrinking the feasible region (up to 15 in our benchmark). For each query, we generate ground truth feasibility by running on the query with its objective function removed. This will allow to terminate as soon as it finds a feasible solution. We restrict the relation size to one million since this is the maximum size that can solve within the time limit. Furthermore, for each dataset and hardness level, we randomly sample 20 sub-relations of size one million representing 20 queries and compute the number of queries for which each of the methods can find a solution. Figure <ref> displays our results. For , solves 15 out of 20 queries solved by at=1and none at all for>1, whereas can solve as many queries solves up to=11and then fails to solve up to 4 queries compared to for>11. 
For , solves as much as . By comparison, solves only half as much for usual workload of∈{1,3,5}but then fails to solve most of the queries for the hard queries where>5. § RELATED WORK In-database optimization. Recent research aims to integrate complex analytics capabilities into DBMSs. SolveDB <cit.> provides extensible infrastructure for integrating a variety of black-box optimization solvers into a DBMS, whereas focuses on ILP solvers and “opens up the black box” in order to scale to large problems. SolveDB offers built-in problem partitioning which is only applicable when there are sub-problems that can be solved independently, i.e., constraints that only exist within each sub-problem. However, such a partitioning strategy is ineffective for solving package queries because most of the tuples can connect via a single constraint and thus cannot be partitioned further. provides a simple solution to the specific needs of package queries by partitioning a very large relation into similar tuples. Resource allocation problems. Partitioned Optimization Problems (POP) <cit.> is a recent technique to solve large-scale “granular” resource allocation problems that can be often formulated as ILPs whose structures are different from ILPs formulated by package queries, i.e., the number of constraints in POP can be as large as the number of variables <cit.>. POP achieves high scalability by randomly splitting the problem into sub-problems and aggregating the resulting sub-allocations into a global allocation—an approach similar to <cit.>. Thus, POP still suffers from the same disadvantages as when we increase the scale because the number of sub-problems is up to 32 in POP. Moreover, the partitioning in POP is online while is a large-scale package query solver that runs on an offline partition produced by . Semantic window queries. Semantic windows <cit.> are related to packages. A semantic window refers to a subset of a grid-partitioned space that is contiguous and has certain global properties. For example, astronomers may divide the night sky into a grid and search for areas where the overall brightness exceeds a particular threshold. Semantic windows can be expressed by package queries with a global condition to ensure that all cells in the package are contiguous. Searchlight <cit.>, a recent method for answering semantic window queries, uses in-memory synopses to quickly estimate aggregate values of contiguous regions. This approach is analogous to our hierarchical partitioning strategy using where a relation in layerlaggregates tuples from a relation in layerl-1. However, Searchlight enumerates all of its feasible solutions and retains the best one—a very expensive computation—whereas efficiently finds potentially optimal solutions via LP. Neural Diving. Neural Diving <cit.> is a machine learning-based approach to solving ILPs that trains a deep neural network to produce multiple partial assignments of variables in the input ILP, with the remaining unassigned variables defining smaller sub-ILPs that can be solved using a black-box ILP solver. The neural network is trained on all available feasible assignments to give a higher probability to the ones that have better objective values instead of only the optimal ones, which can be expensive to collect. The authors of <cit.> evaluate the method on diverse datasets containing large-scale MIPs from real-world applications such as Google Production Planning, Electric Grid Optimization <cit.>, and so on. 
Unlike Dual Reducer, Neural Diving does not prune variables using an auxiliary LP but instead uses a pre-trained neural network. This approach requires expensive training over a large dataset of similar problem instances in order to learn effective heuristics. Moreover, solving package queries beyond millions is currently out of reach for Neural Diving since it requires the neural network—whose size scales with the number of variables and constraints—to fit in memory. § CONCLUSIONS AND FUTURE WORK In this paper, we expand our ability significantly beyond prior art <cit.> to solve challenging package queries over very large relations. Our novel strategy uses a hierarchy of relations created via a sequence of partitionings, smartly augments the size of ILP subproblems to avoid false infeasibility, and provides a novel mechanism to parallelize and reduce solving time. In future work, we plan to investigate combining Neural Diving and to potentially solve a wide range of ILP problems (not just package queries) in arbitrarily large relations. Although Neural Diving is not currently a feasible approach, running large-scale neural networks inside a DBMS will eventually become efficient, e.g., by integrating tensor technology into DBMS <cit.>. Then—because it is straightforward to generate different package queries with various hardnesses and query structures—a potential approach for solving package queries with high hardness would train Neural Diving using the feasible solutions generated from . Acknowledgements. This work was supported by the ASPIRE Award for Research Excellence (AARE-2020) grant AARE20-307 and NYUAD CITIES, funded by Tamkeen under the Research Institute Award CG001, and by the National Science Foundation under grants 1943971 and 2211918. § LINEAR PROGRAMMING IN OUR SETTING §.§ Linear Program In our setting, a linear program (LP) of n variables and m constraints is an optimization problem where one aims to find a real vector x that solves the following: I[ minimize c^Tx; subject to Ax≤b; l≤x≤u ] where c, x, l, u are vectors of size n, b is vector of size m, and A is a matrix of size m × n. For our package query application, we assume that x is finitely bounded between l and u. This assumption allows us to finitely bound b_l≤Ax≤b_u. Rewriting the above LP (<ref>) into a standard form by adding the slack variables representing as vector s of size m, we have: II[ minimize c^Tx; subject to -Ax + Is = 0; l≤x≤u; b_l≤ s ≤b_u ] where I is the identity matrix of size m × m. Now we convert our original LP of n variables to an LP of n+m variables by concatenating x and s into x. III[ minimize c^Tx; subject to Ax = 0; l ≤ x ≤ u ] where c=[c | 0], A=[-A | I], x=[x | s], l = [l | b_l], u = [u | b_u]. The LP formulation in (<ref>) is in standard form, with n+m variables and m constraints that our simplex method will solve. §.§ Simplex Algorithms Since its inception, the simplex algorithm and its variants have dominated the approaches to solving LP problems <cit.>. Alternative approaches such as the interior-point methods <cit.> have also gained popularity due to better theoretical guarantees. However, in terms of simplicity and ease of parallelization, the simplex algorithm is far more preferable. We focus on the “bounded” simplex algorithm for LPs with upper and lower bounds on the components of x, as given in (<ref>) above. Intuitively, the set of feasible solutions to the LP forms a convex polytope. (A polytope generalizes a 2D polygon and a 3D polyhedron to n>3 dimensions.) 
A basic feasible solution corresponds to a vertex of the polytope. For such a solution, every variable is either: * nonbasic, taking a value at either its lower bound or its upper bound, or * basic, with its value determined from the nonbasic variables; the set of basic variables is always of size m. If there exists an optimal solution to the LP, then at least one basic feasible solution is optimal, and so simplex algorithms search through the basic feasible solutions. For a basic feasible solution, the set B, called the basis, comprises the indices of the basic variables; its complementary set N is the set of indices of the nonbasic variables. We denote by x_B the sub-vector of x containing the values of the basic variables and by x_N the sub-vector of x containing the values of the nonbasic variables, and similarly define c_B and c_N. It is not practically feasible to examine, in a brute-force manner, all \binom{n+m}{m} basic feasible solutions that correspond to all possible choices for the set of basic variables. This is especially true because, for each combination, we also need to decide whether each nonbasic variable should be set to its lower or upper bound. Instead, the most basic type of simplex method, a “primal” simplex method, starts with an initial basis and, for each simplex iteration, swaps a basic variable with a nonbasic one so as to maximally decrease the objective value. The simplex algorithm terminates when no further improvement is possible, and the resulting basic feasible solution is guaranteed to be optimal. Geometrically, this procedure corresponds to starting at some random vertex and then repeatedly moving from the current vertex to an adjacent vertex until an optimal solution is found. (The red arrow on the left side of Figure <ref> shows a single step in the case of a 3D polytope.) The following result gives the necessary and sufficient conditions for a basic feasible solution to be optimal. A basic feasible solution with basis B is optimal if and only if B is both primal-feasible and dual-feasible, where * B is primal-feasible if l_B ≤ x_B ≤ u_B, and * B is dual-feasible if for all nonbasic variables x_i (i ∈ N): * if x_i is at its lower bound then d_i ≥ 0, and * if x_i is at its upper bound then d_i ≤ 0, where d=c-(A_B^-1A)^Tc_B is the reduced cost vector and A_B is the sub-matrix of A whose columns are indexed by B. The original simplex algorithm developed by Dantzig <cit.> is a primal simplex algorithm: it starts with a primal-feasible basis B (which can be dual-feasible or dual-infeasible) and tries to reduce dual infeasibility until the optimality conditions are satisfied. On the other hand, a dual simplex algorithm maintains a dual-feasible basis while reducing primal infeasibility. The argument favoring the dual version of simplex over the primal in our setting is the existence of the Bound Flipping Ratio Test (BFRT) procedure. Intuitively, BFRT works well when the geometry of the LP problem allows “long-step” iterations. As discussed above, the primal simplex algorithm moves from the current vertex to the most beneficial adjacent vertex at each step. In the dual simplex algorithm with BFRT, an iteration can be equivalent to many such steps and is called a long step. (The long step must be dual-feasible.) Therefore, the dual version requires many fewer simplex iterations than the primal version.
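As a concrete reading of the optimality conditions above, the following sketch (a hypothetical helper of ours, not the system's code) tests a candidate basis using the reduced costs d = c - (A_B^{-1}A)^T c_B:

```python
import numpy as np

def is_optimal_basis(A, c, l, u, x, B, N, at_lower, tol=1e-9):
    """Check primal feasibility of x_B and dual feasibility of the reduced costs.
    `at_lower[i]` indicates whether nonbasic variable i sits at its lower bound."""
    # Primal feasibility: basic variables within their bounds.
    primal_ok = np.all(x[B] >= l[B] - tol) and np.all(x[B] <= u[B] + tol)
    # Reduced costs: d = c - A^T (A_B^T)^{-1} c_B.
    A_B = A[:, B]
    d = c - A.T @ np.linalg.solve(A_B.T, c[B])
    # Dual feasibility: the sign of d_i must agree with the bound each nonbasic variable is at.
    dual_ok = all((d[i] >= -tol) if at_lower[i] else (d[i] <= tol) for i in N)
    return primal_ok and dual_ok
```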
§ PARALLEL DUAL SIMPLEX

Given the foregoing considerations, dual simplex is the best choice in our setting: in many of our LP runs, the geometry of the LP problem enables such long-step iterations. For example, we usually observe that the first simplex iteration is a long-step iteration equivalent to approximately n/2 single-step iterations. Below is a discussion of simplifications and optimizations in our implementation of . We repeatedly take advantage of the fact that m is small (on the order of 10–20 constraints) and n ≫ m.

§.§ Finding an Initial Dual-Feasible Basis

Recall that the dual simplex algorithm assumes that we start with a dual-feasible basis. Therefore, finding a dual-feasible basis to start with is the main goal of phase-1 of the simplex algorithm. The standard approach to phase-1 modifies the objective function to reflect the dual infeasibility of the basis; minimizing this artificial objective function eventually yields a dual-feasible basis. Therefore, phase-1 usually requires an execution of the simplex algorithm itself. We show that in our setting, phase-1 does not require such an execution, since a suitable basis is readily available. Indeed, let us pick the basis as the set of slack variables represented by the vector s. Then c_B=0, since the objective coefficients of the slack variables in our standard form are zero. Hence, the reduced cost is d=c-(A_B^-1A)^Tc_B=c. Hence, to make B dual-feasible, we set: * x_i=l_i when c_i ≥ 0 for i ∈ N, and * x_i=u_i when c_i ≤ 0 for i ∈ N. This procedure gives us x_N, which determines x_B via x_B=-A_B^-1A_Nx_N. Notice that such a basis is readily available since we have assumed that x is finitely bounded in the first place.

§.§ Inverting the Matrix A_B

In the dual simplex algorithm, there are important procedures called FTran and BTran where we have to solve a system of linear equations of the form A_Bp=q, where p, q are vectors of size m. Standard approaches maintain an LU-factorization that stores A_B=LU, where L and U are lower and upper triangular matrices, respectively. Whenever a swap occurs, an update is made to the LU-factorization of A_B. In our setting, since the number of constraints m is between 1 and 20, A_B^-1 can be updated and stored directly. This allows lower memory usage as well as efficient updates.

§.§ Parallelization Opportunities

In our setting, there are two procedures of the dual simplex that take most of the execution time: Pivot Computation (45%) and the Bound Flipping Ratio Test (35%). The first procedure involves a matrix-vector multiplication A_N^Tp, where A_N is a matrix of size m × n and p is a vector of size m. In our setting, since n can be at a scale of hundreds of millions, it is efficient to parallelize the computation with respect to n. The second procedure, BFRT, is sequential in nature. However, in our setting, it is possible to parallelize it efficiently. In layman's terms, BFRT is equivalent to the following problem: Bill is an enthusiastic traveler who wants to go to as many locations as he can out of n possible locations. Each location i has: * a scenic score s_i, and * a cost c_i. Bill must travel to these locations in ascending order of scenic score, without skipping any location, while staying within his total budget G. How can we help Bill quickly identify the sequence of locations that he will travel to, given a budget G?
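Solved sequentially, Bill's problem is just a sort followed by a prefix-sum scan; the two parallel variants below distribute exactly this computation across p cores. A minimal single-threaded sketch (illustrative only):

```python
import numpy as np

def bill_itinerary(s, c, G):
    """Visit locations in ascending scenic score s_i, paying cost c_i,
    until the total cost would exceed the budget G.
    Returns the indices of the visited locations, in visiting order."""
    order = np.argsort(s)                         # ascending scenic score
    total = np.cumsum(c[order])                   # running cost of each prefix
    k = np.searchsorted(total, G, side="right")   # largest prefix affordable within G
    return order[:k]
```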
Due to the geometry of our LP problems, in the first simplex iteration, Bill will have a very large budget G, which allows him to visit approximately 50% of the n possible locations. For subsequent iterations, G is much smaller and he can only visit a very small number of locations (0–200). Therefore, we have two parallelization algorithms, for a large budget and a small budget G respectively, where p is the number of cores used.

BFRT First Iteration (Large budget G) * Compute s_i and c_i for all n locations in parallel, with time complexity O(n/p). * Parallel sort the entire set of locations based on s_i using MapSort by EdaHiro <cit.>, with complexity O(p^-1 n log n). * Compute the consecutive (prefix) sums of the sorted c_i in parallel, with complexity O(p + 2np^-1). * Binary search on the consecutive sums of c_i to find the last location that Bill can visit, in O(log n). Hence the overall time complexity is O(p^-1 n log n).

BFRT Subsequent Iterations (Small budget G) * While computing s_i and c_i for all n locations in parallel, each core maintains a max-heap that keeps the z lowest-scoring scenic locations (z ≪ n, and z can vary among the cores) while keeping the sum of the c_i of these z locations above G. This can be done in O(p^-1 n log z). * After computing s_i and c_i, we have a set of candidate locations collected from all the max-heaps. The size of such a set is approximately z p, where z is the average size of the max-heap in each core by the end of the computation. * Use a min-heap to sequentially pop the lowest-scoring scenic locations from the set while keeping the total cost within the budget G. This can be done in O(p z log(p z)). Hence the overall time complexity is O(p^-1 n log z), where z is the size of the largest max-heap in a core. However, in our observations, such z does not deviate much from the average z.

§ Algorithm <ref> approximates the constants c_j for each attribute j of a relation S such that partitions a set P over attribute j into approximately subsets via a bounding variance β=c_j σ^2_j / ^2, where σ^2_j is the variance of attribute j over the tuples in P. The algorithm first samples a subset P of the relation S (line 1). For each attribute j, it computes the minimum and maximum possible variance of a subset of P (lines 3, 4) and then runs a binary search for the bounding variance β on this interval to compute β such that partitions P into approximately n ≈ subsets (line 7).

§ HANDLING LOCAL PREDICATES

Local predicates are constraints that each tuple in the package has to satisfy individually. Depending on the selectivity of the local predicates, the representative tuples in can become increasingly inaccurate. For example, low selectivity means that only a small number of tuples satisfy the predicates, which causes the representative tuples to represent fewer tuples than they should. To compensate for this inaccuracy, we implement the naive approach of adjusting the representative tuples according to the local predicates during query processing. This adjustment happens at all the layers in the hierarchy of relations for . We conducted experiments to measure the running time and integrality gap of in the presence of numeric local predicates with different selectivities. We considered numeric local predicates of the form “a<x” or “a>x” for a specified attribute a. In terms of the integrality gap, performs on par even with low-selectivity predicates. However, the scalability of is diminished, especially when the relation size increases and selectivity decreases.
In particular, over all selectivities, has an average running time of 15 s for a relation size of 10 million, compared to 300 s for a size of 100 million. This is therefore a limitation of the naive adjustment approach when it comes to local predicates.
http://arxiv.org/abs/2307.01083v1
20230703150615
Using Solar Orbiter as an upstream solar wind monitor for real time space weather predictions
[ "R. Laker", "T. S. Horbury", "H. O'Brien", "E. J. Fauchon-Jones", "V. Angelini", "N. Fargette", "T. Amerstorfer", "M. Bauer", "C. Möstl", "E. E. Davies", "J. A. Davies", "R. Harrison", "D. Barnes", "M. Dumbović" ]
physics.space-ph
[ "physics.space-ph" ]
R. Laker1, T. S. Horbury1, H. O'Brien1, E. J. Fauchon-Jones1, V. Angelini1, N. Fargette1, T. Amerstorfer2, M. Bauer2, C. Möstl2, E. E. Davies2, J. A. Davies3, R. Harrison3, D. Barnes3, M. Dumbović4 1Imperial College London, Blackett Laboratory, South Kensington, SW7 2AZ 2Austrian Space Weather Office, GeoSphere Austria, Reininghausstraße 3, 8020 Graz, Austria 3RAL Space, STFC Rutherford Appleton Laboratory, Harwell Campus, Didcot OX11 0QX, UK 4Hvar Observatory, Faculty of Geodesy, University of Zagreb, Croatia Ronan [email protected] * Real time data from Solar Orbiter was used to predict the arrival time of a coronal mass ejection at Earth more than a day in advance * In situ measurements at 0.5 AU were used to reduce the mean absolute error in arrival time from 10.4 to 2.5 hours with the ELEvoHI model * The in situ Bz profile was comparable to the geomagnetic response at Earth, despite being separated by 0.5 AU and 10^∘ longitude Coronal mass ejections (CMEs) can create significant disruption to human activities and systems on Earth, much of which can be mitigated with prior warning of the upstream solar wind conditions. However, it is currently extremely challenging to accurately predict the arrival time and internal structure of a CME from coronagraph images alone. In this study, we take advantage of a rare opportunity to use Solar Orbiter, at 0.5 AU upstream of Earth, as an upstream solar wind monitor. We were able to use real time science quality magnetic field measurements, taken only 12 minutes earlier, to predict the arrival time of a CME prior to reaching Earth. We used measurements at Solar Orbiter to constrain an ensemble of simulation runs from the ELEvoHI model, reducing the uncertainty in arrival time from 10.4 hours to 2.5 hours. There was also an excellent agreement in the B_z profile between Solar Orbiter and Wind spacecraft, despite being separated by 0.5 AU and 10^∘ longitude. Therefore, we show that it is possible to predict not only the arrival time of a CME, but the sub-structure of the magnetic field within it, over a day in advance. The opportunity to use Solar Orbiter as an upstream solar wind monitor will repeat once a year, which should further help assess the efficacy upstream in-situ measurements in real time space weather forecasting. § PLAIN LANGUAGE SUMMARY Coronal mass ejections (CMEs) are large eruptions of plasma from the Sun that can significantly disrupt human technology when directed at Earth. Much like weather on Earth, the consequences of these `space weather' events can be lessened with warning of their arrival, e.g., putting satellites into safe mode. This is usually done by identifying a CME in telescope images, and then predicting if and when it will arrive at Earth. However, the current forecasting models have large uncertainties in arrival time, and struggle to predict the in situ properties of the CME, which can significantly alter the severity of the event. In this paper, by taking advantage of a period in March 2022, we were able to use real time measurements from halfway between the Sun and the Earth taken by the Solar Orbiter spacecraft. This allowed us, for the first time, to predict the arrival time and magnetic structure of a CME, more than a day before it arrived at Earth. We also show that the Solar Orbiter measurements can be used to constrain a CME propagation model, significantly improving the accuracy and precision of the forecasted arrival time. 
§ INTRODUCTION

Space weather can create significant disruption to human technology both in space and on Earth, including loss of satellites, damage to power grids and communication blackouts <cit.>. Fortunately, many of these effects can be mitigated with prior warning, meaning that timely and accurate predictions of the arrival and severity of space weather events are extremely important <cit.>. The majority of severe geomagnetic storms at Earth are driven by coronal mass ejections (CMEs) <cit.>, large and complex structures impulsively released from the Sun's corona. Remote sensing observations of the corona can reveal the release of a CME from the Sun, whose propagation through the ambient solar wind can then be simulated to predict an arrival time at Earth. While there are many sophisticated solar wind and CME models, they often have arrival time errors of ± 10 hours, partly due to uncertain estimates of the CME's initial parameters <cit.>. In addition, the interaction between the CME and the ambient solar wind can deflect, deform and rotate the CME, which can also significantly affect the arrival time <cit.>. Information regarding the orientation of the CME's magnetic field, a primary indicator of geo-effectiveness, is often as valuable as the predictions of arrival time <cit.>. Knowing the potential impact of the CME, rather than just its arrival time, can limit the number of false positives and make predictions more useful for those commercial applications where there is a high cost of mitigation, e.g., putting a spacecraft into a safe mode. Currently, information about the internal magnetic structure and plasma parameters of an Earth-directed CME is provided by a number of spacecraft at L1: Wind, ACE and DSCOVR. However, their placement just ahead of Earth only provides a lead time of around an hour, which is often not long enough to take appropriate mitigating measures. To address many of the problems with the current prediction framework, future upstream solar wind monitors have been proposed, which would be positioned further from Earth than L1. However, such a mission would rely on currently inaccessible solar sail technology or a large constellation of orbiting probes <cit.>, meaning that there are still open questions regarding the efficacy of these future proposals. In the case of the probe constellation, it would be useful to know the minimum separation in longitude that would provide continuous prediction capabilities. In addition, the optimal position for a spacecraft would be a trade-off between improved lead time and the accuracy of the prediction. For example, placing a spacecraft within Mercury's orbit would give several days of lead time, but the solar wind structures seen by the spacecraft may have evolved significantly by the time they arrive at Earth. While several case studies have already shown how an upstream spacecraft can be useful in predicting the arrival time and geo-effectiveness <cit.>, such a concept has never been attempted in real time. In this paper, we take advantage of an unprecedented opportunity to use Solar Orbiter <cit.> as a real time upstream solar wind monitor. During a period between February and March 2022, Solar Orbiter crossed the Sun-Earth line at a heliocentric distance of 0.5 AU, observing two CMEs in situ and, for the first time, providing predictions of the arrival time and magnetic structure before they arrived at Earth.
In Section <ref>, we outline the operational constraints of the Solar Orbiter mission for this purpose and also detail the models used for arrival time prediction. We then present the results of our two CME case studies in Section <ref>, showing how these measurements can be used to improve the uncertainty in predicted arrival time prior to reaching Earth. Finally, the similarity of the magnetic field structure at 0.5 AU and Earth is investigated in Section <ref>, opening up the possibility to predict the sub-structure of a CME event, not just the arrival time.

§ METHODOLOGY

§.§ Operations

During February and March 2022 Solar Orbiter travelled from 0.8 AU to 0.32 AU heliocentric distance, crossing the Sun-Earth line on 6 March 2022, as summarised by Fig. <ref>. While the spacecraft has crossed the Sun-Earth line before, for this crossing, Solar Orbiter had the capability to return data sufficiently quickly to predict the onset and severity of geomagnetic storms at Earth. This was made possible by the low latency data products returned by the instruments at 100 bits/second, which are intended to help with the `very short term planning' of the remote sensing instruments – pointing them in the direction of relevant solar structures, such as active regions or the polar coronal holes <cit.>. Since Solar Orbiter was not designed to be an upstream solar wind monitor, there were a few operational constraints that affected our prediction capability. First, the data was only downloaded in an 8-hour window per day, referred to as a `pass'. Therefore, if a CME arrived between passes, we could not provide predictions until the next pass began, which could be up to 16 hours later. Under normal operation, the low latency data taken between passes is downloaded within 30 minutes of the start of the next pass, while within the pass the latency is less than 5 minutes. Since this is only intended to be quick look data for short term planning, this data typically still has artefacts that make it unsuitable for science. To overcome these challenges, the Solar Orbiter MAG team <cit.> created a pipeline to produce real time data that was closer to science quality. Housekeeping data, provided in the low latency packets, was used to remove over 50 different heater signals from the spacecraft, as well as interference from other instruments aboard Solar Orbiter <cit.>. This custom pipeline was then run on demand during the pass, and could achieve a latency of just 12 minutes from taking the measurement to being science quality on the ground, which included a 4-minute light travel time. However, for the purposes of this study, the real time data was only available for the magnetic field, with the plasma data from the Proton Alpha Sensor (PAS) <cit.> becoming available at the end of the 8-hour pass. With Solar Orbiter's position, our lead time would be around 35 hours for a 600 km/s CME at 0.5 AU, meaning that in practice we still had access to the plasma data before the CME arrived at Earth. For future radial alignments, both plasma and magnetic field data will be available in real time.

§.§ Modelling

After identifying a CME within the Solar Orbiter in situ data, we then attempted to predict its arrival time at Earth. As well as providing an estimate simply using distance divided by speed from PAS, which inherently neglects effects such as drag, we also applied the ELEvoHI model in real time <cit.>.
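As a rough illustration of the simple kinematic estimate mentioned above (a back-of-the-envelope sketch only; the 600 km/s speed and 0.5 AU separation are the example values quoted in the text), the lead time follows directly from distance over speed:

```python
AU_KM = 1.495979e8   # 1 astronomical unit in km

def lead_time_hours(distance_au=0.5, speed_km_s=600.0):
    """Crude ballistic arrival estimate: time for a CME travelling at constant
    speed to cover the remaining distance to Earth (drag and deflection neglected)."""
    return distance_au * AU_KM / speed_km_s / 3600.0

print(f"{lead_time_hours():.0f} hours")   # ~35 hours
```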
For this particular model, we benefited not only from the fortunate line-up between Solar Orbiter and Earth, but also from the position of STEREO-A, which was such that it could provide a side view of any CMEs directed towards Solar Orbiter. A full description of this model, and the underlying assumptions, can be found in Amerstorfer2021 and Bauer2021. In essence, this model first uses the heliospheric imager (HI) <cit.> data from STEREO-A to obtain an elongation track that is then converted to radial distance using the ELlipse Conversion method (ELCon) <cit.>. Tracing the CME front, and fitting the time-distance profile with a drag-based model (DBM) <cit.>, allows for the estimation of the CME kinematics, which can then be extrapolated to Earth with the ELEvo model <cit.>. This framework also requires the CME direction, which is provided by the Fixed-ϕ fitting (FPF) model <cit.>, giving a total of five inputs to the ELEvoHI model. By varying the input parameters according to Bauer2021, an ensemble of 210 model runs is generated, which can be seen in Fig. <ref>. This provides a range of arrival times that are treated as the uncertainty in the overall ELEvoHI ensemble model. For real time applications the beacon, rather than science, data from STEREO-A must be used. Although this data is of lower quality, Bauer2021 demonstrated that it can still provide an arrival time prediction with a mean absolute error (MAE) of 11.4±8.7 hours, as opposed to 8.8±3.2 hours using science data. In theory, this prediction uncertainty can be significantly reduced by rejecting those ensemble members that do not match the in situ measurements from a spacecraft within 1 AU. Constraining a CME model with in situ data has been successful in previous studies, although these were carried out in hindsight of the CME event <cit.>. However, with the real time availability of Solar Orbiter data we have, for the first time, constrained the ELEvoHI model with measurements at ∼0.5 AU before the CME arrived at Earth.

§ RESULTS AND DISCUSSION

As Solar Orbiter crossed the Sun-Earth line, two CMEs were observed by the spacecraft, at the times depicted as red dots in Figure <ref>. Both case studies were then subsequently observed by Wind at L1, with the specific timings listed in Table <ref>. We will first discuss the accuracy of the arrival time predictions in Section <ref>, before investigating the similarity in the magnetic structure between Solar Orbiter and Wind in Section <ref>.

§.§ Arrival Time

After observing the first event at Solar Orbiter at 2022-03-07 22:49 (case 1), we then tracked the CME front in the time-elongation maps generated from STEREO-A HI beacon data, from 6 March onwards. Following the method of Bauer2021, this procedure was repeated four more times and interpolated to an equally spaced time axis. This produced a single profile, as seen in the upper panel of Fig. <ref>, that was input into the ELEvoHI model to simulate the CME propagation towards Earth. As discussed in Section <ref>, an ensemble of 210 model runs is used to make the final prediction of arrival time, as shown in the lower panel of Fig. <ref>. The differences between the simulated and true arrival times over the ensemble are shown in Fig. <ref>, ranging from -7.8 hours to 47.1 hours (negative values indicate the prediction was earlier than the true arrival time).
The mean error (ME) in arrival time at Earth was +8.1 hours, with a mean absolute error (MAE) of 10.4 hours (Table <ref>), representing typical values for such a simulation <cit.>. Using the arrival time at Solar Orbiter as a constraint, with a ± 4hour threshold, we were able to reject 111 of 210 ensemble members, leaving only 99 runs (red) in Figs. <ref> and <ref>. This significantly improved the accuracy and precision of the simulation, lowering the ME to -2.4 hours and the MAE to 2.5 hours. In addition, by removing many of the erroneous runs, the range of arrival times was now between -7.7 hours to -2.3 hours. While constraining simulations in this way has been attempted in previous studies <cit.>, we have shown, for the first time, that this can lead to a drastically improved prediction in real time. This same method was also applied to case 2, where the ME was reduced from +2.2 hours to -0.1 hours and the MAE improved from 2.7 hours to 1.1 hours. Although this was not done in real time, but still based on HI beacon data, it provided another successful demonstration of the benefits of constraining the ensemble runs. Such an improvement in arrival time was achieved even when the Solar Orbiter spacecraft was 9.6^∘ away from the Sun-Earth line. While this is only one example, it does suggest that a constellation of orbiting probes can be separated by at least 19^∘ at 0.5 AU and still provide useful prediction capabilities. Even without a complex model, but only using an estimated speed from PAS, we were still able to produce accurate arrival time predictions (Table <ref>). This is likely due to the fact that CMEs are relatively unaffected between 0.5 AU to 1AU, with any significant deflections having already mainly occurred closer to the Sun <cit.>. Of course, CMEs are still known to deflect and deform in the solar wind depending on the downstream conditions <cit.>. Such a problem could be addressed in future with an improved ELEvoHI model <cit.>, or with a 1D model that can capture CME deformation <cit.>. §.§ Magnetic field structure Knowledge of the upstream CME conditions, namely proton speed, density and B_z, is arguably of equal importance as the arrival time <cit.>. Such parameters influence the dynamic pressure of the CME and its ability to trigger magnetic reconnection at the magnetopause <cit.>, which leads to the onset of geomagnetic storms. The orientation of the magnetic field in the CME, whether B_z is positive or negative, is the primary indicator of storm severity <cit.>, although predicting this property remains a challenging problem for CME models <cit.>. Therefore, it would be a major advantage if the transient structures seen by a spacecraft within 1AU were correlated with those subsequently seen at Earth. To evaluate the similarity in structure between the spacecraft, Fig. <ref> shows the in situ measurements for case 2 from both Solar Orbiter and Wind, with the former time shifted to match up the shock fronts. Both spacecraft depict a typical CME profile, with a shock front occurring before a denser, more variable sheath region that was followed by the smooth rotation of the magnetic field in the flux rope. There was an excellent agreement in the B_z profile between the two spacecraft, with three periods of negative B_z (highlighted regions) followed by a flux rope with a mainly northward orientation. As expected, the first B_z<0 region in Solar Orbiter is now part of the CME sheath at Wind, having been overtaken by the shock. 
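In practice, the ensemble constraint applied above amounts to a simple filter followed by recomputing the error statistics; a schematic sketch of this bookkeeping (our own illustration, with hypothetical variable names) is:

```python
import numpy as np

def constrain_ensemble(t_solo_pred, t_earth_pred, t_solo_obs, window_hours=4.0):
    """Keep only ensemble members whose predicted arrival at Solar Orbiter lies
    within +/- window_hours of the observed arrival, and return the surviving
    predicted arrival times at Earth (all times in hours from a common epoch)."""
    keep = np.abs(np.asarray(t_solo_pred) - t_solo_obs) <= window_hours
    return np.asarray(t_earth_pred)[keep]

def error_stats(t_earth_pred, t_earth_obs):
    """Mean error (ME) and mean absolute error (MAE) of the predicted Earth arrivals."""
    err = np.asarray(t_earth_pred) - t_earth_obs
    return err.mean(), np.abs(err).mean()
```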
The B_z profile was relatively unchanged at Wind due to the low inclination of the shock at Solar Orbiter, which had an azimuth (ϕ) of 56^∘ and an elevation (θ) of 4.4^∘ found using the cross product of the magnetic field either side of the shock[RTN coordinate system, where ϕ is the angle in the R-T plane where 0^∘ points along R⃗ and 90^∘ along T⃗. θ is the angle out of the R-T plane.]. Similarly, the B_z signature was comparable in the CME flux rope between the two spacecraft, although, this was less clear for the other magnetic field components shown in Fig. <ref>. To further assess the similarity between the two CMEs, we fit the flux rope signatures with a simple force-free model assuming cylindrical symmetry and using Bessel function solutions <cit.>. We find a helicity sign of -1 for both events, and an azimuth, elevation and impact factor of (-67±5^∘, 31±2.5^∘, 0.18±0.02 R_E (0.8%)) for Solar Orbiter and (-76± 6^∘, 43±2.5^∘, -6.3±0.24 R_E (19%)) for Wind. While the fitting is sensitive to variations in CME boundary definition, the results are consistent enough to demonstrate that the orientation of the flux rope remains fairly stable, with the changes in magnetic field component being mostly due to a change in impact factor. Such a scenario is consistent with the relative positions of the two spacecraft, with Solar Orbiter being 9.6^∘ away from the Sun-Earth line at 0.5 AU. Therefore, both the orientation of the CME sheath and flux rope were similar between the spacecraft, demonstrating that CME structures can remain coherent between an upstream monitor and Earth. Interestingly, the magnetic response at Earth (Dst and SYM/H indices) was consistent with the B_z signature at Solar Orbiter, displaying three dips before the flux rope slowly rotated northward. So, at least for this event, the magnetic storm indicators at Earth were strongly correlated to the magnetic structure seen at 0.5 AU around 40 hours prior. This suggests that with an upstream spacecraft, it is possible to predict not only the arrival of a CME, but the sub-structure of the flux rope and sheath region using measurements from 0.5 AU. For this case, the magnetic field at Solar Orbiter was sufficient to capture the general trends in Dst at Earth, although, there were still some clear differences in Fig. <ref>. Much like the improvements in arrival time from Section <ref>, knowledge of the in situ CME properties, such as shock or flux rope orientation, could be used as the initial parameters for a CME propagation model <cit.>. Again, this would not have to account for deflections in the corona, and should make for more accurate predictions of magnetic structure compared to those based on coronagraph observations. This would be another step towards being able to predict how the geomagnetic storm at Earth will develop on an hourly timescale. It is important to note that this is only a single CME event, and a more extensive statistical study is needed. Indeed, the in situ structure of case 1 was a complex interaction of two flux ropes, rather than the typical sheath, flux rope profile seen in case 2. Nevertheless, the ability to predict CME structure is still extremely challenging for current models <cit.>, especially within the sheath region, which are known to be major drivers of geomagnetic storms <cit.>. In future, given knowledge of the shock orientation at 0.5 AU, the Rankine-Hugoniot conditions could be applied to the ambient solar wind along the CME path to simulate the growth of the sheath region <cit.>. 
§ CONCLUSIONS In this study, we took advantage of an opportunity to use Solar Orbiter as an upstream space weather monitor between February and March 2022. As Solar Orbiter crossed the Sun-Earth line, we were able to use in situ data taken only 12 minutes earlier. In combination with the favourable position of STEREO-A, we were able to model the kinematics of two CME events with the ELEvoHI model. We first demonstrated how knowledge of arrival time at Solar Orbiter could be used to constrain the ELEvoHI model. Under normal operation, this model uses an average of 210 ensemble members to estimate the arrival time of a CME at Earth. However, by only keeping those ensemble members that were within ±4 hours of the arrival time at Solar Orbiter, we improved both the accuracy and precision of the model. Specifically, the MAE was reduced from 10.4 hours to 2.5 hours and 2.7 hours to 1.1 hours in the two case studies. Therefore, for the first time, a numerical model was constrained with data from 0.5 AU to produce an updated prediction before the actual arrival of the CME at Earth. As well as demonstrating how a spacecraft at 0.5 AU could provide a lead time of over 40 hours, we also showed that the predictions were still accurate when Solar Orbiter was 9.6^∘ away from the Sun-Earth line. Such a result provides motivation for a future constellation mission housing in situ instrumentation, since the individual spacecraft could be separated by at least 19^∘, making the concept more viable. While we could have used another CME model, we found it particularly beneficial to model the CMEs with the aid of STEREO-A HI data away the Sun-Earth line. Similar dedicated real time HI data can hopefully be provided by ESA's Vigil mission in the near future. We also found that even simple estimates of arrival time, using just the average CME speed at Solar Orbiter, could produce accurate arrival times at Earth. This represents a major benefit of an upstream solar wind monitor, as it reduces the need to model complex interactions and deflections in the corona, that can drastically alter arrival time. Of course, there is still a need to account for compression, distortion and rotation as the CME propagates in the solar wind. This, and the interaction between CMEs, could be handled by a 1D model <cit.> or an improved version of ELEvoHI in the future <cit.>. Comparing measurements from Solar Orbiter and Wind revealed that the magnetic structure of the CME sheath and flux rope was remarkably similar for the second case study, despite being separated by 0.5 AU. Crucially, these periods of negative B_z were also seen to match well with the magnetic response at Earth (Dst and SYM/H profiles), opening up the possibility to predict the evolution of a geomagnetic storm from measurements at 0.5 AU. This would also be important for reducing the number of false positive predictions, since the geo-effectiveness could be evaluated more than a day in advance. In future, we hope that more models can make use of this data, either to constrain the output or to initiate a CME simulation away from the complex environment near the Sun. Fortunately, the opportunity to use Solar Orbiter for this purpose repeats once a year, with more CMEs being released as the Sun approaches solar maximum. 
§ OPEN RESEARCH SECTION The data used in this paper are available at the following places: Solar Orbiter data can be found on the Solar Orbiter archive (<https://soar.esac.esa.int/soar/>); Wind data can be found on CDAWeb (<https://cdaweb.gsfc.nasa.gov/>); quick look Dst data is available from <https://wdc.kugi.kyoto-u.ac.jp/dst_realtime/index.html>; the SYM/H data is available from <https://wdc.kugi.kyoto-u.ac.jp/aeasy/> and STEREO/HI data are available from <https://www.ukssdc.ac.uk/solar/stereo/data.html>. The ELEvoHI model is available on GitHub (<https://github.com/tamerstorfer/ELEvoHI/releases/tag/v1.0.0.0>). RL was supported by an Imperial College President’s Scholarship and TSH by STFC ST/S000364/1. The Solar Orbiter magnetometer was funded by the UK Space Agency (grant ST/T001062/1). We acknowledge the work of all the engineers who supported the instrument development and calibration, together with the engineering and technical staff at the European Space Agency, including all the Solar Orbiter instrument teams, and Airbus Space. T.A and M.B thank the Austrian Science Fund (FWF): P 36093, P31659. C.M. and E. E. D. are funded by the European Union (ERC, HELIO4CAST, 101042188). The HI instruments on STEREO were developed by a consortium that comprised the Rutherford Appleton Laboratory (UK), the University of Birmingham (UK), Centre Spatial de Liège (CSL, Belgium) and the Naval Research Laboratory (NRL, USA). The STEREO/SECCHI project, of which HI is a part, is an international consortium led by NRL. J.D., R.H and D.B. recognise the support of the UK Space Agency for funding STEREO/HI operations in the UK. M.D. acknowledges the support by the Croatian Science Foundation under the project IP-2020-02-9893 (ICOHOSS). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
http://arxiv.org/abs/2307.01179v1
20230703174456
Large deviations for the $q$-deformed polynuclear growth
[ "Sayan Das", "Yuchen Liao", "Matteo Mucciconi" ]
math.PR
[ "math.PR", "math-ph", "math.CO", "math.MP" ]
In this paper, we study large time large deviations for the height function 𝔥(x,t) of the q-deformed polynuclear growth introduced in <cit.>. We show that the upper-tail deviations have speed t and derive an explicit formula for the rate function Φ_+(μ). On the other hand, we show that the lower-tail deviations have speed t^2 and express the corresponding rate function Φ_-(μ) in terms of a variational problem. Our analysis relies on distributional identities between the height function 𝔥 and two important measures on the set of integer partitions: the Poissonized Plancherel measure and the cylindric Plancherel measure. Following a scheme developed in <cit.> we analyze a Fredholm determinant representation for the q-Laplace transform of 𝔥(x,t), from which we extract exact Lyapunov exponents and, through inversion, the upper-tail rate function Φ_+. The proof of the lower-tail large deviation principle is more subtle and requires several novel ideas which combine classical asymptotic results for the Plancherel measure and log-concavity properties of Schur polynomials. Techniques we develop to characterize the lower tail are rather flexible and have the potential to generalize to other solvable growth models.

§ INTRODUCTION

§.§ The Model and Main Results

In this paper, we consider the q-Deformed Polynuclear Growth, or q-PNG in short, which is a growth process introduced rather recently <cit.>. The q-PNG is a solvable one-parameter deformation of the more well-studied Polynuclear Growth model (PNG), famously analyzed by Prähofer and Spohn <cit.> to characterize universal processes of growing interfaces (i.e. the Airy_2 process) and also more recently by Matetski-Quastel-Remenik <cit.> for its connection to integrable systems. The literature on the PNG is somewhat broad, given its well understood connections with Ulam's problem, determinantal point processes, free fermions, Bethe Ansatz, and last passage percolation, to name a few (see <cit.>). On the other hand, the solvability structures of the q-deformation of the PNG we consider here are only recently starting to emerge: this paper makes advancements in this direction, while in parallel establishing its probabilistic properties.

§.§.§ The model

In the q-PNG the height function 𝔥(x,t) is a piecewise constant function taking integer values and with jump discontinuities of unit size. Such a height function can be seen as the profile created by stacking on top of each other islands of unit thickness. As time moves forward, islands expand laterally at a constant speed, which we assume to be unitary. This lateral expansion might lead two islands to collide: in this situation, the two colliding islands merge and, with probability q∈(0,1), a new island of infinitesimal width is created on top of the contact point. Moreover, new islands of infinitesimal width randomly appear on the surface, and these additional “nucleation" events follow a Poisson point process in space and time of intensity Λ >0. A snapshot of the evolution of the height function 𝔥 is given in <ref>. A special choice of the initial condition, which is the one we focus on in this paper, is given by setting 𝔥(x,t=0) = 0 if x=0, -∞ if x ≠ 0. This is referred to as the droplet initial condition.
In place of this rather informal description of the process, we might take a different perspective and draw in the (x,t) plane the trajectories of up and down unit jumps of the height 𝔥. Consider the half plane ℝ×ℝ_+ and on the cone 𝒞={ (x,t) : |x|≤ t} sample a Poisson process 𝒫 = { (x_i,t_i) }_i ≥ 0 of intensity Λ>0. Every point p∈𝒫 emanates two rays, one directed north-eastward and the other directed north-westward respectively with angles 45^∘ and 135^∘. Whenever two rays emanating from different vertices intersect, with probability 1-q they terminate at the intersection point, and with probability q they cross each other and continue along their trajectory. To any point (x,t)∈𝒞 we associate the height 𝔥(x,t) as the number of rays that intersect with a straight segment from (x,t) to (0,0). By agreement we set 𝔥(x,t)=-∞ if (x,t) ∉𝒞. This procedure is depicted in <ref>. §.§.§ Main results Recently several properties of the q-PNG with droplet initial condition have been established. In the original paper <cit.>, Aggarwal, Borodin, and Wheeler characterized the large time fluctuations of the height around the expected value, which obey the Tracy-Widom GUE law <cit.> as lim_t^2-x^2→ +∞ℙ( 𝔥(x,t) - v_Λ,q (t^2-x^2)^1/2/σ_Λ,q (t^2-x^2)^1/6≤ s ) = F_GUE(s), for v_Λ,q = Λ/(1-q) and σ_Λ,q = √(Λ/2(1-q) ). Another interesting result from <cit.> is that, under a proper scaling in space and time, the height function 𝔥 converges, as q→ 1, to the solution of the KPZ equation <cit.> in the sense of 1-point distribution. Later, Drillick and Lin <cit.> established the strong law of large numbers for by constructing a colored version of the model[Both in <cit.> and <cit.> authors denote the deformation parameter by t, while we use the letter q.]. In this paper, we consider another important question, from the probabilistic standpoint, which is that of characterizing the probability of rare events for the growth of the height function. While restricting ourselves to the particular case of droplet initial condition, we focus on two particular, although very natural, questions: determining the decay of large deviation events where the height function assumes values, larger or smaller (of order t) respectively, than those prescribed by the law of large numbers. Intuitively, upper- and lower-tail events should possess different decay rates. In fact, for the height 𝔥(x,t) function to assume values larger than the expected value it is sufficient to require that nucleation events at location x take place at an unusually high rate along the time window [0,t]: this suggests that ℙ(𝔥(x,t) > (v_Λ,q+ξ)t ) =e^-O(t). On the other hand for the height function 𝔥(x,t) to assume values smaller, of order t, than the expected value, we need to require that through the entire backward light cone { (y,s) : |x-y| < t-s } nucleations have taken place with unusually slow rate, suggesting that ℙ(𝔥(x,t) < (v_Λ,q-ξ)t ) = e^-O(t^2), where t^2 signifies the area of the backward light cone. The following theorems confirm these qualitative heuristics, quantifying explicitly the decay rate functions. Let 𝔥 be the height function of the q-PNG with intensity Λ=2(1-q) and droplet initial condition. Then, for μ≥ 2 we have -lim_t→∞1/tlog((0,t) ≥μ t) = Φ_+(μ), where the rate function is Φ_+(μ) = 2μ arccosh(μ/2)-2√(μ^2-4). Let 𝔥 be the height function of the q-PNG with intensity Λ=2(1-q) and droplet initial condition. 
Then, for μ∈ [0,2], we have -lim_t →∞1/t^2logℙ( 𝔥(0,t) ≤μ t ) = Φ_-(μ), where the rate function Φ_-: ℝ→ℝ∪{+∞} is a weakly decreasing, convex function, continuous on [0,∞), with Φ_-(μ)=+∞ for μ<0, Φ_-(0)=1-q, and Φ_-(μ)=0 for μ≥ 2. Furthermore, it possesses the following expression: Φ_-(μ) = sup_y∈ℝ{ℱ(y) - (log q^-1) (μ-y)^2/2 }, where the function ℱ is described in <ref>. A few remarks related to the above theorems are in order. * The intensity Λ of the q-PNG is assumed to be 2(1-q) for convenience. Under this intensity, we have the law of large numbers lim_t→∞𝔥(0,t)/t=2, which is free of q. The large deviation principle (LDP) for the general intensity case can be obtained with minor modifications in our arguments. * In <ref>, we show that for each t>|x|>0, we have the following equality in distribution (in the sense of one-point marginals): 𝔥(x,t) d= 𝔥(0,√(t^2-x^2)). This allows us to derive LDPs for the height function of a q-PNG model at a general location from our main theorems. * Note that the upper-tail rate function Φ_+ is free of q and, in fact, matches the upper-tail rate function for the (q=0) PNG model obtained in <cit.>. This matching of the upper-tail rate function of a positive temperature model with its zero-temperature counterpart was observed in the context of the KPZ equation <cit.> and ASEP <cit.> as well. A geometric explanation for this matching can be given using the theory of Gibbsian line ensembles. We refer to the recent work of Ganguly and Hegde <cit.> where this line ensemble approach was carried out successfully in obtaining sharp estimates on the upper tail of the KPZ equation and other general models under certain assumptions. The lower-tail rate function, however, depends on the positive temperature parameter (see <ref>).

§.§.§ Description of ℱ

The explicit characterization of the function ℱ requires the introduction of several objects. Throughout the paper, we will use the notation :=log q^-1. We define the set 𝒴_1 := { h:ℝ→ℝ_+: h is 2-Lipschitz, non-decreasing on ℝ_-, weakly decreasing on ℝ_+ and ‖h‖_L^1=1} and the functional 𝒲^(q)(κ,h;x) := 1+κlogκ + κ𝖩_(h;x/√(κ)), for κ > 0, h∈𝒴_1, x∈ℝ, where 𝖩_η(h;y) := -1/2 + η[-y]_+^2/2 + 1/2‖ϕ_VKLS - h‖_1^2 + 2 ∫_ℝ h(ξ) ( 1_[√(2),+∞)(|ξ|) arccosh|ξ/√(2)| + (η/2) 1_[y/√(2),+∞)(ξ) ) dξ. Above, the norm ‖·‖_1 is the Sobolev H^1/2 norm defined in <ref>, the function ϕ_VKLS∈𝒴_1 is the Vershik-Kerov-Logan-Shepp optimal shape, explicitly given in <ref> in the text, and [a]_+:=max{a,0}. We also define, for any fixed x, the minimum of the functional 𝒲^(q) as ℱ(x) := inf_κ > 0inf_h ∈𝒴_1𝒲^(q)(κ,h;x). The following theorem describes some of the main properties of ℱ we prove in this paper. For any q∈(0,1), the function ℱ(x) is weakly decreasing, non-negative, convex, and continuously differentiable, with derivative ℱ'(x) being -Lipschitz. Moreover, there exists x_q∈ℝ such that ℱ(x) = (1-q) + x^2/2 for x≤ x_q. Furthermore, we have ℱ(x) = inf_y∈ℝ{Φ_-(y) + /2 (x-y)^2 }. The functional 𝒲^(q) possesses an alternative representation reported below in (<ref>). The function ℱ is the Moreau envelope of the lower-tail rate function Φ_- with parameter 1/. The Moreau envelope of a function f with parameter λ is defined as the infimal convolution of f with x^2/2λ <cit.>. It has applications in convex and variational analysis (see e.g. <cit.>) and can be understood as a general mechanism to smoothen convex functions. In <ref> we conjecture an explicit formula for the function ℱ, given in terms of a solution of a certain non-linear differential equation.
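As a quick numerical consistency check of the upper-tail rate function above (our own illustrative snippet, not part of the paper), one can verify that Φ_+ agrees with the Legendre-Fenchel transform of p ↦ 4 sinh(p/2), the Lyapunov exponent obtained in the proof ideas below:

```python
import numpy as np

def phi_plus(mu):
    """Upper-tail rate function from the theorem above."""
    return 2 * mu * np.arccosh(mu / 2) - 2 * np.sqrt(mu**2 - 4)

def legendre_of_lyapunov(mu, p_grid=np.linspace(0.0, 20.0, 200001)):
    """sup_p { p*mu - 4*sinh(p/2) }, evaluated on a fine grid of p."""
    return np.max(p_grid * mu - 4 * np.sinh(p_grid / 2))

for mu in [2.5, 3.0, 5.0, 10.0]:
    print(mu, phi_plus(mu), legendre_of_lyapunov(mu))   # the two columns agree
```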
§.§ Proof Ideas

In this section, we describe the key ideas behind the proofs of our main theorems.

§.§.§ Connections between different solvable models

Our proof is facilitated by the connections between the q-PNG and two important measures on the set of integer partitions: the Poissonized Plancherel measure and the cylindric Plancherel measure. We observe that the height function, after a random shift by χ, a q-geometric random variable (see (<ref>)), is equal in distribution to the largest row of a random partition distributed according to a cylindric Plancherel measure. This distributional identity is proven in <ref> and essentially comes from a more general result established first in <cit.>. The cylindric Plancherel measure and the more general Periodic Schur measure were introduced in the seminal paper by Borodin <cit.>. A remarkable property of these measures, as observed by Borodin, is that upon a random shift by S_ζ, a Theta(q,ζ)-distributed random variable (see (<ref>)), they become determinantal point processes (<ref>). The explicit correlation kernel 𝖪 (defined in (<ref>)) for the S_ζ-shifted cylindric Plancherel measure allows us to derive Fredholm determinant formulas for the one-point probability distribution of the height function after the combined (χ+S_ζ)-shift: ℙ(𝔥(0,t)+χ+S_ζ≤ s) = ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))(λ_1+S_ζ≤ s) = det( 1 - 𝖪_ζ,t(1-q))_ℓ^2(s+1/2, s+3/2,…). On the other hand, the one-point probability distribution of the (χ+S_ζ)-shift of the height function 𝔥 is also equal to the expectation of a certain multiplicative functional of the Poissonized Plancherel measure, derived in <cit.> as a special case of a more general result by Borodin <cit.>, and also follows from the results in <cit.> (see <ref>). This leads to a second formula for the probability distribution of the (χ+S_ζ)-shifted height function 𝔥: ℙ(𝔥(0,t)+χ+S_ζ≤ s) = ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))(λ_1+S_ζ≤ s) = 𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i ≥ 1 1/(1+ζ q^{s+i-λ_i}) ). These formulas form the starting point of our analysis towards large deviation results for the q-PNG.

§.§.§ Upper-Tail

In this subsection, we present a brief sketch of the proof for the upper tail. The main component of our proof is the Fredholm determinant formula from (<ref>). Recall that a Fredholm determinant can be defined as a series: det(I-𝖪_ζ,t(1-q)) = 1 - tr(𝖪_ζ,t(1-q)) + ∑_L=2^∞ (-1)^L tr(𝖪_ζ,t(1-q)^∧ L), where the notation 𝖪_ζ,t(1-q)^∧ L comes from the exterior algebra definition (see <ref>). We shall collectively refer to the L≥ 2 part of the series as the “higher-order term”. In the upper-tail regime, we expect the Fredholm determinant to behave perturbatively, i.e., the leading order of 1-det(I-𝖪_ζ,t(1-q)) is given by the trace term tr(𝖪_ζ,t(1-q)). Indeed, direct analysis of the trace should yield -lim_t→∞1/tlogℙ(𝔥(0,t)+χ+S_ζ≥μ t) = Ψ_+(μ), where Ψ_+(μ) := Φ_+(μ) for μ∈ [2, q^1/2+q^-1/2] and Ψ_+(μ) := (log q^-1)μ + 2q^1/2 - 2q^-1/2 for μ∈ [q^1/2+q^-1/2, ∞), with Φ_+ defined in (<ref>). However, it is not straightforward to extract the upper-tail rate function from the above result. From the precise tail behavior of χ+S_ζ (notice that χ has an exponential right tail and no left tail, while S_ζ has Gaussian left and right tails), the previous limit suggests that, assuming that the upper-tail rate function Φ_+ of 𝔥 exists, Φ_+ can be found as a solution of the relation Ψ_+(μ) = inf_p∈ℝ{Φ_+(μ-p)+ℒ(p) }, where ℒ(p):=max{p,0}·log q^-1. By <cit.>, taking the Legendre transform of both sides we find that Ψ_+^*=Φ_+^*+ℒ^*. Since Ψ_+ and ℒ are linear with slope log q^-1 after a certain point, we have Ψ_+^*(x)=ℒ^*(x)=∞ for all x>log q^-1. Thus for all x>log q^-1, any value Φ_+^*(x) ∈ (-∞,∞] satisfies Ψ_+^*(x)=Φ_+^*(x)+ℒ^*(x).
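For the reader's convenience, the elementary computation behind this last assertion is recorded below (a side remark of ours, writing γ for the constant log q^-1 used above):

```latex
% With \gamma := \log q^{-1} and \mathcal{L}(p) = \gamma\,[p]_+ as above,
\mathcal{L}^*(x) \;=\; \sup_{p \in \mathbb{R}} \bigl\{ x p - \gamma \max\{p,0\} \bigr\}
\;=\;
\begin{cases}
  0, & 0 \le x \le \gamma, \\
  +\infty, & \text{otherwise.}
\end{cases}
% Hence the identity \Psi_+^* = \Phi_+^* + \mathcal{L}^* constrains \Phi_+^* only on [0,\gamma],
% i.e. it determines \Phi_+ only up to slope \gamma.
```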
This indicates that the above deconvolution problem does not have a unique solution. Indeed, there are infinitely many proper convex closed functions, including the function Φ_+ defined in (<ref>), that all satisfy the above deconvolution problem. This suggests that direct analysis of the Fredholm determinant would not produce the exact rate function. To bypass the above problem, we utilize the Fredholm determinant in a different way, following the strategy of <cit.>. First, using the explicit distribution formula for χ+S_ζ, one can check that ℙ(𝔥(0,t)+χ+S_ζ/√(q)≤ 0) is equal to a certain q-Laplace transform of 𝔥(0,t), defined as 𝔼[𝖥_q(ζ q^{-𝔥(0,t)})], where 𝖥_q(ζ):=∏_k≥ 0 (1+ζ q^k)^-1. This q-Laplace transform may then be inverted in the following way to extract a formula for the moment-generating function of 𝔥(0,t): 𝔼[e^{p𝔥(0,t)}] = ( (-1)^n∫_0^∞ζ^-α ∂^n/∂ζ^n 𝔼[𝖥_q(ζ q^{-𝔥(0,t)})] dζ ) / ( (-1)^n∫_0^∞ζ^-α𝖥_q^(n)(ζ) dζ ), for p>0, where n:=⌊ p/log q^-1⌋+1 and α:= p/log q^-1-⌊ p/log q^-1⌋. The (-1)^n factor above ensures that both the numerator and the denominator are positive. The above formula is not hard to verify via Fubini's theorem and properties of 𝖥_q <cit.>. Armed with the Fredholm determinant formula from (<ref>), we utilize the above relation to compute an exact expression for the p-th Lyapunov exponent of 𝔥(0,t), which is the limit of the logarithm of 𝔼[e^{p𝔥(0,t)}] scaled by time: lim_t→∞1/tlog𝔼[e^{p𝔥(0,t)}]=4sinh(p/2). The upper-tail rate function can then be computed from the Lyapunov exponent by a standard Legendre-Fenchel transform technique. Let us now mention briefly how we derive (<ref>) from (<ref>). We focus on the right-hand side of (<ref>). The denominator does not depend on t and vanishes in the 1/t log limit. For the numerator, we wish to plug in the Fredholm determinant formula from (<ref>) (with ζ↦ζ/√(q)) and analyze the derivatives of the Fredholm determinant series. However, direct analysis of these derivatives is still quite delicate, as the derivatives of the higher-order term (which also has a similar series representation) have an oscillatory behavior for large values of ζ. To circumvent this issue, we first split the range of integration in the numerator into two parts, [0,] and [,∞), based on a carefully chosen threshold (see (<ref>)). Using the decay properties of 𝖥_q and the precise value of this threshold, the integral over the range [,∞) can easily be shown to be subdominant. For the integral over the range [0,], we now feed in the Fredholm determinant formula and write the integral as a sum of the following two terms: (-1)^n+1∫_0^ ζ^-α ∂^n/∂ζ^n tr(𝖪_ζ/√(q),t(1-q)) dζ and (-1)^n∫_0^ ζ^-α ∂^n/∂ζ^n ∑_L=2^∞ (-1)^L tr(𝖪_ζ/√(q),t(1-q)^∧ L) dζ. Relying on the explicit expression of the kernel, which involves Bessel functions, we develop precise estimates for the traces of the kernel and its derivatives in <ref>. This allows us to show that the first term above yields the correct Lyapunov exponent whereas the second term is subdominant.

§.§.§ Lower-Tail

For the characterization of lower-tail probabilities, rather than the explicit Fredholm determinant expression for the law of the height 𝔥, we use its connections with the Poissonized Plancherel and cylindric Plancherel measures. We start by analyzing the multiplicative functional formula from (<ref>). The fact that it is worthwhile to probe into multiplicative functional formulas to derive lower-tail asymptotics for KPZ models was first noted by Corwin and Ghosal <cit.>, who obtained sharp lower-tail estimates for the KPZ equation. But, as we will see below, in our case, the analysis of the right-hand side of (<ref>) alone only gives us partial information.
For a complete characterization of the lower-tail LDP, we will need new ideas based on the distributional equality between the law of the height 𝔥 and the first row of the cylindric Plancherel measure. Utilizing the precise form of the functional in (<ref>) and leveraging regularity properties of the Poissonized Plancherel measure allow us to to establish the large deviation principle -lim_t→ +∞1/t^2logℙ((0,t) + χ + S_1 ≤ xt) = ℱ(x), where ℱ(x) is defined in (<ref>). Observing the structure of the functional 𝒲^(q) defined in (<ref>) we see that the q-dependent term /2[-y]_+^2 + ∫_y/√(2)^+∞h(ξ) ξ originates from the limit of the product ∏_i≥ 1(1+q^s+i-λ_i)^-1 appearing in the right-hand side of (<ref>), while the remaining terms come from a Poissonization of the Vershik-Kerov limit of the Plancherel measure <cit.>. From the precise tail behavior of χ+S_1, the previous limit suggests that, (a) assuming that the lower-tail rate function Φ_- of 𝔥 exists, Φ_- can be found as a solution of the relation ℱ(x) = min_y∈ℝ{Φ_-(x-y) + y^2/2}. Unlike the case of the upper-tail discussed around (<ref>), because of the presence of the quadratic term y^2/2, the problem (<ref>), admits a unique solution, (b) assuming that ℱ, Φ_- are proper convex and closed. The solution then is expressed <cit.> as Φ_-(μ) = (ℱ^*-g^*)^*(μ) = sup_y∈ℝ{ℱ(y) - η_q/2(μ-y)^2 }. Here, again, the superscript ^* denotes the Legendre transform. To rigorously support the aforementioned argument, it is necessary to establish the two assumptions (a) and (b) made earlier. This is where we rely on several innovative ideas originating from the asymptotic results for the Poissonized Plancherel measure and log-concavity properties of Schur polynomials. We first look for regularity properties of the rate function ℱ. These regularity properties will allow us to first identify uniquely the function Φ_- and further to prove that the lower-tail large deviation function for exists and it is given by Φ_-. To this end, we find the distributional equality between the height function 𝔥 and the cylindric Plancherel measure to be crucial. In fact, using log-concavity properties of the (skew) Schur polynomials proved by Lam-Postnikov-Pylyavyskyy <cit.> we are able to conclude, through a Laplace principle-type arguments, that both ℱ and (any possible) Φ_- are necessarily convex. The final important ingredient to the proof of <ref> and hence of <ref> is the explicit characterization of the function ℱ(x) as x→ - ∞. The parabolic behavior (<ref>) of the function ℱ(x) for x negative sufficiently large follows from an exact minimization of the functional 𝒲^(q), owing to classical results of Vershik-Kerov <cit.>, Logan-Shepp <cit.> along with further non-trivial inequalities we establish. Interestingly, this explicit knowledge of ℱ for large negative x can be leveraged to extract uniform equicontinuity bounds of the pre-limit lower-tail probabilities for the height function 𝔥; see <ref> in the text. Then, a compactness argument, in the style of Arzelá-Ascoli theorem, allows us to conclude that the limit in the left-hand side of (<ref>) exists and hence, by its convexity and Lipschitz properties, is equal to the infimal deconvolution between the function ℱ and a parabola, as in (<ref>). 
One of the key novelties of this paper is that our methods to establish large deviation principles for lower-tail events are rather flexible and we envision them to be adaptable to other interesting situations such as ASEP or stochastic six vertex model, where similar algebraic structures are present. The fact that the function ℱ is the Moreau envelope of a convex, closed function, i.e. Φ_- implies, thanks to results from convex analysis <cit.>, that ℱ is also continuously differentiable. This is remarkable because only little explicit knowledge of the function ℱ is required to establish this sort of regularity. In the presence of further regularity of Φ_-, namely it being C^2, we could find an explicit expression for ℱ (which would become C^2 in a half line [μ_q,+∞), from properties of the Moreau envelope), which would follow from solving a certain highly non-linear differential equation derived (under the assumption that ℱ∈ C^2) in <ref>. In general the presence of a rich mathematical structure around the study of the q-PNG invites to several alternative approaches, which we do not pursue in this paper. We collect further directions, which include connections to potential theory and Riemann-Hilbert method, approaches through connections to dimer models, or approaches through probabilistic-geometric methods from the theory of last passage percolation in <ref>. §.§ Comparison with other large deviations results for growth processes. In this section we review some of the results and available techniques to solve LDP problems for KPZ models. §.§.§ Upper-Tail LDP literature Zero-temperature models, such as PNG, Totally Asymmetric Simple Exclusion Process (TASEP), and first and last passage percolation (LPP), have sub- or super-additive structures. The existence of the upper-tail rate function in these models can be deduced easily from a standard subadditive argument <cit.>. Thus, it is the explicit form of the upper-tail rate function that is interesting in these models. For the PNG model, i.e., q-PNG with q=0, <cit.> first computed the upper-tail rate function explicitly. Seppäläinen's proof was based on a coupling between the superadditive process with a suitable particle system that admits certain known stationary initial conditions. This general coupling approach to extract the upper-tail rate function was later applied in other models such as TASEP <cit.>, LPP with inhomogeneous exponential weights <cit.>, LPP in the Bernoulli environment <cit.>, and Brownian LPP <cit.>. A different proof for the PNG upper-tail rate function was given by <cit.> based on Young diagrams. The LDP problem for TASEP (equivalently Exponential LPP) was also explored by Johansson in his seminal paper <cit.>. He related TASEP to the largest eigenvalue for the Laguerre unitary ensemble, which is a particular example of continuous Coulomb gas that arises in random matrix theory. Leveraging potential theory tools, Johansson developed a comprehensive framework for obtaining upper- and lower-tail LDP results for both discrete and continuous Coulomb gases. This framework has been successfully extended to integrable discretizations of Coulomb gases as well <cit.>. Over the past two decades, it has been discovered that some of the solvable models mentioned above possess determinantal structures. A direct perturbative analysis of the Fredholm determinants provides an alternative route to extract the one-point upper-tail rate function in those models. 
In a more recent work, <cit.> employed exact solvability to prove multi-point LDP for TASEP. In the case of positive temperature models, the available techniques are limited and even showing the existence of the rate function is highly nontrivial. Nonetheless, in certain instances, the techniques used in zero-temperature models can be applied to specific positive temperature models that possess a rich structure. Two such models are log-gamma polymer <cit.> and O'Connell-Yor polymer <cit.>. Indeed, exploiting the fact that these polymer models can be coupled with their stationary counterparts, the approach in <cit.> was successfully implemented to obtain the upper-tail rate function in these models <cit.>. The Lyapunov exponent approach that we used here to obtain the upper-tail rate function for q-PNG was first implemented in the context of the KPZ equation <cit.> with droplet initial condition (see <cit.> for earlier physics works, and see the rigorous work <cit.> for general initial data). The scheme was also later utilized in extracting upper-tail rate functions for half-space KPZ <cit.> and ASEP <cit.> (see <cit.> for earlier work on one-sided upper-tail estimates for ASEP). Lastly, we mention the recent paper by <cit.> which utilizes the line ensemble framework to address the upper-tail problem for the KPZ equation and related zero-temperature models satisfying suitable hypotheses. In fact, their techniques also provide a one-sided multi-point upper-tail LDP for the KPZ equation (see also <cit.> for a multi-point upper-tail LDP for the KPZ equation in a different regime). Although the approach presented in <cit.> shows promise for application in other solvable models, many of the inputs of their paper have not yet been established for those models. §.§.§ Lower-Tail LDP literature The explicit study of lower-tail probability for solvable growth processes in the KPZ class is harder, from a methodological perspective than that of fluctuations or upper-tail large deviations. The reason for this is that explicit formulas such as Fredholm determinant representations for the probability distribution of the height function have oscillatory behavior in this regime and hence are hard to analyze in this particular setting. The first explicit result concerning the evaluation of the lower-tail rate function for a growth process is found in the seminal paper by Logan and Shepp <cit.>, where authors, using potential theoretic arguments, derived an upper bound (which easily becomes a sharp equality <cit.>) for the probability law of the longest increasing subsequence of a random permutation. The calculations presented in <cit.> (which in the paper are attributed to B.F. Logan) can be also used to derive explicitly the lower-tail of the PNG <cit.>, through a simple Poissonization argument. Large deviations of the TASEP were proved in the seminal paper by Johansson <cit.>, where lower-tail were expressed in terms of a variational problem. The connection between TASEP and the edge of Laguerre Unitary Ensemble offers another route to derive the explicit lower-tail rate function borrowing results of <cit.>. For first passage percolation, <cit.> proved a t^2 speed one-sided LDP using geometric techniques. More recently, considerable attention has been given to the characterization of rare events for the KPZ equation height function. 
The approach of using the multiplicative functional formulas to study lower-tail asymptotics was first done in <cit.> where the authors established sharp lower-tail estimates for the KPZ equation under the droplet initial condition. The full lower-tail LDP was resolved in <cit.>, proving conjectures from the physics literature <cit.>. The different routes used in these physics works along with Tsai's approach were later shown to be closely related in <cit.>. A Riemann-Hilbert approach presented later in <cit.>, highlights similarities between the lower-tail problem for the KPZ equation and that of zero temperature models. A key step to the derivation of the rate function amounts to finding a g-function (in the jargon of the Riemann-Hilbert Problem), which solves a one-cut singular integral equation (i.e. the support of g is connected). Besides the above methods, the physics work <cit.> discusses another route to obtain LDP for the KPZ equation by using its connection to KP equation <cit.>. For the O'Connell-Yor and log-gamma polymer models, recently Landon and Sosoe <cit.> developed a systematic approach to obtaining lower tail estimates by combining exact formulas and coupling arguments (adapted from <cit.>) and geometric arguments (adapted from <cit.>). While their results provide the correct cubic-order exponents in the moderate deviation regime, it is not clear how to extend them to the large deviation regime. Before our work, the lower-tail LDP problem had been successfully solved only for the KPZ equation among the positive temperature models. Compared to the existing literature, our situation, from a technical standpoint, is considerably different. As defined in (<ref>) the rate function ℱ is found through a minimization of a quadratic functional: in this case, the explicit minimizer could be found as the solution of a multi-cut Cauchy integral equation, which gives rise to complicated transcendental relations seemingly hard to manipulate (this is discussed in <ref>). Without an explicit characterization of the optimizer of the quadratic functional 𝒲^(q), we find through the use of log-concavity properties of Schur polynomials, that the function ℱ is strictly convex, which allows us to compute rigorously its deconvolution and to establish large deviation principle, hence avoiding to rely on explicit knowledge of ℱ. §.§ Organization The rest of the article is organized as follows. In <ref>, we describe the connections between q-PNG and other solvable models of interest. The upper-tail LDP and lower-tail LDP are proven in <ref> and <ref> respectively. In <ref>, we discuss possible different approaches that could lead to an explicit characterization for the lower-tail rate function. §.§ Notation and Conventions Throughout the paper we fix a q∈ (0,1) and use the notation :=log q^-1. We use (x,y,z,…)>0 to denote a generic deterministic positive finite constant that is dependent on the designated variables x,y,z,…. #A denotes the cardinality of a finite set A. §.§ Acknowledgments We thank Ivan Corwin for his feedback on an earlier draft of the paper. MM thanks Mattia Cafasso and Giulio Ruzza for several comments about the results of this paper and Vadim Gorin for suggesting references about the problem of equilibrium measure with singular potentials. SD's research was partially supported by Ivan Corwin's NSF grant DMS-1811143, the Fernholz Foundation's “Summer Minerva Fellows” program, and also the W.M. Keck Foundation Science and Engineering Grant on “Extreme diffusion”. 
The work of MM has been supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101030938. The work of YL was supported by the EPSRC grant EP/R024456/1. § Q-DEFORMED POLYNUCLEAR GROWTH AND OTHER SOLVABLE MODELS In this section, we show how the q-Deformed Polynuclear Growth is related to several other solvable models in probability and algebraic combinatorics. In <ref> and <ref>, we connect q-PNG to q-PushTASEP and Cylindric Plancherel measures respectively. In <ref>, we describe a sampling procedure for cylindric Plancherel measure which lead us to derive crucial moment bounds for the observables in that model. Finally, in <ref> we derive exact formulas descending from the above connections. These formulas will be a central tool for our analysis of probabilities of tail events in <ref>. §.§ q-PNG as a limit of the q-PushTASEP The scope of this subsection is to relate the q-PNG described in the introduction with another solvable model, the q-PushTASEP <cit.>. This connection had been predicted in <cit.>, where the q-PNG was defined as a scaling limit of another stochastic model. The q-PushTASEP is a discrete-time interacting particle system whose distribution is related to the q-Whittaker measure <cit.>. In order to describe the model we need to define a special probability distribution that generalizes the beta binomial law, given next. Here and below we will make use of the notion of q-Pochhammer symbol (z;q)_n := (1-z) (1-q z) ⋯ (1-q^n-1 z), for z∈ℂ and n∈ℕ∪{+∞}. The q-deformed beta binomial distribution φ_q,μ,ν(· |m) is given by φ_q,μ,ν(k|m) = μ^k (ν/μ;q)_k(μ;q)_m-k/(ν;q)_m(q;q)_m/(q;q)_k (q;q)_m-k, k∈{0,…,m}. By virtue of the q-Chu-Vandermonde identity <cit.> we have ∑_k=1^m φ_q,μ,ν(k|m)=1, whenever the sum converges. Moreover, several choices of q,μ,ν guarantee that φ_q,μ,ν(k|m) is positive. A relevant specialization of the q-deformed beta binomial distribution comes from setting ν=0, m=+∞. We say that a random variable Y has q-Geometric distribution of parameter μ∈ (0,1) if ℙ(Y = k) = φ_q,μ,0(k|+∞) and in this case we write Y ∼ q-Geo(μ). We are now ready to define the q-PushTASEP. Let N ∈ℕ and consider locations y_1>⋯>y_N with y_i ∈ℤ. The N-particles q-PushTASEP is the process {𝗑_k(T):k=1,…,N, T=1,2,…}, where * at time T=0, we have 𝗑_k(0) = y_k, for k=1,…,N; * at time T the k-th particle evolves as 𝗑_k(T) = 𝗑_k(T-1) + 𝖩_k,T + 𝖯_k,T, where 𝖩_k,T∼ q-Geo(a) with a>0 and 𝖯_k,T∼φ_q^-1,q^x_k(T-1)-x_k-1(T-1),0(· | 𝗑_k-1(T) - 𝗑_k-1(T-1)). Here we assume 𝗑_0(T)=-∞ by convention. When y_i=i for i=1,…,N we call this the q-PushTASEP with step initial condition. The q-PushTASEP model is of particular interest as it degenerates to various well-known models. Indeed, in <cit.>, it was shown that taking q→ 1 limit of the model, one obtains the log-gamma polymer model <cit.>, the only known solvable vertex-disorder discrete polymer model. The log-gamma polymer model itself degenerates to continuum directed random polymer under intermediate disorder regime <cit.> whose free energy is given by the KPZ equation <cit.>. Presently, we describe how q-PushTASEP degenerates to q-PNG as well. To this end, we first construct a vertex model from the q-PushTASEP particle system (see <cit.> also). At each location (k,T)∈ ([1,N]∩)×, consider the random variables J= 𝗑_k-1(T) - 𝗑_k-1(T-1), J'= 𝗑_k(T) - 𝗑_k(T-1), G= 𝗑_k(T-1)-𝗑_k-1(T-1), G'=𝗑_k(T)-𝗑_k-1(T). 
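Before reinterpreting these increments as a vertex model, we record a short numerical sketch of the two distributional ingredients entering the q-PushTASEP update rule above: the q-deformed beta binomial weights (taken here with ν=0 and μ∈(0,1), a range in which all weights are manifestly positive) and the q-Geometric law of the jumps 𝖩_{k,T}. The parameter values are arbitrary, and the truncation of the q-Geometric support is only a numerical convenience.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.3

def qpoch(z, q, n):
    """Finite q-Pochhammer (z; q)_n = (1 - z)(1 - zq)...(1 - zq^{n-1})."""
    return np.prod([1.0 - z * q ** i for i in range(n)])

def phi_weights(q, mu, m, nu=0.0):
    """Weights phi_{q,mu,nu}(k | m), k = 0, ..., m, of the q-deformed beta binomial."""
    return np.array([
        mu ** k * qpoch(nu / mu, q, k) * qpoch(mu, q, m - k)
        * qpoch(q, q, m) / (qpoch(q, q, k) * qpoch(q, q, m - k))
        for k in range(m + 1)
    ])

# q-Chu-Vandermonde: the weights sum to one (checked here for nu = 0, mu in (0,1))
print(phi_weights(q, mu=0.45, m=7).sum())

def sample_q_geo(mu, q, rng, kmax=400):
    """Sample Y ~ q-Geo(mu): P(Y = k) proportional to mu^k / (q;q)_k (support truncated at kmax)."""
    pmf = np.array([mu ** k / qpoch(q, q, k) for k in range(kmax)])
    pmf /= pmf.sum()                      # renormalising makes the prefactor (mu;q)_infinity irrelevant
    return rng.choice(kmax, p=pmf)

print([sample_q_geo(0.6, q, rng) for _ in range(8)])
```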
We interpret J and G to be the number of arrows entering from below and left into (k,T) respectively, and J' and G' to be the number of arrows exiting above and to the right from (k,T) respectively (see <ref>). Note that the arrows satisfy certain conservation property: J-G=J'-G'. Let us write 𝖱(j,g;j',g'):=[J'=j',G=g' | J=j,G=g ]. We have 𝖱(j,g;j',g') = ∑_k = 0^j'φ_q^-1,q^g,0(j'-k|j) a^k/(q;q)_k (a;q)_∞. Considering the specialization a=ε^2 θ^2 we see that, up to order ε^2, the relevant cases are when j,g,j',g'∈{0,1} and we find 𝖱(0,0;0,0) , 𝖱(1,0;1,0) , 𝖱(0,1;0,1) = 1+O(ε^4). Thus most of the vertices in the lattice will have no arrows. If there is a single arrow entering, it exits in the same direction with high probability. However, note that 𝖱(1,1;0,0) = 1-q + O(ε^2), 𝖱(1,1;1,1) = q + O(ε^2). Thus if two arrows enter at a point, they pass through each other with a probability close to q and annihilate each other with a probability close to 1-q. Finally, we observe 𝖱(0,0;1,1) = ε^2 θ^2/1-q + O(ε^4). Thus with O(^2) probability a pair of exiting arrows is created, or nucleates. Taking k,T of the order ^-1, we thus expect O(1) many nucleations, and they are distributed according to a Poisson point process in the ↓ 0 limit. Given the definition of q-PNG from <ref>, the above heuristic computation suggests that upon a 45^∘ rotation, the above-constructed vertex model is converging to q-PNG in the ↓ 0 limit. Let {𝗑_k(T):k=1,…,N, T=1,2,…} be the q-PushTASEP with step initial condition. Then, under the scaling a=ε^2 θ^2, N = ⌊ε^-1 2^-1/2 (t+x) ⌋, T = ⌊ε^-1 2^-1/2 (t-x) ⌋, we have, as a process in (x,t) 𝗑_N(T) - N 𝔥(x,t) in the sense of distribution w.r.t the uniform-on-compact topology. On the right-hand side 𝔥 denotes the height function of the q-PNG with intensity Λ=θ^2/(1-q) and droplet initial condition. A complete proof of <ref> can be adapted from the proof of <cit.> given in Appendix B of the same paper. To avoid repetition, we direct the readers to <cit.> for details. §.§ Relationships between q-PNG and Cylindric Plancherel measures The connection between the q-PNG and the q-PushTASEP elaborated in the previous subsection allows us to establish a relation between the q-PNG and cylindric Plancherel measure. This is because of a more general connection relating the periodic Schur measure <cit.> and the q-PushTASEP discovered in <cit.>. In order to discuss these developments we need to introduce some notation. A partition λ is a weakly decreasing sequence of non-negative integers λ_1 ≥λ_2 ≥⋯ which eventually become zero λ_m=λ_m+1=⋯=0. The size of a partition λ is the sum of its elements |λ|=λ_1+ λ_2 + ⋯. For any n∈ℕ∪{0} we denote 𝗉_n = #{λ partition : |λ|=n}. It is well-known (see <cit.> for example) that 𝗉_n is of exponential order in the square root of n. Indeed, there exists a constant C>0 such that 𝗉_n ≤ e^C√(n) holds for all n. Graphically a partition is represented by its Young diagram, obtained drawing, one above the other, left justified rows of cells of lengths given by elements λ_1,λ_2,… as done below in <ref>. We are going to identify a partition with its Young diagram and for us, the two notions will be equivalent. Given a partition λ we also define its transpose λ' = (λ_1',λ_2',…), where λ_i'=#{ j : λ_j ≥ i}; the Young diagram of λ' is obtained from that of λ by a reflection with respect to the diagonal. A partition ρ is contained in another partition λ if ρ_i ≤λ_i for all i=1,2,… and in this case we write ρ⊂λ. 
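The bound 𝗉_n ≤ e^{C√(n)} quoted above is easy to probe numerically: the sketch below computes 𝗉_n by the standard `parts of size at most k' dynamic program and monitors log𝗉_n/√(n). The constant π√(2/3)≈ 2.56 mentioned in the comment is the classical Hardy–Ramanujan constant and is quoted only as a point of comparison.

```python
import math

def partition_counts(N):
    """p[n] = number of integer partitions of n for 0 <= n <= N ("parts of size <= k" dynamic program)."""
    p = [1] + [0] * N
    for k in range(1, N + 1):             # allow parts of size k
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p

p = partition_counts(2000)
for n in (100, 500, 1000, 2000):
    # log p(n) / sqrt(n) stays bounded and approaches pi*sqrt(2/3) ~ 2.56 from below
    print(n, math.log(p[n]) / math.sqrt(n))
```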
The containment relation also implies that all cells of the Young diagram of ρ belong to that of λ. Whenever ρ⊂λ we define the skew partition λ/ρ which is the set of cells of the Young diagram of λ which do not belong to ρ. The size of a skew partition (or equivalently of a skew Young diagram) λ/ρ is |λ / ρ| = |λ| - |ρ|. Labeling cells of skew Young diagrams with natural numbers defines Young tableaux. We say that a Young tableau T of shape λ/ρ is partial if T is a labeling of cells of λ/ρ, where values are increasing row-wise and column-wise and each value appears at most once. A tableau T is standard if it is a partial tableau where values range in {1,… , |λ/ρ|}. Clearly, partial and standard Young tableaux are also defined when ρ = ∅, and in this case we say that their shape is straight (as opposed to skew). The content of a tableau T is the set of its labels cont (T) = { T(i,j) : (i,j) ∈shape(T) }. For a skew partition λ/ρ we define the number f^λ/ρ := #{ T : T is a standard Young tableau of shape λ/ρ}. Counting standard Young tableaux of a given shape has been a problem of great interest in algebraic combinatorics and representation theory. In the case of straight shapes the celebrated hook length formula <cit.> provides the closed expression f^λ = |λ|!/∏_c ∈λ h_λ(c), where h_λ(c)=λ_i - i + λ_j' -j +1 is the hook length of the cell c = (i,j) in λ. A similar formula is available for the number of standard Young tableaux of skew shape: f^λ/ρ = |λ/ρ|! ∑_C∈ℰ(λ/ρ)∏_c ∈λ∖ C 1/h_λ(c), where the sum is performed over ℰ(λ / ρ), the set of all excited Young diagrams of λ / ρ (which, for the sake of brevity, we will not define here). This formula was discovered by Naruse <cit.> and a proof can be found in <cit.>. There also exist other formulas for f^λ/ρ in terms of Littlewood–Richardson coefficients, determinants <cit.>, and reverse semistandard tableaux <cit.>. With these definitions in place, we are ready to define an important measure on the set of partitions. Fix any γ>0. The cylindric Plancherel measure with intensity γ is the probability measure on the set of skew partitions ℙ_𝖼𝖯𝗅𝖺𝗇(γ)(λ/ρ) = q^|ρ|γ^2|λ/ρ|( f^λ/ρ/|λ/ρ|!)^2 e^-γ^2/(1-q) (q;q)_∞. It was shown in <cit.> that the measure of the whole set of skew partitions is one, i.e. ∑_ρ⊂λℙ_𝖼𝖯𝗅𝖺𝗇(γ)(λ/ρ) =1. The cylindric Plancherel measure is a particular case of the periodic Schur measure, which is also a probability measure on the set of skew partitions proportional to q^|ρ| s_λ/ρ(a_1,…, a_N) s_λ/ρ(b_1,…, b_T) (∏_i=1^N ∏_j=1^T(a_ib_j;q)_∞) (q;q)_∞. It was introduced by Borodin in <cit.> and more recently revisited by Betea-Bouttier <cit.>. In (<ref>) functions s_λ/ρ are the Schur polynomials <cit.> and a_1,…,a_N,b_1,…,b_T ∈ [0,1). The cylindric Plancherel measure is recovered from (<ref>) setting N=T=n, a_i=b_j=γ/n for all i,j and taking the limit n→ +∞. Under this limit we have s_λ/ρ(a_1,…, a_N) →γ^|λ/ρ| f^λ/ρ/|λ/ρ|! and ∏_i=1^N ∏_j=1^T(a_ib_j;q)_∞→ e^-γ^2/(1-q). The limit q→ 0 of the cylindric Plancherel measure recovers the Poissonized Plancherel measure <cit.>, which we denote by ℙ_𝖯𝗅𝖺𝗇(γ)(λ) = γ^2|λ|( f^λ/|λ|!)^2 e^-γ^2. Notice that the Poissonized Plancherel measure is supported on straight partitions λ. On the other hand, taking the limit γ→ 0, the cylindric Plancherel measure becomes the volume measure ℙ_𝗏𝗈𝗅(ρ) = q^|ρ| (q;q)_∞. The following theorem states an equivalence in law between the height function of the q-PNG and the length of the first row in a cylindric Plancherel measure. Fix any θ>0. 
Let 𝔥 be the height function of the q-PNG with intensity Λ=θ^2/(1-q) and droplet initial condition and let χ∼ q-Geo(q) be an independent random variable. Then, for all t>|x|>0 we have 𝔥(x,t) + χd=λ_1, where λ_1 is the first row of a random partition λ where λ/ρ∼ℙ_𝖼𝖯𝗅𝖺𝗇(θ√((t^2-x^2)/2)). In <cit.> it was shown that 𝗑_N(T)-N +χ = λ̅_1, where 𝗑_N(T) is the N-th particle in the q-PushTASEP with step initial conditions and λ̅_1 is the first row of a partition λ̅ distributed according to the periodic Schur measure (<ref>) with parameters a_i=b_j=√(a) for all i,j. This is a consequence of <cit.> and <cit.>, which are respectively rephrasing of <cit.> and <cit.>. Taking the scaling limit (<ref>), in the periodic Schur measure on the right-hand side of (<ref>) we see that s_λ / ρ(a_1,…, a_N) ( θ(t+x)/√(2))^|λ/ρ|f^λ/ρ/|λ/ρ|!, s_λ / ρ(b_1,…, b_T) ( θ(t-x)/√(2))^|λ/ρ|f^λ/ρ/|λ/ρ|! and hence the limiting law of the partition λ̅ becomes ℙ_𝖼𝖯𝗅𝖺𝗇(θ√((t^2-x^2)/2)). Since by <ref> under the same scaling limit 𝗑_N(T)-N converges to 𝔥(x,t), this proves (<ref>). Fix any θ>0. Let 𝔥 be the height function of the q-PNG with intensity Λ=θ^2/(1-q) and droplet initial condition. Fix any x∈ and t>0 such that t>|x|>0. We have the following equality in distribution (in the sense of one-point marginals): (x,t)d=(0,√(t^2-x^2)). Since our main results deal with only one-point marginals of the height function of q-PNG, we shall consider the height function at the origin, i.e. (0,t), for the remainder of the paper. §.§ Sampling the cylindric Plancherel measure The cylindric Plancherel measure can be sampled leveraging a combinatorial construction discovered by Sagan and Stanley in <cit.>, which is a generalization of the celebrated Robinson-Schensted correspondence <cit.> (see e.g. <cit.>). Although we will not discuss precisely this construction, whose details are not used in this paper, we like to give a general idea of this sampling mechanism. The procedure we present below is a generalization of the canonical use of the Robinson-Schensted correspondence to sample the Plancherel measure <cit.>. We use the notion of partial permutation matrices, that are square matrices M=(M_i,j)_i,j=1^n such that M_i,j∈{0,1} and for all i,j, we have ∑_k M_i,k , ∑_k M_k,j∈{ 0,1}. In other words, partial permutation matrices M have at most one non-zero element per row and column and we define cont(M) := { i:∑_k M_i,k =1 }, # M := ∑_i,j=1^n M_i,j. The Sagan-Stanley correspondence can be formulated as a bijection (P,Q; M ) ⟷ (P,Q) where P,Q are a pair of partial tableaux of the same shape λ/ρ, M is a partial permutation matrix and P,Q are a pair of partial tableaux of the same shape μ/λ, where cont(P) ⊔cont(M) = cont(P), cont(Q) ⊔cont(M^T) = cont(Q), |μ/λ| = |λ /ρ| + |M|. where recall the content of a tableau was defined in (<ref>). The correspondence (<ref>) can be used iteratively to build random skew partitions from random permutation matrices. For this let us consider {ℳ_k}_k ≥ 0, a sequence of independent Poisson point processes on the square (0,1) × (0,1), where ℳ_k has intensity γ^2 q^k and ν, an independent random partition taken with law ℙ_𝗏𝗈𝗅 as in (<ref>). We define the number of points in each of the Poisson point processes ℳ_k as 𝒜_k := #ℳ_k, 𝒜_k ∼Poi(q^k γ^2) and we also define random variables 𝖭 = max{ k: 𝒜_k > 0 } 𝗇 = ∑_k≥ 0𝒜_k. 
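The Poissonization layer just introduced is straightforward to simulate; the following minimal sketch does so (the truncation threshold is an ad hoc cutoff, and the Sagan–Stanley insertion steps that turn the point configuration into a pair of tableaux are not implemented here).

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_layers(gamma, q, rng, tol=1e-12):
    """Sample the Poisson point processes M_0, M_1, ... on (0,1)^2,
    where M_k has intensity gamma^2 * q^k (so that #M_k ~ Poi(gamma^2 q^k))."""
    layers, k = [], 0
    while gamma ** 2 * q ** k > tol:      # layers beyond this cutoff are empty with overwhelming probability
        A_k = rng.poisson(gamma ** 2 * q ** k)
        layers.append(rng.uniform(size=(A_k, 2)))
        k += 1
    return layers

gamma, q = 3.0, 0.4
layers = sample_layers(gamma, q, rng)
A = [len(M) for M in layers]              # the counts A_k
n = sum(A)                                # total number of points, distributed as Poi(gamma^2/(1-q))
N = max([k for k, a in enumerate(A) if a > 0], default=0)

# Monte Carlo check of E[n] = gamma^2/(1-q)
mean_n = np.mean([sum(len(M) for M in sample_layers(gamma, q, rng)) for _ in range(2000)])
print(mean_n, gamma ** 2 / (1 - q))
```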
By the Borel-Cantelli theorem, since the intensities of the Poisson random variables 𝒜_k decay exponentially we have that 𝖭 is almost surely finite, whereas a simple calculation shows that 𝗇∼Poi(γ^2/(1-q)). Let us write ℳ to denote the point process obtained by superimposing {ℳ_k}_k≥ 0. From any realization of the Poisson point processes {ℳ_k}_k ≥ 0 and hence of 𝖭=N, 𝗇= n, we construct a sequence of n × n partial permutation matrices {M_k}_k ≥ 0 as follows: For each point p=(p_x,p_y)∈ℳ_k, we set M_k(i_p,j_p)=1 where i_p:=∑_k'≥ 0#{ℳ_k'∩ ([0,1] × [p_y,1])}, j_p:=∑_k'≥0#{ℳ_k'∩ ([0,p_x] × [0,1])}. The remaining entries of M_k are set to be zero (see <ref>). Notice that almost surely none of the n points of the Poisson point processes will share an x or y coordinate. Thus almost surely, M_k's are partial permutation matrices with # M_k=#ℳ_k. After constructing the sequence of matrices M_k, of which only the matrices M_1,…,M_N will be not identically zero we start building up random partitions. Let P_0,Q_0 be the pair of skew tableaux of shape ν/ν (i.e. with no labeled cells) and construct, through the correspondence (<ref>), the sequence of pairs of partial tableaux (P_i,Q_i), for i=0,…,N+1 such that (P_i,Q_i,M_N-i) ⟷ (P_i+1,Q_i+1). We set (P,Q)=(P_N+1, Q_N+1) and it is clear, by construction that this is a pair of standard tableau with the same shape, which we denote by λ/ρ, where |λ/ρ| = n and |ρ| = |ν| + ∑_k ≥ 0 k · (# M_k). The construction just describes, along with the fact that the Sagan-Stanley correspondence is a bijection, proving the following proposition. The random skew partition λ/ρ constructed through the procedure described above is distributed according to the cylindric Plancherel measure ℙ_𝖼𝖯𝗅𝖺𝗇(γ). The above sampling scheme allows us to derive exponential moment bounds for the size of random partitions λ and ρ, where λ/ρ is distributed according to the cylindric Plancherel measure. We report them in the following proposition. Fix γ>0. Suppose λ/ρ∼_𝖼𝖯𝗅𝖺𝗇(γ). Then |λ / ρ| ∼Poi(γ^2/(1-q)), For all θ∈ (0,log q^-1), there exists a constant =(q,θ)>0 such that [e^θ |ρ|]≤exp(γ^2qe^θ/1-qe^θ). Note that (<ref>) is a direct consequence of (<ref>) and the first equality in (<ref>). Let us focus on proving (<ref>). From the second equality in (<ref>), we deduce that |ρ|=|ν|+∑_k≥ 0k𝒜_k where ν∼ℙ_𝗏𝗈𝗅 and 𝒜_k∼Poi(q^kγ^2) are all independent. Using the explicit moment generating function for poisson random variables for θ∈ (0,log q^-1), we obtain 𝔼( e^θ∑_k≥ 1k 𝒜_k) = ∏_k≥ 1[e^θ k𝒜_k] = e^γ^2( qe^θ/1-qe^θ - q/1-q)≤ e^qγ^2 e^θ/1-qe^θ. Since (|ν|=k)=(q;q)_∞𝗉_k q^k and 𝗉_k≤ e^√(k) from (<ref>), for any θ∈ (0,log q^-1) we have [e^θ|ν|]≤(q,θ)<∞. Combining this with (<ref>), we arrive at (<ref>). §.§ Exact formulas for the height function of q-PNG The connection between the cylindric Plancherel measure and the height function of the q-Deformed Polynuclear Growth with droplet initial condition can be leveraged to write down explicit formulas for the distribution of 𝔥. Formulas analogous to those described below were found in <cit.> through a similar matching with the Poissonized Plancherel measure. As found in the seminal paper <cit.> the cylindric Plancherel measure becomes, after a certain random shift, a determinantal point process with an explicit correlation kernel. Given ζ>0, we say that a random variable S_ζ has Theta(q,ζ) distribution if ℙ(S_ζ=k) = q^k^2/2ζ^k/ϑ(q,ζ), where the normalization constant can be evaluated as the Jacobi triple product ϑ(q,ζ) = (q,-√(q)ζ, -√(q)/ζ;q)_∞. 
Here we are using the convention that (a_1, …,a_m;q)_∞ = (a_1,;q)_∞⋯ (a_m;q)_∞. Fix any γ,ζ>0. Let λ / ρ∼ℙ_𝖼𝖯𝗅𝖺𝗇(γ) and S_ζ∼Theta(q,ζ) be independent random variables. Consider the point process 𝔖(λ,S_ζ) = {λ_i -i + S_ζ + 1/2 : i=1,2,…}. Then 𝔖(λ,S_ζ) is a determinantal point process on ℤ':=ℤ+1/2 with correlation kernel 𝖪_ζ,γ(a,b) := ∑_ℓ∈ℤ'1/1+ζ^-1q^ℓJ_a+ℓ(2γ/1-q)J_b+ℓ(2γ/1-q), where J_m are the Bessel functions of the first kind. By the general theory of determinantal point processes <cit.>, we have the following corollary. Fix any γ,ζ>0. Let λ/ρ∼ℙ_𝖼𝖯𝗅𝖺𝗇(γ) and S_ζ∼Theta(q,ζ) be independent random variables. Then, for any s∈ℤ we have ℙ(λ_1+S_ζ≤ s) = ( 1 - 𝖪_ζ,γ)_ℓ^2(s+1/2, s+3/2,…). For the next result, we define the function 𝖥_q: [0,∞)→ [0,∞) as 𝖥_q(ζ) := ∏_k ≥ 01/1+ζ q^k. Fix any θ,ζ>0. Let 𝔥 be the height function of the q-PNG with intensity Λ=θ^2/(1-q) and droplet initial condition. We have [ 𝖥_q(ζ q^-(0,t))] =(I-𝖪_ζ/√(q),θ t/ √(2))_ℓ^2(ℕ'). where ℕ':={1/2,3/2,5/2,…}. Let χ∼ q-Geo(q) and S_ζ∼Theta(ζ,q) independent of q-PNG. By <cit.> ℙ(χ + S_ζ≤ n) = 1/(-ζ q^n+1/2;q)_∞ = 𝖥_q(ζ q^n+1/2), for any n∈ℤ and hence we have ((0,t) + χ+S_ζ≤ 0) = [ 𝖥_q(ζ q^1/2-(0,t))] . By <ref> we know (0,t)+χ is equivalent in distribution to the first row λ_1 of a partition λ∼ℙ_𝖼𝖯𝗅𝖺𝗇(θ t/√(2)). Therefore, (0,t)+χ+S_ζ is equal in law to λ_1+S_ζ whose probability distribution is given in <ref>. Taking ζ↦ζ/√(q), γ↦θ t/√(2) and s↦ 0 in (<ref>), in view of (<ref>), we arrive at (<ref>). This completes the proof. We next give another formula for the height function of the q-PNG stemming from a relation between the periodic Schur and the (usual) Schur measure observed in <cit.>. This formula was also derived in <cit.> as a special case of a more general result by Borodin <cit.>. Fix θ,ζ>0. Let 𝔥 be the height function of the q-PNG with intensity Λ=θ^2/(1-q) and let χ∼ q-Geo(q), S_ζ∼Theta(q,ζ) be independent random variables, independent of as well. Then for s∈ we have ℙ(𝔥(0,t)+χ+S_ζ≤ s) = 𝔼_𝖯𝗅𝖺𝗇(θ t/√(2)(1-q))( ∏_i ≥ 11/1+ζ q^s+i-λ_i) It was shown in <cit.> that ℙ(λ̅_1+S_ζ≤ s) = 𝔼( ∏_i ≥ 11/1+ζ q^s+i-λ̃_i), where in the left-hand side the partition λ̅ is taken with respect to the periodic Schur measure (<ref>), while in the right-hand side the partition λ̃ obeys a Schur measure (i.e. (<ref>) with q=0) with specializations in geometric progression (a_1,qa_1,q^2a_1, …, a_N,qa_N,q^2a_N,…) and (b_1,qb_1,q^2b_1, …, b_T,qb_T,q^2b_T,…). We now take the scaling (<ref>) after setting a_i=b_j=√(a) and x=0 to deduce (<ref>) from (<ref>). The limit of the left-hand side was already taken in (<ref>). Considering the limits s_λ(a_1,qa_1,q^2a_1, …, a_N,qa_N,q^2a_N,…) ( θ t/√(2) (1-q))^|λ|f^λ/|λ|!, s_λ(b_1,qb_1,q^2b_1, …, b_T,qb_T,q^2b_T,…) ( θ t/√(2)(1-q))^|λ|f^λ/|λ|!, we can confirm that the right-hand side of (<ref>) converges to the right-hand side of (<ref>). § UPPER-TAIL LDP FOR Q-PNG The goal of this section is to prove the upper-tail large deviation principle for the height function of q-PNG. In <ref>, we elaborate on the brief sketch given in <ref> and reduce our proof to establishing asymptotics and estimates for the leading term and higher-order term respectively. In <ref>, we utilize Bessel function tail behavior to extract various estimates related to the trace of the kernel and its derivatives. The leading term and the higher-order term are analyzed in <ref> and <ref> respectively. 
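Before turning to the proofs, we note that the formulas of the previous subsection are also convenient numerically. The sketch below evaluates the kernel 𝖪_ζ,γ by truncating the sum over ℓ and approximates the Fredholm determinant by a finite section, producing ℙ(λ_1+S_ζ≤ s) for a few values of s; the truncation sizes and the parameter values are ad hoc and not tuned.

```python
import numpy as np
from scipy.special import jv

def K(a, b, zeta, gamma, q, L=200):
    """Truncated evaluation of the kernel K_{zeta,gamma}(a, b); a, b and ell run over Z + 1/2."""
    ell = np.arange(-L, L) + 0.5
    w = 1.0 / (1.0 + q ** ell / zeta)     # the weight 1/(1 + zeta^{-1} q^ell)
    x = 2.0 * gamma / (1.0 - q)
    return np.sum(w * jv(a + ell, x) * jv(b + ell, x))

def prob_first_row(s, zeta, gamma, q, M=40, L=200):
    """P(lambda_1 + S_zeta <= s), via a finite M x M section of det(1 - K) on l^2(s+1/2, s+3/2, ...)."""
    pts = s + 0.5 + np.arange(M)
    mat = np.array([[K(a, b, zeta, gamma, q, L) for b in pts] for a in pts])
    return np.linalg.det(np.eye(M) - mat)

q, zeta, gamma = 0.3, 1.0, 2.0            # arbitrary parameter values
for s in range(2, 14, 2):
    print(s, prob_first_row(s, zeta, gamma, q))   # increases to 1 as s grows
```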
§.§ An outline of proof of <ref> In this subsection, we present the main argument for the proof of the large deviation principle for the upper-tail of the height function 𝔥(0,t). For this, we define the function Υ(p) := 4 sinh(p/2), for p>0, whose Legendre transform is the rate function Φ_+ defined in (<ref>). Indeed, it can be easily checked that Φ_+(μ) = sup_p>0( p μ - Υ(p) ). We are now ready to state the main theorem of this section, which computes the large time asymptotics of the moment-generating function of 𝔥. Fix any p>0. Let 𝔥 be the height function of the q-PNG with intensity Λ=2(1-q) and droplet initial condition. We have lim_t→∞1/tlog[ e^p(0,t)] = Υ(p), where Υ(·) is defined in (<ref>) Let us complete the proof of <ref> assuming the <ref>. Deriving large deviation rate functions from the asymptotic limit of moment generating function, i.e. from the Lyapunov exponent, is common practice and in the context of the KPZ equation, this idea has been worked out in <cit.>. Indeed appealing to a general result,<cit.>, we obtain that the upper-tail LDP of (0,t) is given by the Legendre transform of Υ(·), which is Φ_+ due to the relation in (<ref>). This proves the upper-tail LDP. We now give the proof of <ref> elaborating on the strategy outlined in <ref>. We will need to define some parameters which we will use throughout the rest of the section. For p>0 and q∈ (0,1), we set s=p/log q^-1>0, n=⌊ s ⌋+1, α=s-⌊ s ⌋. Set δ:=(Υ(p)-2p)/4>0 (in fact any δ∈ (0,Υ(p)-2p) will work). For each t>0, we define =(t):=e^-(Υ(p)-δ)t/s. We are going to express the moment generating function of the height function 𝔥(0,t) using a “Fubini trick" from <cit.> as [e^p(0,t)] =[q^-s(0,t)] =(-1)^n∫_0^∞ζ^-α^n/ζ^n[𝖥_q(ζ q^-(0,t))] ζ/(-1)^n∫_0^∞ζ^-α𝖥_q^(n)(ζ) ζ. where 𝖥_q is defined in (<ref>). Let us analyze the right-hand side of (<ref>). First, from <cit.> we know (-1)^n∫_0^∞ζ^-α𝖥_q^(n)(ζ) ζ is strictly positive, finite and free of t. Thus we may ignore the denominator while computing 1/tlog limit of the right-hand side of (<ref>). Moving to the numerator, we are going to split the integration over two intervals as (-1)^n∫_0^∞ζ^-α^n/ζ^n[𝖥_q(ζ q^-(0,t))] ζ = ( ∫_0^ + ∫_^∞) (-1)^nζ^-α^n/ζ^n[𝖥_q(ζ q^-(0,t))] ζ, where the parameter was defined in (<ref>). The second integral in the right-hand side of (<ref>) can be bounded, in absolute value, as follows. ∫_^∞| ζ^-α^n/ζ^n[𝖥_q(ζ q^-(0,t))] | ζ = ∫_^∞|ζ^-α[q^-n(0,t)𝖥_q^(n)(ζ q^-(0,t))]| ζ ≤sup_x>0 |x^n𝖥_q^(n)(x)|·∫_^∞ζ^-n-αζ = ^-s/ssup_x>0 |x^n𝖥_q^(n)(x)|, where in the last line we have used that s=n+α-1. By <cit.>, we know that sup_x>0 |x^n𝖥_q^(n)(x)| is finite (and independent of t), whereas, by the choice of (<ref>) we have ^-s=e^(Υ(p)-δ)t. This shows that for any fixed p>0 we have | ∫_^∞ζ^-α^n/ζ^n[𝖥_q(ζ q^-(0,t))]d ζ| ≤ C e^(Υ(p) - δ) t for some constant C=C(p). Let us now examine the first integral on the right-hand side of (<ref>). To analyze this remaining term, we use the exact representation of [𝖥_q(ζ q^-(0,t))] from (<ref>), which allows us to write [𝖥_q(ζ q^-(0,t))]=(I-K_ζ,t)_ℓ^2(ℕ') where the correlation kernel K_ζ,t equals 𝖪_ζ/√(q),θ t/√(2) defined in (<ref>) with θ=√(2)(1-q) (as we work q-PNG with intensity Λ=2(1-q)). In other words, we set K_ζ,t(a,b):=𝖪_ζ/√(q),t(1-q)(a,b)=∑_ℓ∈'v_q,ℓ(ζ)J_a+ℓ(2t)J_b+ℓ(2t), v_q,ℓ(ζ):=1/1+ζ^-1q^ℓ+1/2, for a,b∈'. For the remainder of this section, we shall always work with ℓ^2(ℕ') space, and drop it from the Fredholm determinant notation in (<ref>). 
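For later reference, the Legendre transform in (<ref>) can be computed in closed form: the supremum is attained at p=2arccosh(μ/2) for μ>2, giving Φ_+(μ)=2μ arccosh(μ/2)-2√(μ^2-4), in agreement with the expression for Φ_+' recorded in the next subsection. The following sketch is a simple numerical sanity check of this computation.

```python
import numpy as np

Upsilon = lambda p: 4.0 * np.sinh(p / 2.0)

def phi_plus_numeric(mu, pmax=30.0, npts=200001):
    p = np.linspace(1e-8, pmax, npts)
    return np.max(p * mu - Upsilon(p))            # Legendre-Fenchel transform on a grid

def phi_plus_closed(mu):
    return 2.0 * mu * np.arccosh(mu / 2.0) - 2.0 * np.sqrt(mu ** 2 - 4.0)

for mu in (2.5, 4.0, 10.0, 50.0):
    print(mu, phi_plus_numeric(mu), phi_plus_closed(mu))   # the two columns agree
```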
We now state two propositions, whose proofs are postponed to <ref> and <ref> below. For each t>0, (K_ζ,t):=∑_a∈ℕ' K_ζ,t(a,a) is differentiable at each ζ∈ (0,1). For each p>0, we have lim_t→∞1/tlog[(-1)^n+1∫_0^ζ^-α^n/ζ^n(K_ζ,t)ζ] = Υ(p). For each p>0, there exists a constant =(p)>0 such that for all t large enough we have ∫_0^ζ^-α|^n/ζ^n[(I-K_ζ,t)+tr(K_ζ,t)]| ζ≤ e^Υ(p)t-1 t. Making use of the identity in (<ref>), we write the first integral on the right-hand side of (<ref>) as (-1)^n∫_0^ζ^-α^n/ζ^n[𝖥_q(ζ q^-(0,t))] ζ = (-1)^n+1∫_0^ζ^-α^n/ζ^n(K_ζ,t)ζ +(-1)^n∫_0^ζ^-α^n/ζ^n[(I-K_ζ,t)+tr(K_ζ,t)] ζ and employing the convergence result (<ref>) along with the bound (<ref>) we get lim_t→∞1/tlog[(-1)^n∫_0^ζ^-α^n/ζ^n[𝖥_q(ζ q^-(0,t))] ζ]=Υ(p). Combining the previous limit with the bound (<ref>) we complete the proof of (<ref>). §.§ Bessel estimates In this section we collect various estimates related to Bessel functions that will be useful in our later analysis. The key reason why Φ_+ defined in (<ref>) appears as the upper-tail rate function is that governs the tail asymptotics of Bessel functions. Indeed, from classical estimates for Bessel functions (see Lemma 9.1 and Eq. (9.17) in <cit.> for example) we have the following result. For each n∈_>0 and for all 0<2t<n we have J_n^2(2t)≤π/8√(n^2-4t^2) e^-tΦ_+(nt). where J_m are the Bessel functions of the first kind. Furthermore, for each fixed u>2 we have lim_t→∞ 2π· t √(u^2-4)· J_⌊ ut ⌋^2(2t)e^tΦ_+(u)=1. For each v>0, we define the function U_v(x):=vx-Φ_+(x). The following lemma collects useful properties of Φ_+, Υ, and U_v functions. We have the following. * Φ_+, Φ'_+ are strictly increasing on [2,∞) with Φ_+'(x)=2logx+√(x^2-4)/2 and Φ_+”(x)=2/√(x^2-4). * U_v(x) has a unique maximizer on [2,∞) given by x_v^*:=2cosh(v/2) with maximum U_v(x_v^*)=Υ(v)=4sinh(v/2). * Υ(v)/v is strictly increasing with lim_v↓ 0Υ(v)/v=2. * For all v>0, v x_v^*=2vcosh(v/2)>Υ(v). The above lemma can be checked easily and hence its proof is skipped. In our analysis in subsequent sections, we shall often encounter the double sum ∑_a∈ℕ'∑_ℓ∈ℤ'J_a+ℓ^2(2t) e^vℓ for v>0, which is related to the trace of the kernel K_ζ,t in (<ref>) and its derivative (which we will define in the next subsection). We shall now devote a few lemmas to understanding the asymptotics as t→∞ of the above double sum restricted to various subsets of ℕ'×ℤ'. Fix any D∈ and v>0. For each t>0, we have ∑_a∈ℕ'∑_ℓ∈ℤ': a+ℓ≤ DJ_a+ℓ^2(2t) e^vℓ≤e^Dv+2v/(e^v-1)^2. Recall that for integers n>0, we have J_-n(x)=(-1)^nJ_n(x). Combining this with the uniform estimate for nonnegative order Bessel functions from <cit.>, we have that sup_x∈ |J_m^2(x)| ≤ 1 for all m∈. Hence, ∑_a∈ℕ'∑_ℓ∈ℤ': a+ℓ≤ DJ_a+ℓ^2(2t) e^vℓ≤∑_a∈ℕ'∑_ℓ∈ℤ': a+ℓ≤ D e^vℓ. The required estimate now follows from geometric series estimates. Let r>0 and fix any (2+r)<γ_1<γ_2≤∞. Fix any v≥ 0 and t≥ 1. There exists a =(r,v)>0 such that ∑_a∈ℕ'∑_ℓ∈ℤ': a+ℓ∈ [γ_1t,γ_2t]J_a+ℓ^2(2t) e^vℓ≤· e^t ·max_x∈ [γ_1,γ_2]U_v(x). where the function U_v is defined in (<ref>). Using the estimate from (<ref>) we get ∑_a∈ℕ'∑_ℓ∈ℤ': a+ℓ∈ [γ_1t,γ_2t]J_a+ℓ^2(2t) e^vℓ ≤π/8t√(4r+r^2)∑_a∈ℕ' e^-va∑_ℓ∈ℤ': a+ℓ∈ [γ_1t,γ_2t] e^-tΦ_+(a+ℓt)+v(a+ℓ) ≤π/8t√(4r+r^2)1/1-e^-v∑_k∈ℤ∩ [γ_1t, γ_2t] e^t U_v(k/t) Now we can estimate the sum in the right-hand side of (<ref>) using Laplace's method, which can be applied since U_v is concave as shown in <ref>. In case γ_1 ≤ x_v^* ≤γ_2, one can easily show that ∑_k∈ℤ∩ [γ_1t, γ_2t] e^t U_v(k/t)≤√(t ) e^t U_v(x_v^*). 
On the other hand, when γ_2<x_v^*, we have ∑_k∈ℤ∩ [γ_1t, γ_2t] e^t U_v(k/t) ≤ e^t U_v(γ_2)∑_k∈ℤ∩ [γ_1t, γ_2t] e^t U_v'(γ_2) (k/t -γ_2) ≤ e^t U_v(γ_2), while, if x_v^* ≤γ_1, we have ∑_k∈ℤ∩ [γ_1t, γ_2t] e^t U_v(k/t) ≤ e^t U_v(γ_1)∑_k∈ℤ∩ [γ_1t, γ_2t] e^t U_v'(γ_1) (k/t -γ_1) ≤ e^t U_v(γ_1). Plugging these estimates back in (<ref>) yields the desired result. Fix any r,v>0 and t≥ 1. Fix any σ∈ (2+r,∞]. There exists a =(r,v)>0 such that ∑_a∈ℕ'∑_ℓ∈ℤ': ℓ≤σ tJ_a+ℓ^2(2t) e^vℓ≤· e^t · U_v(min{σ,x_v^*}). where the function U_v is defined in (<ref>). Recall that Φ_+ is increasing and Φ_+(2)=Φ_+'(2)=0. We can thus choose a number ξ>0 such that ξ≤min{r,x_v^*-2} and v-Φ_+'(2+ξ) >0. Take 0 <δ≤min{v^-1ξ(v-Φ_+'(2+ξ)),ξ}. Note that as Φ_+' is increasing we have Φ_+(2+ξ)=∫_2^2+ξΦ_+'(y) y ≤ξΦ_+'(2+ξ). Thus for any v>0, we have U_v(2+ξ)=v(2+ξ)-Φ_+(2+ξ) ≥ v(2+ξ)-ξΦ_+'(2+ξ) ≥ v(2+δ). Based on this δ, we split the double sum in (<ref>) into three parts given by the following three index sets: T_1 :={(a,ℓ)∈ℕ'×ℤ'| a+ℓ≤ (2+δ)t}, T_2 :={(a,ℓ)∈ℕ'×ℤ'| a+ℓ> (2+δ)t, ℓ≤ (2+δ)t}, T_3 :={(a,ℓ)∈ℕ'×ℤ'| a+ℓ> (2+δ)t, (2+δ)t< ℓ≤σ t}. We write ∑_a∈ℕ'∑_ℓ∈ℤ': ℓ≤σ tJ_a+ℓ^2(2t) e^vℓ=(∑_(a,ℓ)∈ T_1+∑_(a,ℓ)∈ T_2+∑_(a,ℓ)∈ T_3)J_a+ℓ^2(2t) e^vℓ. For the first sum, using <ref> we have ∑_(a,ℓ)∈ T_1J_a+ℓ^2(2t) e^vℓ≤· e^v(2+δ)t. For the second sum using (<ref>) we get ∑_(a,ℓ)∈ T_2J_a+ℓ^2(2t) e^vℓ ≤π/8t√(4δ+δ^2)∑_ℓ∈ℤ': ℓ≤ (2+δ) t e^vℓ∑_a∈ℕ', a+ℓ≥ (2+δ)t e^-tΦ_+(a+ℓt). Using the fact that Φ_+ is convex we see that for a+ℓ≥ (2+δ)t we have tΦ_+(a+ℓt) ≥ tΦ_+(2+δ)+((a+ℓ)-2t-δ t)Φ_+'(2+δ). This forces, after a change of variable k=a+ℓ in (<ref>), ≤· e^-tΦ_+(2+δ)∑_ℓ∈ℤ': ℓ≤ (2+δ) t e^vℓ∑_k∈ℕ' e^-kΦ_+'(2+δ). where we recognize that both the above sums are just geometric series. As Φ_+'(2+δ)>0, overall we have ∑_(a,ℓ)∈ T_2J_a+ℓ^2(2t) e^vℓ≤· e^t· U_v(2+δ). Finally, for the third sum we have ∑_(a,t)∈ T_3J_a+ℓ^2(2t) e^vℓ ≤π/8t√(4δ+δ^2)∑_ℓ∈ℤ': (2+δ)t≤ℓ≤σ t e^vℓ∑_a∈ℕ', a+ℓ≥ (2+δ)t e^-tΦ_+(a+ℓt). Using the estimate Φ_+(a+ℓt) ≥Φ_+(ℓt)+atΦ_+'(2+δ) for the term in the exponent, which holds for (a,ℓ) ∈ T_3, we bound the rightmost sum to get ≤·1/t∑_ℓ∈ℤ': (2+δ)t≤ℓ≤σ t e^vℓ e^-tΦ_+(ℓt). The last sum can be bounded in a similar manner as done in the proof of <ref>, to yield that the above term is at most · e^t· U_v(min{σ,x_v^*}). Since U_v(min{σ,x_v^*}) ≥ U_v(2+ζ) ≥max{U_v(2+δ), v(2+δ)}, we see that the third sum dominates. This gives us the desired result. §.§ Trace Analysis The main goal of this section is to prove <ref>. We first recall a few basic definitions and results from operator theory. For an operator with kernel T:ℤ'×ℤ'→, we define its norm as T:=(√(T^*T)). We say T is trace-class when T<∞. The operator T is said to be it is positive if ∑_a,b∈ℤ' T(a,b)f(a)f(b) ≥ 0 for all f:ℤ'→. For a positive operator, we have T=(T). To prove that the kernels are positive and trace class we will often rely on the standard square root trick result which we recall for the reader's convenience. Consider kernel R:ℕ' ×ℤ' →ℝ, with ∑_a∈ℕ'∑_ℓ∈ℤ' R^2(a,ℓ)<∞. The operator K with the kernel K(a,b):=∑_ℓ∈ℤ' R(a,ℓ)R(b,ℓ) is positive and trace-class, with K=(K)=∑_a∈ℕ'∑_ℓ∈ℤ' R^2(a,ℓ). A continuous version of this result appears in <cit.>. The proof in the reference can be adapted easily to show the above lemma and hence we do not report the details. Recall the kernel K_ζ,t from (<ref>). 
A formal computation suggests that the `derivatives' of the kernel K_ζ,t are given by K_ζ,t^(n)(a,b) :=∑_ℓ∈ℤ'( ^n/ζ^n v_q,ℓ(ζ)) J_a+ℓ(2t)J_b+ℓ(2t) = (-1)^n+1n!∑_ℓ∈ℤ'q^ℓ+1/2/(ζ+q^ℓ+1/2)^n+1J_a+ℓ(2t)J_b+ℓ(2t). The following lemma shows the derivative of K_ζ,t^(n-1) is indeed K_ζ,t^(n) in the trace norm sense. For convenience, we write K_ζ,t^(0):=K_ζ,t. The kernel K_ζ,t defines a positive trace class operator on ℓ^2(ℕ'). For each n≥ 1 * (-1)^n+1K_ζ,t^(n) defines a positive trace class operator on ℓ^2(ℕ'). * K_ζ,t^(n-1) is differentiable in ζ at each ζ>0 in the trace norm, with derivative being equal to K_ζ,t^(n), i.e., lim_ζ'→ζK_ζ',t^(n-1)-K_ζ,t^(n-1)/ζ'-ζ-K_ζ,t^(n)=0. We now turn to the proof of <ref>. From <ref> it follows that for each t>0 and M>1 ∑_a∈ℕ'∑_ℓ∈ℤ'J_a+ℓ^2(2t) M^ℓ = ∑_a∈ℕ'∑_ℓ∈ℤ':a+ℓ≤ 3tJ_a+ℓ^2(2t) M^ℓ+∑_a∈ℕ'∑_ℓ∈ℤ':a+ℓ>3tJ_a+ℓ^2(2t) M^ℓ<∞. This forces ∑_a∈ℕ'∑_ℓ∈ℤ'q^ℓ+1/2J_a+ℓ^2(2t)/(ζ+q^ℓ+1/2)^n+1<∞, ∑_a∈ℕ'∑_ℓ∈ℤ'J_a+ℓ^2(2t)/1+ζ^-1q^ℓ+1/2<∞. for each ζ,t>0, q∈ (0,1) and n≥ 1. In view of <ref>, we have that K_ζ,t and (-1)^n+1K_ζ,t^(n) are positive and trace-class. We focus on proving the differentiability of K_ζ,t^(n-1). Towards this end, set D_ζ',ζ:=K_ζ',t^(n-1)-K_ζ,t^(n-1)/ζ'-ζ-K_ζ,t^(n). Assume ζ'>ζ. We have D_ζ',ζ(a,b)=∑_ℓ∈ℤ'J_a+ℓ(2t)J_b+ℓ(2t)∫_ζ^ζ'(ζ'-r)/2(ζ'-ζ)·(-1)^n+2(n+1)!q^ℓ+1/2/(ζ+q^ℓ+1/2)^n+2ζ. Applying <ref> we see that D_ζ',ζ=∑_a∈ℕ'∑_ℓ∈ℤ'J_a+ℓ^2(2t)∫_ζ^ζ'(ζ'-r)/2(ζ'-ζ)·(n+1)!q^ℓ+1/2/(ζ+q^ℓ+1/2)^n+2ζ. Note that (ζ'-r)/2(ζ'-ζ)≤1/2. Thus, by (<ref>) with M=q^-n-1 we have D_ζ',ζ≤ (ζ'-ζ)∑_a∈ℕ'∑_ℓ∈ℤ'J_a+ℓ^2(2t)· (n+1)!q^-(n+1)(ℓ+1/2)<∞. By Dominated Convergence Theorem we get that D_ζ',ζ→ 0 as ζ'↓ 0. The proof for ζ'<ζ is analogous. This completes the proof. We now come to the proof of <ref>. We are going to show that exists a constant C=C(p)>1 such that for all t large enough we have 1/C t· e^Υ(p)t≤ (-1)^n+1∫_0^ζ^-α^n/ζ^ntr(K_ζ,t) ζ≤ C e^Υ(p)t. In view of <ref> we have ^n/ζ^ntr(K_ζ,t)=tr(K_ζ,t^(n))=(-1)^n+1n!∑_a∈ℕ'∑_ℓ∈ℤ'q^ℓ+1/2J_a+ℓ^2(2t)/(ζ+q^ℓ+1/2)^n+1. Pushing the integral inside the double sum we have (-1)^n+1∫_0^ζ^-α^n/ζ^ntr(K_ζ,t) ζ= n!∑_a∈ℕ'∑_ℓ∈ℤ'q^ℓ+1/2J_a+ℓ^2(2t)∫_0^ζ^-α/(ζ+q^ℓ+1/2)^n+1ζ. Using the substitution u=1/(1+ζ^-1q^ℓ+1/2) and setting y_ℓ=1/(1+^-1q^ℓ+1/2) the integral in the right-hand side become ∫_0^ζ^-α/(ζ+q^ℓ+1/2)^n+1ζ = q^-(n+α)(ℓ+1/2)∫_0^y_ℓ u^-α(1-u)^n+α-1 u. Thus, (-1)^n+1∫_0^ζ^-α^n/ζ^ntr(K_ζ,t) ζ = n!∑_a∈ℕ'∑_ℓ∈ℤ'e^p(ℓ+1/2)J_a+ℓ^2(2t)∫_0^y_ℓ u^-α(1-u)^n+α-1 u. We now seek to find an upper and lower bound for the right-hand side in the above equality. For the upper bound we extend the range of integration to [0,1] and evaluate the integral, which we recognize to be a Beta function, to get (-1)^n+1∫_0^ζ^-α^n/ζ^ntr(K_ζ,t) ζ≤Γ(1-α)Γ(n+α)∑_a∈ℕ'∑_ℓ∈ℤ'e^p(ℓ+1/2)J_a+ℓ^2(2t). Let us choose a constant b=b(p) such that 2<b< min{Υ(p)/p, 2cosh(p/2)}, which exists by virtue of <ref>. Splitting the summation over ℓ in (<ref>), we obtain the bounds ∑_a∈ℕ'∑_ℓ∈ℤ': a+ℓ≤ btJ_a+ℓ^2(2t)e^p(ℓ+1/2)≤ C · e^pbt≤ C · e^Υ(p)t, from <ref> and ∑_a∈ℕ'∑_ℓ∈ℤ': a+ℓ≥ btJ_a+ℓ^2(2t)e^p(ℓ+1/2) ≤ C· e^Υ(p)t. from <ref>. Combining the inequalities (<ref>), (<ref>) with (<ref>) proves the upper bound in (<ref>). For the lower bound, we set d=1/t⌊ 2tcosh(p/2)⌋ and focus only on a single term in the double sum in the right-hand side of (<ref>), with a=1/2 and ℓ=d t-1/2. We have ≥ n! · e^pdtJ_dt^2(2t)∫_0^y_* u^-α(1-u)^n+α-1 u, where y_*:=y_dt-1/2 =(1+^-1q^dt)^-1 =(1+e^(Υ(p)-δ-pd)t/s)^-1. For large enough t, we have Υ(p)-δ-pd < 0, which forces y_*→ 1 as t→∞. 
Thus, for all large enough t, one can ensure ∫_0^y_* u^-α(1-u)^n+α-1 u ≥ C^-1∫_0^1 u^-α(1-u)^n+α-1 u = C^-1·Γ(1-α)Γ(n+α)/n!. Applying (<ref>) we see that for large enough t we have e^pdtJ_dt^2(2t) ≥1/C t· e^-tΦ_+(2cosh(p/2))+pdt≥1/C t· e^Υ(p)t. Inserting the estimates (<ref>), (<ref>) in the right-hand side of (<ref>) proves the desired lower bound in (<ref>). This completes the proof. We conclude this subsection by proving additional bounds on the trace of the kernel K_ζ,t and its derivatives. These will be useful for our analysis in <ref> below. Fix any r>0, σ>2+r and n≥ 1. There exists a constant =(n,r,q)>0 such that |(K_q^σ t,t)| ≤· e^ t· U_(min{σ,x_^*})-tσ |(K_q^σ t,t^(n))| ≤· e^ t· U_n(min{σ,x_n^*}) where :=log q^-1 and the function U_v was defined in (<ref>) and x_p^*=2cosh(p/2) was defined in <ref>. We use the fact that 1+q^ℓ-σ t+1/2≥_ℓ>σ t +q^ℓ-σ t+1/2·_ℓ≤σ t to get |(K_q^σ t,t)| =∑_a∈'∑_ℓ∈'J_a+ℓ^2(2t)/1+q^ℓ-σ t+1/2 ≤ q^σ t∑_a∈'∑_ℓ∈', ℓ≤σ t q^-ℓ-1/2J_a+ℓ^2(2t)+∑_a∈'∑_ℓ∈',ℓ>σ t J_a+ℓ^2(2t). The first term in the right-hand side of (<ref>) can be bounded using <ref>, while the second term can be estimated using (<ref>). Together we have ≤·[ e^t[U_(min{σ,x_^*})-σ]+e^-t ·Φ_+(σ)]. Since U_(min{σ,x_^*})-σ≥ U_(σ)-σ=-Φ_+(σ), the previous inequality implies (<ref>). In order to show (<ref>), we use the fact that q^σ t+q^ℓ+1/2≥ q^σ t·_ℓ>σ t +q^ℓ+1/2·_ℓ≤σ t to get |tr(K_q^σ t,t^(n))| =n!∑_a∈ℕ'∑_ℓ∈ℤ'q^ℓ+1/2J_a+ℓ^2(2t)/(q^σ t+q^ℓ+1/2)^n+1 ≤ n!· q^-n/2∑_a∈'∑_ℓ∈', ℓ≤σ t q^-nℓJ_a+ℓ^2(2t)+n!· q^-σ (n+1) t∑_a∈'∑_ℓ∈',ℓ>σ t q^ℓ+1/2 J_a+ℓ^2(2t). Once again, using <ref> and (<ref>) we arrive the bound (<ref>)≤·[ e^ t· U_n(min{σ,x_n^*}) +e^ t [-Φ_+(σ)+nσ] ]. As U_n(min{σ,x_n^*}) ≥ -Φ_+(σ)+nσ, this proves (<ref>), completing the proof. §.§ Higher-order terms The goal of this section is to prove <ref>. To deal with the expression (I-K_ζ,t)+(K_ζ,t), we will use the exterior algebra definition for Fredholm determinants <cit.>, which we recall here for convenience. For a trace-class operator T on a Hilbert space H, consider the L-th exterior power ∧_i=1^L H and the operator T^∧ L defined by T^∧ L(v_1 ∧⋯∧ v_L) := (Tv_1) ∧⋯∧ (Tv_L). The operator T^∧ L is trace-class on ∧_i=1^L H and we have (I-T)=1+∑_L=1^∞ (-1)^L(T^∧ L). In light of the above formula, we have (I-K_ζ,t)+(K_ζ,t)=1+∑_L=2^∞ (-1)^L(K_ζ,t^∧ L). From (<ref>) and the calculation in (<ref>), we know (I-K_ζ,t) is infinitely differentiable and from <ref> we have (K_ζ,t) is also infinitely differentiable. Thus taking derivatives on both sides of the above equation we get ^n/ζ^n[(I-K_ζ,t)+(K_ζ,t)]=^n/ζ^n[∑_L=2^∞ (-1)^L(K_ζ,t^∧ L)]. We claim that the sum and derivative can be interchanged, i.e., ^n/ζ^n[∑_L=2^∞ (-1)^L(K_ζ,t^∧ L)]=∑_L=2^∞ (-1)^L^n/ζ^n(K_ζ,t^∧ L). We shall justify the interchange in <ref>. Assuming this, we insert the above formula in the right-hand side of (<ref>) to get ∫_0^ζ^-α|^n/ζ^n[(I-K_ζ,t)+tr(K_ζ,t)]| ζ≤∑_L=2^∞∫_0^ζ^-α|^n/ζ^n(K_ζ,t^∧ L)| ζ. The derivatives of (K_ζ,t^∧ L) can be estimated using terms of the form (K_ζ,t^(m)). To state precisely these estimates, we introduce a few pieces of notation. For any n,L∈ℕ define the set of compositions of n of length L (L,n){m⃗∈ (_≥0)^L | m_1+m_2+⋯+m_L=n} and the multinomial coefficient nm⃗:=n!/m_1! m_2!⋯ m_L!. We set r:=(Υ(p)/p-2)/8>0 so that q^(2+4r)t =e^-(2+4r)pt/s > e^-(Υ(p)-δ)t/s=. where is defined in (<ref>). Note that the above relation ensures that the bounds in <ref> are valid for all K_ζ,t with ζ∈ [0,τ]. 
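The exterior-algebra expansion recalled above can be checked directly in finite dimensions, where the trace of the L-th exterior power T^∧ L equals the sum of the L× L principal minors of T, that is, the L-th elementary symmetric polynomial of its eigenvalues. The following sketch is purely illustrative.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
d = 6
T = 0.3 * rng.standard_normal((d, d))             # any small matrix will do

lhs = np.linalg.det(np.eye(d) - T)

rhs = 1.0
for L in range(1, d + 1):
    # tr(T^{wedge L}) = sum of the L x L principal minors of T
    tr_wedge = sum(np.linalg.det(T[np.ix_(S, S)]) for S in combinations(range(d), L))
    rhs += (-1) ** L * tr_wedge

print(lhs, rhs)                                   # the two values coincide
```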
We now extract a convenient expression for the derivatives of (K_ζ,t^∧ L) in the following lemma. Fix an orthonormal basis {e_i}_i≥ 1 of ℓ^2(ℕ'). For each t>0, (K_ζ,t^∧ L) is differentiable in ζ at each ζ∈ (0,]. Furthermore, we have ^n/ζ^n(K_ζ,t^∧ L) = ∑_i_1<⋯<i_L∑_m⃗∈(L,n)nm⃗(⟨ e_i_k, K_ζ,t^(m_j)e_i_j)_j,k=1^L. where K_ζ,t^(n) defined in (<ref>). Let us first note that (K_ζ,t^∧ L) = ∑_i_1<⋯<i_L⟨ e_i_1∧⋯∧ e_i_L, K_ζ,t e_i_1∧⋯∧ K_ζ,te_i_L⟩ = ∑_i_1<⋯<i_L(⟨ e_i_k,K_ζ,te_i_j⟩)_j,k=1^L. We wish to take the derivative of the terms inside the above sum. By the product rule for derivatives, we have ^n/ζ^n(⟨ e_i_k,K_ζ,te_i_ℓ⟩)_k,ℓ=1^L = ∑_m⃗∈(L,n)nm⃗(d^m_j/dζ^m_j⟨ e_i_k, K_ζ,te_i_j⟩)_j,k=1^L. Thanks to <ref>, we may pass the derivative on r.h.s. of the above equation inside to get ^n/ζ^n(⟨ e_i_k,K_ζ,te_i_ℓ⟩)_k,ℓ=1^L = ∑_m⃗∈(L,n)nm⃗(⟨ e_i_k, K_ζ,t^(m_j)e_i_j⟩)_j,k=1^L. Given the identities in (<ref>) and (<ref>), (<ref>) follows by taking derivative on both sides of (<ref>) and justifying the interchange of derivatives and the infinite sum ∑_i_1<⋯<i_L. To establish the interchange, employing <cit.>, it suffices to show that ∑_i_1<⋯<i_L∑_m⃗∈(L,n)nm⃗(⟨ e_i_k, K_ζ,t^(m_j)e_i_j⟩)_j,k=1^L converges absolutely and uniformly for ζ∈ [0,]. To this end, we note that K_ζ,t^(m_j)s are self-adjoint Hilbert-Schimdt operators on ℓ^2(ℕ'). Thus, appealing to <cit.> we have ∑_i_1<⋯<i_L∑_m⃗∈(L,n)nm⃗|(⟨ e_i_k, K_ζ,t^(m_j)e_i_j⟩)_j,k=1^L| ≤ L!∑_m⃗∈(L,n)nm⃗∏_i=1^L |( K_ζ,t^(m_i))|. Note that The bounds from <ref> ensure that the right-hand side of the above equation converges uniformly for ζ∈ [0,]. This completes the proof. The following lemma provides an upper bound for the derivatives of (K_ζ,t^∧ L). For L≥ 2, we have |^n/ζ^n(K_ζ,t^∧ L) | ≤∑_m⃗∈(L,n)nm⃗(#supp(m⃗))!/(L-#supp(m⃗))!∏_i=1^L |(K_ζ,t^(m_i))| where #supp(m⃗)=#{i| m_i>0}. Furthermore, we have ^n/ζ^n[∑_L=2^∞ (-1)^L(K_ζ,t^∧ L)]=∑_L=2^∞ (-1)^L^n/ζ^n(K_ζ,t^∧ L). The proof of the above lemma can be completed employing <ref> and using the argument presented in the proof of <cit.>. The only thing that we need to check is that there exists =(n,t)>0 such that for all m⃗∈(L,n) and for all ζ∈ [0,] we have |( K_ζ,t^(m_i))| ≤. This last inequality is immediate from the trace bounds in <ref>, completing the proof. Given the above lemma, in view of the estimate in (<ref>), we now need to estimate integrals of ζ^-α times products of traces of different derivative kernels. This is achieved in the following lemma. Fix t>1 and L≥ 2. There exists a constant =(p,q)>0, such that for all m⃗∈(L,n) we have ∫_0^ζ^-α∏_i=1^L |(K_ζ,t^(m_i))| ζ≤ ^L· t · e^Υ(p)t-1t. Since q^(2+4r)t≥ from (<ref>), we shall instead provide a bound for A := ∫_0^q^(2+4r)tζ^-α∏_i=1^L |(K_ζ,t^(m_i))| ζ. Let us set w:=#supp(m⃗) and assume m_1,m_2,…,m_w>0. We use the substitution ζ =q^σ t so that ζ=tq^σ tlog q σ. Using the bounds from <ref> we have A =∫_2+4r^∞ q^-σα t+σ t∏_i=1^L |(K_q^σ t,t^(m_i))| σ≤^L · t∫_2+4r^∞ e^M(σ) tσ where M(σ):=(α-L+w-1)σ +(L-w)U_(min{σ,x_^*})+∑_j=1^w U_m_j(min{σ,x_m_j^*}), and is defined in (<ref>). We shall now estimate the integral by considering several cases. * When σ∈ (2+4r,x_^*), we have M(σ) =(α-L+w-1)σ +(L-w)U_(σ)+∑_j=1^w U_m_j(σ) =(α-L+w-1)σ +(L-w)[σ-Φ_+(σ)]+∑_j=1^w [m_jσ-Φ_+(σ)] =pσ -LΦ_+(σ) = L · U_p/L(σ), where we used the fact m_1+⋯+m_w=n=s-α+1 and p=s. By properties of the U_v function (see <ref>) we have U_p/L(σ) ≤ 4sinh(p/2L). Thus we have ∫_2+4r^x_^* e^M(σ)tσ≤ x_^* · e^4Lsinh(p/2L)t. Set g_1(L):=4Lsinh(p/2L). Observe that g_1'(L)=4cosh(p/L)(tanh(p/L)-p/L)<0. 
This implies g_1(L) is strictly decreasing. As L≥ 2, we have g_1(L)<g_1(1)=4sinh(p/2)=Υ(p). Thus one can find a constant >0 free of L, such that x_^* · e^4Lsinh(p/2L)t≤ e^Υ(p)t-1/t, which is precisely the bound we are looking for. This completes our work for this range of σ. * When w≥ 2, σ≥ x_^*, we have n≥ 2 and M(σ) ≤ (α-1)σ -(L-w)Φ_+(x_^*)+∑_j=1^w U_m_j(x_m_j^*) ≤ (α-1)σ +∑_j=1^w 4sinh(m_j/2). Using the fact that ∑_i=1^ksinh(a_i) ≤sinh(∑_i=1^k a_i) for a_1,…,a_k>0, we get that ≤ (α-1)σ +4sinh((n-m_1)/2)+4sinh(m_1/2) ≤ (α-1)σ+4sinh((n-1)/2)+4sinh(/2) = (α-1)σ+Υ((n-1))+Υ(), where the last inequality follows from the fact that for m_1∈ [1,n-1], sinh((n-m_1)/2)+sinh(m_1/2) is maximized at m_1=1 or n-1. Using the above bound, we get ∫_x_^*^∞ e^M(σ)tσ ≤· (1-α)^-1· e^(α-1)x_^*t· e^Υ((n-1))t+Υ()t ≤· (1-α)^-1· e^[Υ((n-1))+Υ()+x_^*(α-1)]t ≤· (1-α)^-1· e^Υ(p)t· e^[Υ()+Υ((n-1))+x_^*(α-1)-Υ(p)]t. Recall p=(n+α-1). Set g_2(α):=Υ()+Υ((n-1))+x_^*(α-1)-Υ((n+α-1)). As n≥ 2, we have g_2'(α)=x_^*-Υ'((n+α-1)) =2[cosh(/2)-cosh((n+α-1)/2)] ≤ 0. This forces g_2(α)≤ g_2(0) =Υ()-x_^*<0 by property (d) in <ref>. Thus in summary there exists a constant >0 such that e^g_2(α)t≤ e^-1/t. Plugging this back in (<ref>) leads to the desired estimate. This completes our work in this step. * When w=1, σ≥ x_n^*, we have M(σ)=(α-L)σ +(L-1)U_(x_^*)+U_n(x_n^*). We thus have ∫_x_n^*^∞ e^M(σ)tσ ≤· (L-α)^-1· e^(α-L)x_n^*t· e^((L-1)U_(x_^*)+U_n(x_n^*))t. Note that the exponent above can be bounded from above as α x_n^*+L(x_^*-x_n^*)-(L-1)Φ_+(x_^*)-x_^*+U_n(x_n^*) ≤α x_n^*+2(x_^*-x_n^*)-Φ_+(x_^*)-x_^*+U_n(x_n^*) = Υ(p)+g_3(α), where g_3(α)=(α -2)x_n^*+Υ()+Υ(n)-Υ((n+α-1)). We have g_3'(α)>0. So, g_3(α) ≤ g_3(1)=-x_n^*+Υ()≤Υ()-x_^*<0. Hence, just as in part (b), we have shown that there exists a constant >0 such that e^g_3(α)t≤ e^-1/t. Plugging this back in (<ref>) leads to the desired estimate. This completes our work in this part. * When w=1, σ∈ (x_^*,x_n^*), and n+α-1 ≤ L, we have M(σ)=(α-L)σ +(L-1)U_(x_^*)+U_n(σ). In this range, we have M'(σ)=(n+α -L)-Φ_+'(σ) ≤ -Φ_+'(x_^*)+=0, which forces M to be decreasing. Thus, M(σ) ≤ M(x_^*)=(α-L)x_^* +(L-1)U_(x_^*)+U_n(x_^*) = (n+α -1)x_^* -LΦ_+(x_^*) ≤ sx_^* -2Φ_+(x_^*). Let us set g_4(s):=Υ(s)-sx_^*+2Φ_+(x_^*). Note that g_4'(1)=0 and g_4”(s) ≥ 0. Thus, g_4(s) ≥ g_4(1)=Υ()-x_^*+2Φ_+(x_^*)=Φ_+(x_^*). So, sx_^* -2Φ_+(x_^*) ≤Υ(p)-Φ(x_^*). As Φ_+(y)>0 for all y>0, this forces ∫_x_^*^x_n^* e^M(σ)tσ≤ x_n^* e^Υ(p)t· e^-Φ_+(x_^*)t≤ e^Υ(p)t-1/t, for some >0 depending only on q (as =log q^-1). This completes our work for this part. * When w=1, σ∈ (x_^*,x_n^*), and n+α-1 > L, we have M(σ) =(α-L)σ +(L-1)U_(x_^*)+U_n(σ) = U_(n+α-L)(σ)+(L-1)U_(x_^*). In the above range, M(σ) attains a maximium at x_(n+α-L)^*. Thus setting x=L-1 we get M(σ) ≤ M(x_(n+α-L)^*)=Υ((s-x))+xΥ():=g_3(x). Note that g_5'(x)=-Υ'((s-x))+Υ() ≥ -Υ'()+Υ()<0 as Υ'(p) is increasing and s-x=n+α-L≥ 1. So, g_5(x) ≤ g_5(1)=Υ((s-1))+Υ(). Now Υ((s-1))-Υ(s) is decreasing in s, hence Υ((s-1))-Υ(s) ≤Υ()-Υ(2). Hence g_5(x) ≤ 2Υ()-Υ(2)+Υ(s)=2Υ()-Υ(2)+Υ(p). Since 2Υ()-Υ(2)<0, we have that ∫_x_^*^x_n^* e^M(σ)tσ≤ x_n^* e^Υ(p)t· e^(2Υ()-Υ(2))t≤ e^Υ(p)t-1/t, completing our work for this part. Combining all the above parts, we have thus shown that ∫_2+4r^∞ e^M(σ)tσ≤ e^Υ(p)t-1/t for some >0 depending on p and q. Inserting this bound back in (<ref>) and adjusting the constant , we get that A ≤^L· t· e^Υ(p)t-1/t. Again, as q^(2+4r)t≥, this establishes <ref>. Combining preliminary results enumerated in the lemmas above, we are now ready to prove <ref>. 
In view of the estimates from <ref> and <ref>, we have ≤· t· e^Υ(p)t-1/t∑_L=2^∞∑_m⃗∈(L,n)nm⃗(#supp(m⃗))!^L/(L-#supp(m⃗))!, where C>0 depends only on p. The double sum is computed in proof of Proposition 4.7 in <cit.>. In particular, the double sum is finite and its value depends only on and n. Thus. adjusting the constant further, we arrive at the desired estimate in (<ref>). This completes the proof. § LOWER-TAIL LDP FOR Q-PNG In this section, we prove the lower-tail large deviation principle for the height function of q-PNG: <ref>. In <ref>, we introduce continual young diagrams and several important functionals and discuss a few basic properties of them. In <ref>, we establish continuity-type results for these functionals. In <ref> we use probabilistic arguments to derive the precise lower-tail rate function for the largest row of the shifted cylindric Plancherel measure and sharp estimates for the lower-tail of the unshifted ones. We discuss log-concavity properties of Schur polynomials in <ref> and prove convexity of ℱ defined in (<ref>). We complete the proofs of our main theorems related to the lower-tail in <ref>. §.§ Preliminaries A key ingredient for the study of the lower-tail rate function of the q-PNG is the relation (<ref>) which matches the probability distribution of a random shift of 𝔥 with a multiplicative expectation of the Poissonized Plancherel measure. For this reason, in this subsection, we recall results concerning the asymptotics of the Plancherel measure. We introduce the set of continual Young diagrams 𝒴 = {φ:[0,+∞) → [0,+∞): φ is weakly decreasing and φ_L^1 < +∞} and its subset of shapes with unit integral 𝒴_1 = {ϕ∈𝒴 : ϕ_L^1 =1 }. Given a continual Young diagram φ we define its representation in Russian notation φ, which is the function φ(u) = v ⟺ v-u/√(2) = φ( v+u/√(2)). In words, the function φ is obtained rotating by 45^∘ counterclockwise the graph of φ and as such have φ(x) ≥ |x| for all x∈ℝ; see <ref>. Motivated by this we further define φ(x) = φ(x) - |x|. Translating the properties of the function φ, the function φ is easily seen to belong to the set 𝒴 = { h ∈ L^1(ℝ): h is absolutely continuous, nonnegative, sign(x) h'(x) ∈ [-2,0] a.e.}. For a shape ϕ∈𝒴_1 we define its inverse as ϕ^-1(y):= inf{x≥ 0 |ϕ(x)≤ y }. Note that ϕ^-1∈𝒴_1. We define the Hook integral of ϕ as I_hook(ϕ) := ∫_0^∞ x ∫_0^ϕ(x) y log𝗁_ϕ(x,y), where 𝗁_ϕ(x,y) := ϕ(x) + ϕ^-1(y) - x- y. In the seminal paper <cit.> Vershik and Kerov proved through a series of algebraic manipulations that the hook integral possesses the equivalent representation I_hook (ϕ) = - 1/2 + 1/2ϕ_VKLS - ϕ_1^2 + 2 ∫_|y|>√(2)ϕ(y) arccosh| y/√(2)| y, where ϕ_VKLS is the Logan-Shepp-Vershik-Kerov optimal shape ϕ_VKLS (x) = 2/π[√(2-x^2)+x arcsin(x/√(2))]-| x| for |x|≤√(2), 0 for |x|>√(2), ·_1^2 denotes the square of the Sobolev H^1/2 norm ψ_1^2 := ∫_ℝ |ω| |ψ̂(ω)|^2 ω where ψ̂ is the Fourier transform of ψ. An analogous representation to (<ref>) of the hook functional I_hook was given in <cit.>; see also <cit.>. The following proposition states the convergence, at the exponential scale, of the Plancherel measure to the hook functional. Such convergence is uniform with respect to the limit shape. Recall f^λ from (<ref>). As n→∞, uniformly over all partitions λ⊢ n we have 1/nlog( 1/n!(f^λ)^2 )= -1 - 2 I_hook(ϕ_λ)+O(log n/√(n)), where ϕ_λ∈𝒴_1 is defined by ϕ_λ(x) = 1/√(n)λ_⌊ x √(n)⌋+1. In order to state our theorem we fix some more notation. 
For a left continuous weakly decreasing function ϕ : ℝ_+ →ℝ_+ with unit integral and q ∈ (0,1) we define 𝒱^(q)(x;ϕ) = ∫_0^∞ [ϕ(y) - y -x]_+ y, where [a]_+=max{a,0} and :=log q^-1. Define also the functional 𝒲^(q)(κ,ϕ;x) = 1+κlogκ + 2κ I_hook (ϕ) + κ𝒱^(q)(x/√(κ);ϕ), κ>0, ϕ∈𝒴_1, x∈, and ℱ(x) := inf_κ > 0inf_ϕ∈𝒴_1{𝒲^(q)(κ,ϕ;x) }. Note that the 𝒲^(q) functional defined above and the 𝒲^(q) functional defined in (<ref>) are essentially the same; the only difference is that the second coordinate of the one defined in the introduction took shapes in 𝒴 with unit integral as input. Thus the function ℱ defined above is precisely the same as the one defined in (<ref>). We shall show in the next subsection that ℱ is the lower-tail rate function for the first row of shifted cylindric Plancherel measure. Presently, we end this subsection by discussing a few properties of 𝒲^(q) and ℱ. For each x∈, the minimizer in the optimization problem in (<ref>) is attained at some κ_*∈ (0,κ^*), where κ^* satisfies 1+κ^* logκ^* - κ^* =. Fix any x∈ and any shape ϕ∈𝒴_1. We first provide upper and lower bounds for the function 𝒱^(q)(x;ϕ). Note that when x≥ 0 we have 0 ≤𝒱^(q)(x;ϕ)= ∫_0^∞ [ϕ(y)-y-x]_+ y ≤∫_0^∞ϕ(y) y =. For x<0, we split the integral into two parts ∫_0^∞ [ϕ(y)-y-x]_+ y = ∫_0^-x [ϕ(y)-y-x]_+ y+∫_-x^∞ [ϕ(y)-y-x]_+ y =x^2/2+∫_0^-xϕ(y) y+∫_-x^∞ [ϕ(y)-y-x]_+ y. Since ϕ is nonnegative, we thus have that ∫_0^∞ [ϕ(y)-y-x]_+ y≥ x^2/2. On the other hand for the upper bound notice that ∫_-x^∞ [ϕ(y)-y-x]_+ y ≤∫_-x^∞ϕ(y) y. Since ϕ integrates to 1, we thus have ∫_0^∞ [ϕ(y)-y-x]_+ y ≤x^2/2+1. Combining the x≥ 0 and x<0 case, we deduce that [-x]_+^2/2≤𝒱^(q)(x;ϕ) ≤( [-x]_+^2/2 + 1 ). Then we can bound from both sides the functional 𝒲^(q) as 1 + κlogκ - κ + [-x]_+^2/2≤𝒲^(q)(κ , ϕ ;x) ≤ 1 + κlogκ + 2 κ I_hook(ϕ) + ( [-x]_+^2/2 + κ), where in the lower bound we also used the fact that I_hook(ϕ) ≥ -1/2. For reference let us also evaluate the functional 𝒲^(q) at the special values κ =1 and ϕ = ϕ_VKLS as 𝒲^(q)(1,ϕ_VKLS;x) ≤( [-x]_+^2/2 + 1 ). Then, for any κ such that 1+κlogκ - κ≥ we have 𝒲^(q)(1,ϕ_VKLS;x) ≤ 1+κlogκ - κ + [-x]_+^2/2≤𝒲^(q)(κ,ϕ;x). This proves that in order for κ to be a minimizer, we must have κ≤κ^*. Let us also show that κ = 0 cannot be a minimizer. For any shape ϕ we have 𝒲^(q)(0,ϕ;x) = 1 + [-x]_+^2/2. On the other hand, we have 𝒲^(q)(κ,ϕ_VKLS;x) ≤ 1 + κlogκ + κ (-1) + [-x]_+^2/2 and the right-hand side can easily be checked to be a decreasing function for κ∈ (0,q), which at κ = 0 equals the right-hand side of (<ref>). This implies that for κ to minimize the functional 𝒲^(q) it needs to be κ >0, completing the proof. If ϕ is bounded with compact support, then for x≪ 0 (large negative values), the two integrals in (<ref>) are 1 and 0 respectively. This forces 𝒲^(q)(κ,ϕ;x) = 1 + κlogκ + 2 κ I_hook(ϕ) + ( x^2/2 + κ). Since ϕ_VKLS is bounded with compact support, we thus have 𝒲^(q)(κ,ϕ_VKLS;x) = 1 + κlogκ + κ (-1) + x^2/2 for x≪ 0. We now discuss a few properties of ℱ that can be deduced from the definition. ℱ is non-negative and weakly decreasing. We have ℱ(x)>0 if and only if x< 2. There exists x_q < 0 such that ℱ(x) = /2 x^2 + (1-q) x≤ x_q. The fact that ℱ is weakly decreasing follows from the definition. Note that 𝒲^(q)(κ,ϕ;x)=(1+κlogκ-κ)+κ(1+2I_hook(ϕ))+κ𝒱^(q)(x/√(κ);ϕ). All the three terms above on the right-hand side are non-negative. Thus ℱ is non-negative. The first two terms are zero if and only if κ=1 and ϕ=ϕ_VKLS. From (<ref>), we deduce that ϕ_VKLS(-√(2))=0 which forces ϕ_VKLS(0)=2. 
Thus, one has that max_y≥ 0{ϕ_VKLS(y)-y}=2. Hence 𝒱^(q)(x;ϕ_VKLS)=0 if and only if x≥ 2. Consequently, ℱ(x)=0 if and only if x≥ 2. We now turn toward the proof of parabolic behavior on the left. For a given ϕ, let A,B,C be the area of the blue, green, and red regions in <ref>, respectively. Then B=∫_-x/√(2)^∞ϕ(y) y, C = x^2/2, ∫_0^∞[ϕ(y)-y-x]_+ y = A+C, 1=∫_0^∞ϕ(y) y=A+B. We see that for x ≪ 0 we have 𝒱^(q)(ϕ;x) = (A+C)=[C+(A+B)-B]= (x^2/2 + 1 - 𝗋(-x/√(2),ϕ) ), where we define 𝗋(M,ϕ) := ∫_M^∞ϕ(y) y . From (<ref>) we see that I_hook (ϕ) ≥ - 1/2 + 2 ∫_y>Mϕ(y) arccosh( y/√(2)) y ≥ -1/2 + 𝗋(M,ϕ) arccosh( M/√(2)). Setting M=-x/√(2κ), which we can assume to grow linearly with x since a minimizing κ remains bounded by <ref>, we have 𝒲^(q)(κ,ϕ;x) = 1+κlogκ + 2κ I_hook (ϕ) + κ(x^2/2 κ + 1 - 𝗋(-x/√(2κ),ϕ) ). ≥ 1+κlogκ + κ(-1 +2 𝗋(-x/√(2κ);ϕ) arccosh(-x/2√(κ)) ) + κ(x^2/2 κ + 1 - 𝗋(-x/√(2κ);ϕ) ) = 1 + ( -1) κ + κlogκ + /2 x^2 + κ𝗋(-x/√(2κ);ϕ) ( 2 arccosh(-x/2√(κ)) - ). It is clear that the above expression grows when κ becomes large and therefore to minimize it we have to keep κ bounded. Then, when -x becomes large enough so that 2 arccosh(-x/2√(κ)) ≥, we can further write 𝒲^(q)(κ,ϕ;x) ≥ 1 + ( -1) κ + κlogκ + /2 x^2=𝒲^(q)(κ,ϕ_VKLS;x), for x≪0 sufficiently large. Here the second equality is due to (<ref>). This implies that ϕ_VKLS minimizes 𝒲^(q)(κ , · ; x) for x negative large enough. Minimizing 𝒲^(q)(κ , ϕ_VKLS ; x) in the parameter κ we obtain, through a straightforward computation (the minimum is attained at κ=q) min_κ>0{𝒲^(q)(κ , ϕ_VKLS ; x) } = /2 x^2 + (1-q), which completes the proof. §.§ Continuity of different functionals In this subsection, we state various continuity properties of different functions introduced in the previous subsection. We first state a continuity property of the Sobolev norm · _1 defined in (<ref>) when restricted to the subspace of functions in 𝒴_1 defined in (<ref>). Let {h_n}_n∈ℕ∈𝒴_1 such that h_n h ∈𝒴_1 in L^1 norm. Let g∈𝒴_1 and assume also that g _1^2, h _1^2, h_n _1^2 < +∞ for all n. Then we have lim_n → +∞ g- h_n _1^2 = g- h _1^2. Fix any u∈𝒴_1. We first claim that u_L^∞≤√(2), u'_L^2≤ 4√(2). We prove the above two inequalities in the two bullet points below. * Note that if f ∈𝒴_1, then the area under f contains the square with corner points (0,0) and (f(0)/√(2),f(0)/√(2)). This forces f(0) ≤√(2). Thus for any u∈𝒴_1, we have |u(y)| ≤√(2). * For the derivative, we have u'(y)^2 ≤ 2|u'(y)| and hence u'_L^2≤ 2u'_L^1 =-2∫_-∞^0 u'(x) x+2∫_0^∞ u'(x) x =4u(0) ≤ 4√(2). Let us now turn towards the proof of <ref>. Set Δ h_n:= h - h_n and v:=g-h. Then we have | g-h _1^2 - g-h_n _1^2 | = | v _1^2 - Δ h_n + v _1^2 | = | ∫_ℝ |ω| ( |v (ω)|^2 - ( Δ h_n(ω) + v(ω) )(Δ h_n(ω) + v(ω))) ω| ≤∫_ℝ| Δ h_n(ω) | ( | ωΔ h_n(ω) | + 2| ωv(ω) | ) ω ≤Δ h_n _L^2( Δ h_n' _L^2 + 2 v' _L^2) ≤Δ h_n _L^2( h_n' _L^2 + 3 h' _L^2+ g' _L^2). By the first bound in (<ref>), we have Δ h_n _L^2≤( h _L^∞ + h_n _L^∞) ·Δ h_n _L^1≤ 2√(2)Δ h_n _L^1. Invoking the second bound in (<ref>) we thus obtain | g-h _1^2 - g-h_n _1^2 | ≤ 80 ·Δ h_n _L^1 , which tends to zero by the hypothesis, completing the proof. We now provide a continuity-type result for the hook functional defined in (<ref>). Fix ϕ∈𝒴_1 such that I_hook(ϕ) < ∞. Then, there exists a sequence of partitions λ^(n) with λ^(n)⊢ n such that I_hook(ϕ_λ^(n)) I_hook(ϕ). Furthermore, ϕ_λ^(n) converges to ϕ in L^1. Fix ε>0. Assume first that the function ϕ is such that ϕ(0),ϕ^-1(0)<+∞ and set a=ϕ(0), b= ϕ^-1(0). 
This implies that the transformed function ϕ defined as in (<ref>) is compactly supported. Define the partition μ_i = ⌊√(n)ϕ( i/√(n) ) ⌋, which, in words is the largest partition to fit below the graph of x→√(n)ϕ(x/√(n)). Let m = |μ| and since ∫_0^+∞√(n)ϕ(x/√(n)) x =n we have 0 ≤ n-m ≤√(2n) (a+b). The second inequality follows from the fact that the length of the graph of the partition μ is contained within a strip of width √(2) from the graph of √(n)ϕ(x/√(n)) and that the length of the latter is at most (a+b)√(n). We can at this point define the partition λ⊢ n by adding n-m boxes to the partition μ. There are many ways to do so. One way is to define the sequence μ^(i) as μ^(0)=μ and μ^(2i+1)=μ^(2i) + 𝕖_r_i for i≥ 0 (μ^(2i))'=(μ^(2i-1))' + 𝕖_c_i for i≥ 1 where 𝕖_k are standard basis vectors and indices r_i,c_i are defined by r_i = min{ r ≥ i : μ^(2i) + 𝕖_r is a partition}, c_i = min{ c ≥ i : (μ^(2i-1))' + 𝕖_c is a partition}. This simple sequential construction is explained by the example below *(white) *(white) (white) *(white) *(white) (white) *(white) *(white) (white) *(white) *(white) *(white) (white) *(white) *(white) *(white) *(white) *(white) *(white) (white) *(white) *(white) (white) *(white) *(white) (white) *(white) *(white) *(white) (white) *(white) *(white) *(white) *(white) *(red) *(red) (white) *(white) (white) *(white) *(white) (white) *(white) *(white) (white) *(white) *(white) *(white) (white) *(white) *(white) *(white) *(white) *(red) *(red) (white) *(white) (white) *(white) *(white) (white) *(white) *(white) (white) *(white) *(white) *(white) *(red) (white) *(white) *(white) *(white) *(white) *(red) *(red) *(red) (white) *(white) (white) *(white) *(white) (white) *(white) *(white) (white) *(white) *(white) *(white) *(red) (white) *(white) *(white) *(white) *(white) *(red) ⋯, where the first partition should be μ and red cells represent cells that are added as we move on with the sequence μ^(i). Then we define λ:= μ^(n-m) and it is clear by construction that ϕ_λ - ϕ_L^1≤2√(2)(a+b)/√(n). This is because the graph of the partition λ stays within a strip of length 2√(2) around the graph of √(n)ϕ(x/√(n)). Then, we have lim_n → + ∞ I_hook(ϕ_λ) = I_hook(ϕ), which follows since lim_n → + ∞Ω - ϕ_λ_1^2 = Ω - ϕ_1^2 and lim_n → + ∞∫_|y|>1ϕ_λ(y) arccosh|y| y = ∫_|y|>1ϕ(y) arccosh|y| y, where the first limit follows from <ref>, while the second uses the fact that ϕ, ϕ_λ are compactly supported. We now would like to remove the assumption that ϕ(0),ϕ^-1(0) are bounded to allow shapes ϕ with possibly infinite tails. Let therefore ϕ be an arbitrary shape in 𝒴_1. For any K>0 we define the truncation ϕ_K(x) := (ϕ(x)∧ K)1_x<K and it is clear that ϕ_K ∈𝒴 and ϕ_K ϕ in L^1. Moreover, since the convergence of ϕ_K to ϕ is monotone we have that lim_K → + ∞ I_hook(ϕ_K) = I_hook(ϕ). Here the monotone convergence was needed to control the convergence of the integral of ϕ_K(y) against arccosh|y|. Let θ_K=ϕ_K and since ϕ_K converges in L^1 norm to ϕ we also have θ_K → 1 as K → +∞. Defining ψ_K (x) = 1/√(θ_K)ϕ_K( √(θ_K) x), we have ψ_K =1 and I_hook(ψ_K) = θ_K I_hook(ϕ_k). Since ψ_K satisfies ψ_K(0),ψ_K^-1(0)<+∞ we can now use the previous part of the proof to find an n large enough and a partition λ^(K)⊢ n such that I_hook(ϕ_λ^(K)) is arbitrarily close to I_hook(ψ_K). Then we have | I_hook(ϕ_λ^(K)) - I_hook(ϕ) | ≤| I_hook(ϕ_λ^(K)) - I_hook(ψ_K) | + | I_hook(ψ_K) - I_hook(ϕ_K) | + | I_hook(ϕ_K) - I_hook(ϕ) | = | I_hook(ϕ_λ^(K)) - I_hook(ψ_K) |+ (1-θ_K) | I_hook(ϕ_K) | + | I_hook(ϕ_K) - I_hook(ϕ) |. 
Finally, choosing K large enough we can make the last two terms smaller than ε/3 and later letting n grow large we can find n_ε such that the first term is smaller than ε/3. This completes the proof of the proposition. The following proposition discusses continuity for the functional 𝒱^(q) defined in (<ref>). Fix y∈, and ϕ∈𝒴_1. Take any sequence {y_n}_n≥1 such that y_n→ y. Take any sequence {ϕ_n}_n≥1 in 𝒴_1 such that ϕ_n→ϕ in L^1. Then, 𝒱^(q)(y_n;ϕ_n) 𝒱^(q)(y;ϕ) Using the fact that |[a]_+-[b]_+|≤ |a-b|, we deduce that ∫_0^∞|[ϕ_n(z)-z-y_n]_+-[ϕ(z)-z-y_n]_+|dz≤∫_0^∞ |ϕ_n(z)-ϕ(z)|dz → 0. Thus it suffices to show ∫_0^∞|[ϕ(z)-z-y]_+-[ϕ(z)-z-y_n]_+| z → 0. Note that |[ϕ(z)-z-y]_+-[ϕ(z)-z-y_n]_+| → 0 pointwise. Set w:=y ∧min{y_n | n≥ 1}. Using the fact that |[a]_+-[b]_+|≤ [a]_+∨ [b]_+ we deduce that |[ϕ(z)-z-y]_+-[ϕ(z)-z-y_n]_+| ≤ϕ(z)+[-z-w]_+ which is integrable. Thus by dominated convergence theorem, we arrive at (<ref>). §.§ Existence of lower-tail rate function ℱ In this section we use probabilistic arguments to show the existence of a lower-tail rate function for the first row of shifted cylindric Plancherel measure (<ref>). The rate function is given by ℱ defined in (<ref>). For the unshifted measure, we shall provide sharp lower-tail estimates for the first row in <ref>. The existence of the rate function for the unshifted ones requires further argument involving convex analysis which we postpone to later subsections. Let be the height function of the q-PNG with intensity Λ=2(1-q) and droplet initial condition. Let λ/ρ∼_𝖼𝖯𝗅𝖺𝗇(t(1-q)), χ∼ q-Geo(q), and S∼Theta(q,1) all independent from . For all x ∈ we have -lim_t→∞1/t^2logℙ((0,t)+χ+S≤ xt)=-lim_t→∞1/t^2logℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))(λ_1+S≤ xt)=ℱ(x). The first equality in (<ref>) is a consequence of <ref>. We focus on proving that the limit exists and is given by ℱ(x). From <ref> (with θ=√(2)(1-q), ζ=1, and s=⌊ xt ⌋), we have ℙ((0,t)+χ+S≤ xt)=𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i=1^∞1/1+q^⌊ xt ⌋+i-λ_i). Hence it suffices to analyze the right-hand side of the above equation. For clarity, we divide the proof into two steps. In Step 1 we prove the theorem assuming a technical estimate (<ref>), which in turn is proven in Step 2. Step 1. Fix any x∈ and >0. Since ℱ is obtained by minimizing 𝒲^(q) (see (<ref>)), get κ_*, ϕ_* (depending on ) such that 𝒲^(q)(κ_*,ϕ_*;x) ≤ℱ(x)+. Due to <ref> we may choose κ_* so that κ_*≤κ^* defined in <ref>. Lower Bound. Fix an M>max{κ^*,ℱ(x),e^2}. Using tail estimates for Poisson random variable X∼𝖯𝗈𝗂(t^2) we have ℙ_𝖯𝗅𝖺𝗇(t)(|λ| >Mt^2)=ℙ_𝖯𝗈𝗂(t^2)(X > Mt^2) ≤ e^-Mt^2. We claim that 𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i=1^∞1/1+q^⌊ xt ⌋+i-λ_i_|λ|≤ Mt^2) ≤ e^-t^2 ℱ(x) + O(t). Combining the above claim with (<ref>), by the choice of our M we see that lim inf_t→∞ -1/t^2log𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i=1^∞1/1+q^⌊ xt ⌋+i-λ_i) ≥ℱ(x). which verifies the lower bound. Let us now focus on proving (<ref>). We expand the expectation of the q-product over the Plancherel measure 𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i=1^∞1/1+q^⌊ xt ⌋+i-λ_i_|λ|≤ Mt^2) ≤𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i=1^∞1/1+q^xt +i-λ_i_|λ|≤ Mt^2) = ∑_n∈∩[0,Mt^2] e^-t^2t^2n/n!∑_λ⊢ n (f^λ)^2/n!∏_i=1^∞1/1+q^x t+i-λ_i. We will produce estimates of the various factors appearing in the summation on the right-hand side of (<ref>). Fix a partition λ with |λ|≤ Mt^2. We claim that ∏_i=1^∞1/1+q^x t+i-λ_i = e^-n 𝒱^(q)(xt/√(n); ϕ_λ) + O(t), ∏_i=1^∞1/1+q^⌊ x t⌋+i-λ_i = e^-n 𝒱^(q)(⌊ xt ⌋/√(n); ϕ_λ) + O(t) where n=|λ| the size of the partition λ and 𝒱^(q) and ϕ_λ are defined in (<ref>) and (<ref>) respectively. The error term O(t) appearing above depends only on M and x. 
We assume (<ref>) for the moment and complete the proof of the theorem. Setting n=κ t^2, from <ref> and the approximation of the density of the Poisson distribution, we have (f^λ)^2/n! = e^-t^2 κ( 1+ 2 I_hook (ϕ_λ)) + O(tlog t), e^-t^2t^2n/n! = e^- t^2 (1-κ + κlogκ) + O(log t) respectively. Combining the above two estimates with (<ref>), we are able to write ≤∑_κ∈1/t^2∩ [0,Mt^2]∑_λ⊢κ t^2 e^-t^2𝒲^(q)(κ,ϕ_λ;x)+O(t) ≤ e^-t^2 ℱ(x)+O(t)∑_n∈∩[0,Mt^2]𝗉_n ≤ e^-t^2 ℱ(x) + O(t). Above, 𝗉_n denotes the number of partitions of n defined in (<ref>). The second inequality uses the fact that ℱ is the minimizer of the 𝒲^(q) functional and the third inequality uses the bound 𝗉_n ≤ e^√(n) from (<ref>). This proves (<ref>). Upper Bound. For the upper bound we set n= ⌊κ_* t^2 ⌋ and take λ^(t)⊢⌊κ_* t^2 ⌋ so that <ref> holds for ϕ=ϕ_*. We use the immediate lower bound: 𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i=1^∞1/1+q^⌊ x t⌋+i-λ_i) ≥ e^-t^2t^2n/n! (f^λ^(t))^2/n!∏_i=1^∞1/1+q^⌊ x t⌋+i-λ_i^(t). Taking logarithms on both sides, dividing by -t^2, and using the asymptotics from (<ref>) and the second equality from (<ref>) we get -1/t^2log𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i=1^∞1/1+q^⌊ x t⌋+i-λ_i) ≤ (1-nt^2+nt^2lognt^2)-1t^2log(1n! (f^λ^(t))^2) + nt^2𝒱^(q)(⌊ x t⌋/√(n);ϕ_λ^(t)))+O(1/t). Since n/t^2→κ_* and ⌊ x t⌋/t→ x, combining <ref> we have that lim sup_t→∞ -1/t^2log𝔼_𝖯𝗅𝖺𝗇(t)( ∏_i=1^∞1/1+q^⌊ x t⌋+i-λ_i) ≤ (1-κ_*+κ_*logκ_*)-κ_*(-1-2I_hook(ϕ_*))+ κ_*𝒱^(q)(x/√(κ_*);ϕ_*) = 𝒲^(q)(κ_*,ϕ_*;x) ≤ℱ(x)+. Since >0 is arbitrary, taking ↓ 0 produces a matching upper bound. Step 2. In this step we prove (<ref>). We shall only prove the first estimate in (<ref>), the argument for the second one is exactly the same. Let us write ∏_i=1^∞1/1+q^x t+i-λ_i = exp{ - ∑_i=1^∞ f(i/√(n)) }, f(y):=log(1+q^xt+y√(n)-λ_⌈√(n) y⌉). We rely on the following two estimates: √(n)∫_0^∞ f(y)ỵ -∑_i=1^∞ f(i/√(n))=O(t), √(n)∫_0^∞ f(y)ỵ -n 𝒱^(q)(xt/√(n); ϕ_λ)=O(1), where the error term O(t) depends on M,x as well. Recall that n=|λ|≤ Mt^2. (<ref>) follows from the above two estimates. Proof of (<ref>). Observe that f is decreasing. Hence the left-hand side of (<ref>) is positive. It thus suffices to look for an upper bound for the same. Observe that √(n)∫_0^∞ f(y) y -∑_i=1^∞ f(i/√(n)) = ∑_i=1^n√(n)∫_(i-1)/√(n)^i/√(n) (f(y)-f(i/√(n))) y = ∑_i=1^∞√(n)∫_(i-1)/√(n)^i/√(n)log[1+q^xt+i-λ_i(q^y√(n)-i-1)/1+q^xt+i-λ_i] y = ∑_i=1^∞∫_0^1 log[1+q^xt+i-λ_i(q^-z-1)/1+q^xt+i-λ_i] z ≤ q^-1∑_i=1^∞q^xt+i-λ_i/1+q^xt+i-λ_i, where in the last line we use the inequality log(1+z)≤ z for each term in the summand. We now split the above sum into two parts: ∑_i=1^∞q^xt+i-λ_i/1+q^xt+i-λ_i =[∑_i=1^k+∑_i=k^∞] q^xt+i-λ_i/1+q^xt+i-λ_i. For the first sum, we bound each term by 1 and get that the sum is at most k. Note that for each i, we have λ_i≤1/i(λ_1+⋯+λ_i) ≤ n/i. Thus, for each term in the second sum, we have q^xt+i-λ_i/1+q^xt+i-λ_i≤ q^xt+i-n/i≤ q^i-kq^xt+k-Mt^2/k. Now if we choose k=⌈ t(√(x^2+4M)-x) ⌉, this ensures xt+k-Mt^2/k ≥ 0. Then the right-hand side is at most O(t). Proof of (<ref>). Using the elementary inequality: [v]_+ ≤log(1+e^v) we obtain √(n)∫_0^∞f(y)ỵ≥ n∫_0^∞ [ϕ_λ(y)-y-xt/√(n)]_+ỵ. We thus focus on proving √(n)∫_0^∞f(y)ỵ≤ n∫_0^∞ [ϕ_λ(y)-y-xt/√(n)]_+ỵ+O(1). For each k≥ 1, define a_k:=log(1+q^xt+k-λ_k) and b_k:=log(1+q^xt+k-1-λ_k). Note that b_1≥ a_1≥ b_2 ≥ a_2 ≥ b_3 ≥ a_3 ≥⋯. Making the change of variable u=log(1+q^xt+y-λ_k) we get √(n)∫_(k-1)/√(n)^k/√(n)f(y)ỵ = ∫_k-1^k log(1+q^xt+y-λ_k)ỵ = 1/∫_a_k^b_kue^u/e^u-1ụ = 1/b_k^2-a_k^2/2+1/∫_a_k^b_ku/e^u-1ụ. 
Note that 0≤ b_k-a_k ≤, and ∑_k≥ 1∫_a_k^b_ku/e^u-1ụ≤∫_0^∞u/e^u-1ụ=π^2/6. Summing over k≥ 1 on both sides of (<ref>) and applying these inequalities we get √(n)∫_0^∞ f(y)ỵ≤π^2/6 +1/2∑_k≥ 1 (b_k+a_k). We now partition the index set ℕ into three sets: I_1:={k∈ℕ : xt+k-λ_k≤ 0}, I_2:={k∈ℕ : xt+k-1-λ_k≥ 0}, I_3:=ℕ∖ (I_1∪ I_2). Note that the cardinality of I_3 is at most 1, and if k∈ I_3, we have a_k,b_k ≤log(1+q^-1). For I_2, using the log(1+x)≤ x inequality, we observe that ∑_k∈ I_2 a_k = ∑_k∈ I_2log(1+q^xt+k-λ_k) ≤∑_k∈ I_2 q^xt+k-λ_k≤1/1-q, and ∑_k∈ I_2 b_k ≤1/1-q similarly. For I_1, using the log(1+x)≤ x inequality, we note that ∑_k∈ I_1 [a_k-(λ_k-k-xt)] = ∑_k∈ I_1log(1+q^λ_k-k-xt) ≤∑_k∈ I_1 q^λ_k-k-xt≤1/1-q, and ∑_k∈ I_2 b_k-(λ_k-k+1-xt) ≤1/1-q similarly. Inserting these bounds back in the right-hand side of (<ref>) we get √(n)∫_0^∞ f(y)ỵ ≤ O(1)+η∑_k∈ I_11/2((λ_k-k+1-xt)+(λ_k-k-xt)) =O(1)+η∑_k∈ I_11/2((λ_k-k+1-xt)^2-(λ_k-k-xt)^2) =O(1)+η∑_k∈ I_1√(n)∫_(k-1)/√(n)^k/√(n) (λ_⌊ y√(n)⌋ +1-y√(n)-xt)ẓ ≤ O(1)+nη∫_0^∞ [ϕ_λ(y)-y-xt/√(n)]_+ỵ. This proves (<ref>), completing the proof. Next, we provide an approximation for the lower-tail rate function of the cylindric Plancherel measure that will be useful throughout. For t>0 and a≥ 0 we define 𝒯_t(a) := -1/t^2log{sup_ρ⊂λ λ_1 ≤ a tℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))(λ/ρ) }. As a convention we set 𝒯_t(a):=∞ for a<0. For each t>0, 𝒯_t(a) is nonnegative and weakly decreasing in a. Moreover, for all t>0 we have 𝒯_t(0)=1-q. The fact that 𝒯_t is nonnegative and weakly decreasing follows immediately from the definition of 𝒯_t(a). Notice that sup_ρ⊂λ λ_1=0ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))(λ/ρ) = ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))(∅/∅)=e^-t^2(1-q). Taking -1/t^2log of the left- and right-hand sides of the above equality we obtain 𝒯_t(0)=1-q. There exists a constant =(q)>0 such that for all a≥ 0 we have e^-t^2𝒯_t(a)≤ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (λ_1 ≤ at) ≤ e^-t^2 𝒯_t(a) + · t. The lower bound in (<ref>) is obvious from the definition of 𝒯_t(a). We focus on the upper bound. Let us set θ:=1/2 so that e^θ=q^-1/2. Set M:=max{3,3θ^-1}. By a union bound we have ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (λ_1 ≤ at) ≤ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (|λ / ρ| > Mt^2)+ ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (|ρ| > Mt^2) +ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (λ_1 ≤ at, |λ/ρ|<Mt^2,|ρ|<Mt^2). We shall now bound each of the three terms above separately. Recall that by (<ref>) , |λ/ρ|∼𝖯𝗈𝗂(t^2(1-q)), hence Markov inequality yields ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (|λ / ρ| > Mt^2) ≤ e^-Mt^2_𝖯𝗈𝗂(t^2(1-q))[e^|λ/ρ|]=e^-Mt^2+t^2(1-q)(e-1)≤ e^-t^2. Furthermore by (<ref>) we have ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (|ρ| > Mt^2) ≤ e^-Mθ t^2[e^θ |ρ|] ≤exp(-Mθ t^2+t^2(1-q)^2√(q)/1-√(q)) ≤ e^-t^2 For the last term in (<ref>) observe that ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (λ_1 ≤ at, |λ/ρ|<Mt^2,|ρ|<Mt^2) ≤ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (λ_1 ≤ at, |λ|,|ρ|<2Mt^2) = ∑_ρ⊂λ λ_1 ≤ at, |λ|,|ρ|≤ 2Mt^2ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (λ / ρ) ≤ e^-t^2𝒯_t(a)(∑_n∈∩[0,2Mt^2]𝗉_n)^2 ≤ e^-t^2𝒯_t(a)+ t. where in the last line we used the well known 𝗉_n≤ e^√(n) bound. Inserting the above bound along with the bounds in (<ref>) and (<ref>) back in (<ref>) yields ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (λ_1 ≤ at) ≤ e^-t^2𝒯_t(a)+ t+( +1)e^-t^2. Since 𝒯_t(a) ≤ 1-q by <ref>, e^-t^2𝒯_t(a) dominates e^-t^2. Thus adjusting the constant we derive the upper bound in (<ref>). This completes the proof. We have lim_t→∞𝒯_t(x)=0 for all x≥ 2. Utilizing the upper bound in (<ref>) and the relation (<ref>) (θ=√(2)(1-q) and x=0) we deduce e^-t^2𝒯_t(2)+ t≥(λ_1≤ 2t) ≥(χ=0)((0,t)≤ 2t)=(q;q)_∞·((0,t)≤ 2t). Taking -1/t^2log both sides and then taking lim sup_t→∞ we obtain lim sup_t→∞𝒯_t(2) ≤lim sup_t→∞ -1/t^2log((0,t)≤ 2t). 
From the fluctuation result of q-PNG height function <cit.>, we have that lim_t→∞((0,t)≤ 2t) =(TW_GUE≤ 0)>0. As 𝒯_t is nonnegative, this implies lim_t→∞𝒯_t(2)=0. The conclusion for general x≥ 2 follows from monotonicity. §.§ Convexity of rate function ℱ The scope of this subsection is to prove convexity of the lower-tail rate function ℱ for the edge of the shift-mixed cylindric Plancherel measure. The starting point of our argument is the celebrated Okounkov inequality. Indeed, Schur functions enjoy remarkable log-concavity properties, a version of which was first conjectured by Okounkov in <cit.>. This conjecture was later proved and refined in <cit.> by Lam, Postnikov, and Pylyavskyy. We report this result next, and then use it to establish a certain asymptotic midpoint convexity of the family of functions 𝒯_t in <ref>. (<cit.>) For any pair of skew-partitions λ/μ,ν/ρ we have s_⌈λ+ν/2⌉ / ⌈μ+ρ/2⌉ s_⌊λ+ν/2⌋ / ⌊μ+ρ/2⌋≥_s s_λ/μ s_ν/ρ. where for any two symmetric functions A, B, A ≥_s B means that the difference A-B possesses an expansion in the basis of Schur functions with positive coefficients. Here the operations +,/2,⌊·⌋,⌈·⌉ on partitions are performed coordinate-wise. The following proposition establishes a certain asymptotic midpoint convexity of the family of functions 𝒯_t. Let a, a' ∈. Then, for all ε>0, there exists t_ε such that for all t>t_ε we have 𝒯_t( a+a'/2 +1/t) ≤1/2( 𝒯_t(a) + 𝒯_t(a') ) + ε. As 𝒯_t(x)=+∞ for x<0, we may assume a,a'≥ 0. Without loss of generality assume 0≤ a≤ a'. By the Okounkov's inequality (<ref>) we have, for any pair of skew partitions λ/μ, ν/ρ we have max{(s_⌈λ+ν/2⌉ / ⌈μ+ρ/2⌉)^2, (s_⌊λ+ν/2⌋ / ⌊μ+ρ/2⌋)^2 }≥ s_λ/μ s_ν/ρ, whenever the Schur functions are evaluated at some positive specialization. Let λ+ν/2 / μ+ρ/2 be the skew partition that maximizes the left-hand side of (<ref>) and set Δ := |μ+ρ/2| - |μ|+|ρ|/2. Taking the exponential specialization with parameter γ:=t(1-q) in all the Schur functions (see (<ref>) and (<ref>)) we obtain that ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))( λ+ν/2/ μ+ρ/2) = e^-t^2(1-q) q^|μ|/2+|ρ|/2( s_λ+ν/2 / μ+ρ/2)^2 q^Δ ≥ e^-t^2(1-q) q^|μ|/2 + |ρ|/2 s_λ/μ s_ν/ρ q^Δ = √(ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))( λ / μ) ) √(ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))( ν / ρ) ) q^Δ. Note that λ_1≤ at and ν_1≤ a't implies that the first row of λ+ν/2 has length ≤(a+a')t/2+1. Taking -1/t^2log of both sides of the previous inequality and optimizing over the choice of λ/μ with λ_1 ≤ a t and ν/ρ with ν_1 ≤ a't we find 𝒯_t( a+a'/2 +1/t) ≤ -1/t^2logℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q))( λ+ν/2/ μ+ρ/2) ≤1/2( 𝒯_t(a) + 𝒯_t(a') ) + log q^Δ/t^2, Since Δ≤ 1, taking t large enough, we get the desired result. For the next proposition, we introduce the operation of infimal convolution between two real-valued functions g,h as (g ⊕ h)(x) := inf_y ∈ℝ{ g(y) + h(x-y) }. The infimal convolution is the analog of the integral convolution g*h in the (inf,+) algebra and it is a common object in convex analysis. We use the notion of infimal convolution in the following result. Let x ∈ℝ and define the function g(x)=x^2/2. Then we have ℱ(x) = lim_t →∞( g ⊕𝒯_t )(x). By <ref>, ℱ(x)=0 for x≥2. On the other hand, we have for x≥2, 0≤lim inf_t→∞(g ⊕𝒯_t )(x)≤lim sup_t→∞(g ⊕𝒯_t )(x)≤ g(0)+lim_t→∞𝒯_t(x)=0 by <ref>. Hence (<ref>) holds for x≥2. Let us fix any x<2. Suppose λ/ρ∼_𝖼𝖯𝗅𝖺𝗇(t(1-q)) and S∼Theta(q;1). For each i≥ 0, let us set y_i := argmax{ℙ(S = yt) ℙ(λ_1 ≤ (x-y)t) : y∈1tℤ∩ [x-i,x] }, and define (g⊕_i𝒯_t)(x):=inf_y∈1/tℤ∩[x-i,x]{g(y)+𝒯_t(x-y)}. We shall only consider y_2 and y_3 in our proof. 
Observe that ℙ(λ_1 + S ≤ xt) ≥(S∈ [xt-3t,xt],λ_1 + S ≤ xt) ≥ℙ(S = y_3t) ℙ(λ_1 ≤ (x-y_3)t), and ℙ(λ_1 + S ≤ xt) ≤(S∈ [xt-2t,xt],λ_1 + S ≤ xt) +P(S≤ (x-2)t) ≤ 2t·ℙ(S = y_2t) ℙ(λ_1 ≤ (x-y_2)t)+P(S≤ (x-2)t). Using (<ref>) and the explicit law (<ref>) (with ζ=1) of the random variable S we obtain the estimates ℙ(S = y_i t) ℙ(λ_1 ≤ (x-y_i)t) = e^-t^2 (g ⊕_i 𝒯_t )(x) + O(t), ℙ(S ≤ (x-2)t) = e^-t^2 g(x-2) + O(t). Inserting these estimates back in (<ref>) and (<ref>), we obtain e^-t^2 (g ⊕_3 𝒯_t )(x) + O(t)≤ℙ(λ_1+S≤ xt) ≤ e^-t^2 (g ⊕_2 𝒯_t )(x) + O(t) + e^-t^2 g(x-2) + O(t). From <ref> we know that λ_1+S satisfies a lower-tail LDP with speed t^2 and rate function ℱ. Taking the -1/t^2log of all terms in the previous chain of inequality and letting t tend to +∞, we thus obtain lim sup_t→∞min{( g ⊕_2 𝒯_t )(x), g(x-2) }≤ℱ(x) ≤lim inf_t→∞( g ⊕_3 𝒯_t )(x). Note that (g⊕𝒯_t)(x)≤ (g⊕_2 𝒯_t)(x). Since (g⊕𝒯_t)(x) ≤ g(x-2)+𝒯_t(2) and 𝒯_t(2) → 0 as t tends to ∞ via <ref>, from the first inequality in (<ref>) we deduce that lim sup_t→∞( g ⊕𝒯_t )(x) ≤ℱ(x). For x≤ 2, we claim that lim_t→∞ |(g⊕_3 𝒯_t)(x)-(g⊕𝒯_t)(x)|=0. Owing to the second inequality in (<ref>), the above claim forces ℱ(x) ≤lim inf_t→∞ (g⊕𝒯_t)(x) for x≤ 2. Combining this with (<ref>) verifies (<ref>) for x≤ 2. Thus we are left to show (<ref>) for x≤ 2. Fix any ε>0 and x≤ 2. Since g(y)→∞ as |y|→∞ and sup_t>0, y≥ 0 |𝒯_t(y)| ≤ 1-q, we may find a sequence {z_t}_t such that sup_t |z_t|<∞ and g(z_t)+𝒯_t(x-z_t)-(g⊕𝒯_t)(x) ≤. Clearly, z_t≤ x for all t. Let z be any limit point of the sequence {z_t}_t. We have g(z_t)+𝒯_t(x-z_t)-ε≤ (g⊕𝒯_t)(x) ≤ g(x-2)+𝒯_t(2). Taking subsequential limit and using <ref> we obtain g(z) ≤ g(x-2). Since x<2, we have z≥ x-2. Note that 𝒯_t(t^-1⌊ tz_t⌋)=𝒯_t(z_t). Since all limit points of {z_t} are in [x-2,x] for all enough t we can ensure t^-1⌊ tz_t⌋∈1/t∩[x-3,x]. Thus, (g⊕_3𝒯_t)(x)-g(⊕𝒯_t)(x) ≤ |g(z_t)-g(t^-1⌊ tz_t⌋)|+ε. Taking lim sup_t→∞ on both sides and noticing that is arbitrary, we arrive at (<ref>). This completes the proof. Armed with the asymptotic midpoint convexity of 𝒯_t from <ref> and the pointwise convergence result from <ref>, we can now prove certain properties of ℱ. For completeness, we remind the reader of the notion of proper and closed functions. We call a function f:→ [-∞,+∞] proper if f(x)>-∞ for all x∈ and f(x)<∞ for some x∈. We call f:→ [-∞,+∞] to be closed if {x :f(x)≤α} is a closed set for all α∈. The lower-tail rate function ℱ is real-valued, convex, and continuous on the entire real line. The rate function ℱ is weakly decreasing and to prove its convexity it is sufficient to show that ℱ is midpoint convex. For this we take x,x' ∈ℝ and we will show that ℱ( x+x'/2) ≤1/2(ℱ(x)+ℱ(x')). We are going to use the characterization (<ref>) of ℱ as a limit of g ⊕𝒯_t. By the definition of infimal convolution, fixed x,x' and an arbitrary small number ε>0 we can find u,v,u',v' such that u+v=x, u'+v'=x' and g(u)+𝒯_t(v) ≤(g ⊕𝒯_t ) (x) + ε, g(u')+𝒯_t(v') ≤(g ⊕𝒯_t ) (x') + ε. Now let u” = (u+u')/2 and v”=(v+v')/2. Then, by <ref> we have, for t large enough 𝒯_t(v”+1t) ≤1/2( 𝒯_t(v) + 𝒯_t(v') ) + ε. Clearly we also have g(u”) ≤1/2(g(u) + g(u')) and noticing that u”+v” = 1/2(x+x') we have ( g ⊕𝒯_t ) (x+x'/2) ≤ g(u”-1t) + 𝒯_t(v”+1t) = g(u”-1t)-g(u”)+g(u”) + 𝒯_t(v”+1t) ≤ g(u”-1t)-g(u”)+1/2(g(u) + g(u')) + 1/2( 𝒯_t(v) + 𝒯_t(v') ) + ε ≤ g(u”-1t)-g(u”)+1/2( (g ⊕𝒯_t ) (x) + ε + (g ⊕𝒯_t ) (x') + ε) + 2ε. Now, using (<ref>) and the fact that g is continuous we obtain that ℱ(x+x'/2) ≤1/2(ℱ(x)+ℱ(x'))+3. 
Since ε is arbitrary, this proves the midpoint convexity of ℱ. In addition to convexity, since ℱ is non-negative, weakly decreasing, and has parabolic behavior for x≤ x_q via <ref>, it follows that ℱ(x)∈ for all x∈. Finally, it is well known that a real-valued convex function is continuous. §.§ Proof of <ref> and <ref> In this section we prove <ref>. In <ref> we derived the lower-tail rate function of (0,t)+χ+S and in <ref> we obtain sharp estimates for the lower-tail probability of (0,t)+χd=λ_1 in terms of 𝒯_t. We would first like to show that 𝒯_t converges pointwise. This will enable us to show the existence of the lower-tail rate function of (0,t)+χ. Using a deconvolution lemma (<ref>), we can then obtain the lower-tail rate function of (0,t). To show the pointwise convergence of 𝒯_t, the following equicontinuity-type result is crucial. For any ε>0, there exists δ>0 and t_ε>0 such that for all t≥ t_ε we have |𝒯_t(x)-𝒯_t(y)|≤ε, for all x,y∈[0,2] with |x-y|≤δ. Let x<y and assume y-x=δ. By the midpoint convexity stated in <ref>, for any fixed ε', there exists t_ε' such that 2 𝒯_t(x) ≤𝒯_t(x-δ-1t) + 𝒯_t(x+δ) + ε', for any t>t_ε', which implies 𝒯_t(x) - 𝒯_t(x+δ) ≤𝒯_t(x-δ-1t) - 𝒯_t(x) + ε'. Consider a non-negative integer k such that kδ+k(k+1)/2t≤ x < (k+1)δ+(k+1)(k+2)/2t. Then, iterating (<ref>) we obtain 𝒯_t(x)-𝒯_t(y) ≤𝒯_t(x-kδ-k(k+1)/2t)-𝒯_t(x-(k-1)δ-k(k-1)/2t) + kε' ≤𝒯_t(0) - 𝒯_t(2δ+2k+1/t) + kε'. Next, we estimate the term 𝒯_t(0) - 𝒯_t(2δ+2k+1/t). Combining (<ref>), <ref> and <ref>, we know that there exists x_q such that ℱ(x_q) = 𝒯_t(0) + g(x_q) = lim_t → +∞inf_y∈ [0,2]{𝒯_t(y) + g(x_q-y) }, where g(y)= y^2/2. This implies that for any fixed ε” we can pick t_ε” such that, for all t>t_ε” -ε”≤inf_y∈ [0,2]{𝒯_t(y) + g(x_q-y) } - 𝒯_t(0) - g(x_q) ≤ε”. Then, for any y∈[0,2] we have 0≤𝒯_t(0)- 𝒯_t(y) ≤ε” + g(x_q-y)-g(x_q) = ε” + y( y-2x_q ) ≤ε” + M y, where M=-2x_q+2. Combining the estimates (<ref>), (<ref>) we arrive at the bound 0≤𝒯_t(x)-𝒯_t(y) ≤ε” + 2Mδ+M(2k+1)/t + k ε', which holds for any t>max{ t_ε', t_ε”}. It is now clear that the right-hand side of (<ref>) can be made arbitrarily small, since k<2/δ and ε', ε” are independent of δ. Moreover, we can also allow |x-y|<δ using the fact that 𝒯_t is weakly decreasing. This completes the proof. We now state two real analysis lemmas that will allow us to deduce the lower-tail rate functions first for (0,t)+χ and then for (0,t). Let h_n:ℝ→ [0,+∞] be a family of weakly decreasing functions such that * h_n(x)=+∞ for x<0, and there exists M> such that h_n(x)∈[0,M] for x≥ 0 and sup_x≥ 2 h_n(x) → 0 as n→∞. * For all ε>0, there exists δ>0 and n_>0 such that for all n≥ n_ and for all x,y ∈ [0,2] with |x-y|≤δ we have |h_n(x)-h_n(y)|≤ε. * Every subsequential limit of {h_n} is convex. Let g(x)=x^2/2. Assume that (h_n ⊕ g)(x) converges pointwise to a proper, lower-semicontinuous convex function f(x). Then h_n(x) converges pointwise to h(x)=(f⊖ g)(x) := sup_y∈{ f(x-y) - g(y) }. Moreover we have f=g⊕ h, the function h is continuous on [0,∞), and the function f is differentiable with derivative f' being -Lipschitz. Fix any subsequence (n_k)_k∈. Although h_n's are not given to be continuous, one can use the first two conditions on {h_n} and follow the proof of Arzela-Ascoli theorem verbatim to extract a (uniformly) converging subsequence h_n_k_ℓ h [0,2]. We define h(x)=+∞ for x<0 and h(x)=0 for x>2. Clearly, h is continuous by the equicontinuity type hypothesis on {h_n}_n≥ 1. We claim that (g⊕ h_n_k_ℓ) → (g⊕ h) pointwise. Fix any x, v∈. 
Observe that g(v)+h(x-v)=lim_ℓ→∞ g(v)+h_n_k_ℓ(x-v) ≥lim sup_ℓ→∞ (g⊕ h_n_k_ℓ)(x). Taking infimum over v above on both sides, we get (g⊕ h)(x) ≥lim sup_ℓ→∞(g⊕ h_n_k_ℓ)(x). For the other way, suppose v_ℓ:=inf g⊕ h_n_k_ℓ(x). By the conditions on h_n, it is clear that sup_ℓ |v_ℓ|< ∞. Passing through a subsequence we may assume v_ℓ→ v for some v ∈. Then utilizing the uniform convergence for h_n_k_ℓ we obtain lim inf_ℓ→∞ (g⊕ h_n_k_ℓ)(x) = lim inf_ℓ→∞ [g(v_ℓ)+h_n_k_ℓ(x-v_ℓ)]= g(v)+h(x-v) ≥ (g⊕ h)(x). This verifies our claim. Consequently, we obtain that g⊕ h=f. By <cit.>, taking Legendre transform of both sides we find that g^*+h^*=f^*. Since g is quadratic, we know g^*(x)∈ for all x∈. Thus we have h^*=f^*-g^*. Since h is continuous on [0,∞) and h(x)=+∞ for x<0, we see that h is a closed function (see <ref>). Since h is closed and convex, using the Fenchel biconjugation theorem we get that h=h^**=(f^*-g^*)^*. Thus h is uniquely determined by f,g. Since every subsequence has a further subsequence that converges to the same quantity, we have thus shown h_n converges uniformly to (f^*-g^*)^*. Since both f,g are proper, lower-semicontinuous, convex functions, by <cit.> we have (f^*-g^*)^*=f⊖ g. Finally, since f=g⊕ h, the properties of f claimed in the lemma follow from <cit.>. Let G:→ [0,1] be a decreasing function with the property that lim_x→∞1/x^2log G(-x)=0, lim_x→∞1/x^2log G(x)= -∞. Fix an open set O∈. Suppose g: O→ is a continuous function. Let {X_t}_t≥ 1 be a sequence of random variables satisfying lim_t→∞1/t^2log[G(X_t-xt)]=g(x) for all x∈ O. Then for all x ∈ O we have lim_t→∞1/t^2log(X_t ≤ xt)=g(x). Fix x∈ O. Choose δ small enough so that (x-δ,x+δ)⊂ O. Note that for all y∈ we have G(y+δ t) ≤_y<0+G(δ t), G(y-δ t) ≥ G(-δ t)·_y<0. Setting y=X_t-xt in the above relations and taking expectation we get [G(X_t-xt+δ t)] ≤(X_t<xt)+G(δ t), [G(X_t-xt-δ t)] ≥ G(-δ t)·(X_t<xt). In the second inequality above, we use the left tail decay of G, so that taking log and dividing by t^2, and then taking t↑∞ we get lim sup_t→∞1/t^2log(X_t ≤ xt)≤g(x+δ). Taking δ↓ 0 and using the fact g is continuous we get lim sup_t→∞1/t^2log(X_t ≤ xt)≤g(x). Fix ρ>0. Using (<ref>) we can ensure from the first inequality in (<ref>) that (X_t<xt) ≥exp(t^2[g(x-δ)-ρ])-G(δ t). Due to the right tail decay conditions on G, the first term dominates the second term as t→∞. Thus, lim inf_t→∞1/t^2log(X_t ≤ xt)≤g(x-δ)-ρ. Taking δ↓ 0 and ρ↓ 0, and again using the fact g is continuous we get lim inf_t→∞1/t^2log(X_t ≤ xt)≤g(x). Combining this with (<ref>) we get the desired limit. Let us now finish the proof of <ref>. Fix μ∈ (0,2). Recall 𝒯_t from (<ref>). We would like to apply <ref> with h_t = 𝒯_t to deduce that lim_t→∞𝒯_t(μ) exists. Note that {𝒯_t} satisfies the three conditions of <ref>. Indeed, the first condition follows from <ref> and <ref>, whereas the second one follows from <ref>. The third one is a consequence of <ref>. Since by <ref> we have g⊕𝒯_t→ℱ pointwise and ℱ is proper, lower semicontinuous and convex by <ref>, we thus have that 𝒯_t(μ)Φ_-(μ):=sup_y∈{ℱ(y)-g(μ-y)} and Φ_- is continuous on [0,∞). Due to the properties of 𝒯_t established in <ref>, we readily have that Φ_- is weakly decreasing, non-negative and convex with Φ_-(μ)=+∞ for μ<0, Φ_-(0)=1-q, and Φ_-(μ)=0 for μ≥ 2. But by the lower-tail estimate of λ_1 in (<ref>) we have thus shown that -lim_t→+∞1/t^2log_𝖼𝖯𝗅𝖺𝗇(t(1-q))(λ_1≤μ t)=Φ_-(μ). Recall that (0,t)+χd=λ_1 from <ref>. To remove χ, we appeal to <ref>. 
For every fixed q∈ (0,1) consider the function G_q:→ [0,1] defined as G_q(y):=(χ≤ -y), where χ∼ q-Geo(q). We claim that G_q satisfies the conditions in <ref>. Clearly, G_q is decreasing. Since G_q(y)=0 for all y>0, the right tail condition in <ref> is satisfied trivially. Since lim_x→∞ G_q(-x)=1, the left tail condition also holds. Thus G_q satisfies the conditions in <ref>. The utility of G_q function is that it allows us to write ((0,t)+χ≤μ t)=[G_q((0,t)-μ t)]. Since Φ_- is continuous on [0,∞), invoking <ref> with G=G_q and g=Φ_-, we see that (0,t) satisfies a lower-tail LDP with the same rate function Φ_- and speed t^2. This completes the proof. A few of the properties of ℱ are already proven in <ref> and <ref>. The remaining properties of ℱ claimed in <ref> follow from <ref>. § FURTHER APPROACHES TO THE LOWER-TAIL In this section, we outline further possible approaches to the explicit characterization of the lower-tail rate function. §.§ Non rigorous approach to characterization of ℱ: nonlinear differential equations In <cit.>, starting from a discrete Riemann-Hilbert characterization of the Fredholm determinant (<ref>), the authors derived a differential-difference equation for the quantity Q(t,s) := ℙ_𝖼𝖯𝗅𝖺𝗇(t(1-q)) (λ_1+S ≤ s), which reads ∂_t^2 log Q(t,s) + 1/t∂_t log Q(t,s) + 4 = 4 Q(t,s+1) Q(t,s-1)/Q(t,s)^2. When s is of order t, from the large deviation principle (<ref>) we have approximation log Q_σ(t,s) = -t^2 ℱ(s/t) + o(t^2), where we recall that the function ℱ is given by the variational problem (<ref>). Plugging this approximation in (<ref>), assuming that the function ℱ is twice differentiable (which we did not prove), we find a closed equation for ℱ as -4 ℱ (x) + 3 x ℱ'(x) - x^2 ℱ”(x) + 4 = 4 e^-ℱ”(x), where x=s/t. Motivated by the fluctuation result (<ref>) we expect that at x=2, the behavior of ℱ is given by ℱ(2)=ℱ'(2)=ℱ”(2)=0, since F_GUE(s) ∼ e^-|s|^3/12 for s → -∞. A quick check shows that these conditions also determine ℱ”'(2). In fact deriving twice both sides of (<ref>) we get -x ℱ”' (x )+ℱ””(x ) (4 e^-ℱ”(x )-x ^2)-4 ℱ”'(x )^2 e^-ℱ”(x ) = 0, which evaluated at x=2 and assuming ℱ(2)=ℱ'(2)=ℱ”(2)=0 leaves us with ℱ”' (2 )(1+2 ℱ”'(2 )) = 0. This forces, once we exclude constant solutions, ℱ”'(2)=-1/2. These considerations motivate imposing the boundary conditions ℱ(2)=ℱ'(2)=ℱ”(2)=0 and ℱ”'(2)=- 1/2. In general, these boundary conditions do not guarantee uniqueness. We look for a one-parameter family of solutions to this problem of the following particular form. Let us express ℱ as ℱ(x) = ∫_2^x y ∫_2^y z log𝒢(z), in which case 𝒢(x), which needs to be positive for x>0, solves the differential equation 𝒢(4 𝒢”+x ^2 (𝒢')^2)=8 (𝒢')^2+x 𝒢^2 (x 𝒢”+𝒢') with boundary conditions 𝒢(2)=1, 𝒢'(2)=-1/2. In the following proposition, we introduce a one-parameter family of solutions 𝒢. For any c∈ (0,+∞] and x ∈ (0,2) let 𝒢=𝒢_c(x)>1 be solution of the equation log (𝒢)= 2c/√(c^2-4)arctanh(c x √((c^2-4) 𝒢)-√((c^2-4)^2 𝒢 x^2+16 (c^2-4))/(c^2-4) x √(𝒢)-√((c^2-4) c^2 𝒢 x ^2+16 c^2)) if c ≠ 2,+∞ 2-x √(𝒢) if c = 2 log(4/𝒢 x ^2) if c = +∞, where the √( ) is the principal branch of the square root. Then, 𝒢_c(x) solves the system 𝒢(4 𝒢”+x ^2 (𝒢')^2)=8 (𝒢')^2+x 𝒢^2 (x 𝒢”+𝒢') x ∈ (0,2) 𝒢(2)=1, 𝒢'(2)=-1/2, 𝒢”(2)=1/2-1/4 c^2. Using the implicit function theorem we can verify that the implicitly defined function 𝒢_c is in fact the solution of the desired differential system. 
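The proposition can also be checked symbolically for the explicit c=+∞ member of the family, for which the defining relation reduces to 𝒢_∞(x)=2/x. The following SymPy sketch (purely illustrative) verifies that 𝒢_∞ solves the differential system for 𝒢 with boundary values 𝒢(2)=1, 𝒢'(2)=-1/2, 𝒢”(2)=1/2 (the c→+∞ limit of 1/2-1/4c^2), and that the resulting ℱ(x) = ∫_2^x ∫_2^y log𝒢_∞(z) dz dy satisfies the closed equation for ℱ derived above:

import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

G = 2/x
ode_G = G*(4*sp.diff(G, x, 2) + x**2*sp.diff(G, x)**2) \
        - (8*sp.diff(G, x)**2 + x*G**2*(x*sp.diff(G, x, 2) + sp.diff(G, x)))
print(sp.simplify(ode_G))                                                    # 0
print(G.subs(x, 2), sp.diff(G, x).subs(x, 2), sp.diff(G, x, 2).subs(x, 2))   # 1  -1/2  1/2

# F(x) = int_2^x int_2^y log G_inf(z) dz dy, with log G_inf(z) = log 2 - log z
F = sp.integrate(sp.integrate(sp.log(2) - sp.log(z), (z, 2, y)), (y, 2, x))
residual = -4*F + 3*x*sp.diff(F, x) - x**2*sp.diff(F, x, 2) + 4 - 4*sp.exp(-sp.diff(F, x, 2))
print(sp.N(residual.subs(x, sp.Rational(3, 2)), 15))                         # ~0

Expanding F above reproduces, after simplification, the closed-form expression for ℱ_∞ quoted in the remark below.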
When 0<c<+∞, the function 𝒢 of (<ref>) is defined on a larger interval (x_c,0) for x_c<0 and hence it solves the differential system (<ref>) on a larger interval. Leveraging these explicit solutions for the differential system for the function 𝒢 we are able to obtain solutions of the system (<ref>) with the condition on the fourth derivative of ℱ. Fix c > 0 or c=+∞ and consider the function 𝒢_c defined in <ref>. Then, the function ℱ_c(x) := ∫_2^x y ∫_2^y z log𝒢_c(z), solves the differential system 4 ℱ (x) - 3 x ℱ'(x) + x^2 ℱ”(x) - 4 + 4 e^-ℱ”(x) =0 x ∈ (0,2), ℱ(2)=ℱ'(2)=ℱ”(2) = 0, ℱ”'(2) = -1/2, ℱ””(2)= 1/4-1/4 c^2 . We can check that the function f(x)=/2 x^2 + (1-q), which by virtue of (<ref>) coincides with the lower-tail rate function ℱ(x) for x≪ 0 is in fact a solution of the differential equation (<ref>). In the particular case where c=+∞, the function 𝒢_∞ is explicit and given by 𝒢_∞(x) = 2/x. Then, integrating twice as in (<ref>) we obtain ℱ_∞(x) = 1-2 x + 3 x^2/4+1/2 x^2 log(2/x), which coincides with the lower-tail rate function of the PNG <cit.>. In the study of the lower-tail rate function of the KPZ equation, an analogous idea to that discussed in this subsection was developed by Le Doussal in <cit.>. The probability distribution of the KPZ equation with droplet initial condition obeys a variant of the Kadomtsev–Petviashvili equation, as found in <cit.>, which when scaled as in (<ref>) gives rise to a first order non-linear differential equation whose solution matches the lower-tail rate function mathematically derived in <cit.>. We end this subsection considering the case where c=2 in the relation (<ref>). Here the function 𝒢 can be written in terms of Lambert's W function as 𝒢(x) = 4 /x^2 W(e x /2)^2, so that, integrating twice the logarithm of 𝒢 we obtain the explicit expression 1- x^2 W(e x/2)-6 x^2/4 W(e x/2)-x^2/4 W(e x/2)^2+5/2 x^2. As a function of x the above expression can be connected in C^1 manner to a parabola of the form 1-q + /2 x^2 as ℱ_2(x)= 1-q_2 +log q_2^-1/2 x^2 x < x_q_2 1- x^2 W(e x/2)-6 x^2/4 W(e x/2)-x^2/4 W(e x/2)^2+5/2 x^2 x_q_2≤ x ≤ 2 0 x > 2, where values q_2,x_q_2 can be found numerically and they are q_2 ≈ 0.00003724, x_q_2≈ -0.1867. A plot of the function ℱ_2 is given in <ref>. We conjecture this behavior to be general, i.e. that the lower-tail rate function ℱ, for any q∈(0,1), possesses three behaviors: one where it is identically 0 for x>2, one where it assumes the form prescribed by (<ref>) for x ∈ [x_q,2] and finally, for x<x_q it is equal to 1-q + /2 x^2. The values of c,x_q are determined by q and they are unique if we require that ℱ is continuously differentiable. At the moment we cannot prove this statement as we are not able to prove that the rate function Φ_- is C^2 in its domain of definition. §.§ Potential theoretic approach to the minimizer of 𝒲^(q). We propose another possible way of finding the exact form of the lower-tail rate function ℱ(x), by solving an energy minimization problem, closely related to the variational characterization (<ref>). Recall the functional 𝒲^(q)(κ,ϕ;x) defined in (<ref>). Consider first a “de-Poissonized" variant of the functional 𝒲^(q) and the associate variational problem as 𝒲^(q)(ϕ;x) := 𝒲^(q)(1,ϕ;x)= 1+ 2 I_hook (ϕ) + 𝒱^(q)(x;ϕ), 𝒰(x):= inf_ϕ {𝒲^(q)(ϕ;x)}. The function ℱ(x) can be deduced from 𝒰(x) through a further minimization problem: ℱ(x)=inf_κ>0{κ 𝒰(x/√(κ))+κlogκ +1-κ}. 
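To make the last display concrete, one can restrict the inner minimization defining 𝒰 to the single trial shape ϕ=ϕ_VKLS (for which 1+2I_hook(ϕ_VKLS)=0, so that 𝒰(x) is bounded above by 𝒱^(q)(x;ϕ_VKLS)) and then carry out the minimization over κ numerically. This yields an upper bound on ℱ(x) rather than ℱ itself, since the optimal shape differs from ϕ_VKLS in general. The Python sketch below is purely illustrative: it uses a crude grid over κ, an arbitrary value q=0.5, and the VKLS shape written in the original coordinates through its rotated parametrization; it recovers in particular the value 0 at x=2.

import numpy as np

SQ2 = np.sqrt(2.0)

def phi_vkls(y):
    # VKLS shape in the original coordinates, obtained from the rotated profile
    # Omega(u) = (2/pi)(sqrt(2-u^2) + u*arcsin(u/sqrt(2))) via a=(Omega+u)/sqrt(2), b=(Omega-u)/sqrt(2)
    u = np.linspace(-SQ2, SQ2, 4001)
    Om = (2/np.pi)*(np.sqrt(np.clip(2 - u**2, 0, None)) + u*np.arcsin(np.clip(u/SQ2, -1, 1)))
    a, b = (Om + u)/SQ2, (Om - u)/SQ2
    return np.interp(y, a, b, left=2.0, right=0.0)

def V_q(s, eta, n=6000):
    # eta * int_0^infty [phi_VKLS(y) - y - s]_+ dy
    y = np.linspace(0.0, max(2.0, -s) + 1.0, n)
    return eta*np.trapz(np.maximum(phi_vkls(y) - y - s, 0.0), y)

def F_upper(x, q):
    eta = np.log(1.0/q)
    kappas = np.append(np.linspace(1e-3, 3.0, 300), 1.0)
    vals = [1 + k*np.log(k) - k + k*V_q(x/np.sqrt(k), eta) for k in kappas]
    return min(vals)

for x in (-1.0, 0.0, 1.0, 2.0):
    print(x, round(F_upper(x, q=0.5), 4))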
A promising observation is that the functional 𝒲^(q)(ϕ;x) takes a special form, known as the logarithmic energy (associated with certain external fields). To elaborate on this, first, we use the following alternative representation due to Logan and Shepp <cit.> of the hook functional I_hook: 2 I_hook(ϕ) = log 2 - 1/2∫_ℝ∫_ℝlog| s-t | ϕ'(s) ϕ'(t) s t - 2 ∫_ℝϕ'(t) (t log|t|-t) t. Recall that the notation ϕ was defined in (<ref>). On the other hand, a simple integration by parts implies that the functional 𝒱^(q) can be expressed as 𝒱^(q)(ϕ;x) = - ∫_ℝ [t-x]_+ ϕ'(t) t. Combining (<ref>) (<ref>) with (<ref>) we recast the problem of minimizing the q-deformed hook functional W as that of minimizing the functional 𝖩_η,x : 𝒴_1 ⟶ℝ defined by 𝖩_η,x(h) = - 1/2∫_ℝ∫_ℝlog| s-t | h'(s) h'(t) s t - 2 ∫_ℝ h'(t) (t log|t|-t + η [t-x]_+ ) t, with η>0 and x∈ℝ. The functional 𝖩_η,x(h) is, up to a scaling factor 1/2, the logarithmic energy <cit.> associated to the measure dh:=h'(t)dt, with the external field given by V_ext(t;η,x):= -4(tlog|t|-t+η[t-x]_+). Finding the minimizer (known as the equilibrium measure) of certain logarithmic energy functionals is a well-studied problem in potential theory literature and has applications to random matrix theory and many other mathematical physics problems. One has the following standard necessary condition for a function h_0∈𝒴_1 to be the minimizer (see e.g. <cit.> and <cit.>): Defining p(t) := -∫_ℝ h_0'(s) log|s-t| s - 2(t log|t| - t + η[t-x]_+) +λ t, where the parameter λ∈ℝ is the Lagrange multiplier. Then for a minimizer h_0, the function p must satisfy the following Euler-Lagrange type equations: p(u) is =0, if sign(u) · h_0'(u) ∈ (-2,0) ≤ 0, if h_0'(u) + sign(u) = 1 ≥ 0, if h_0'(u) + sign(u) = -1. Taking a (weak) derivative of p we see that on {u: sign(u)h_0'(u)∈ (-2,0)}, one must have 1/π P.V.∫h_0'(y)/y-u y = -2/πlog|u| - 2 η/π1_[x,+∞)(u) - λ/π, where P.V. stands for the Cauchy principal value. We are not able to solve the Cauchy integral equation (<ref>) explicitly. The main difficulty is due to the fact that we do not know a priori the form of the support of h_0'. An ansatz of the form supp(h_0')=∪_i=1^k (a_i,b_i) leads to highly transcendental relations on the endpoints which are not straightforward to solve (see <cit.> for solutions to simpler versions of the above problem). We hope to explore this direction in a future work. We end this subsection by remarking that solving certain energy-minimization problems similar to that of (<ref>) is also the starting point for a nonlinear steepest descent analysis of the Riemann-Hilbert problem (as mentioned earlier, the relevant RHP for q-PNG was introduced in <cit.>). Indeed, such analysis usually starts with finding a so-called g-function as a conjugating factor such that the transformed RHP behaves properly at ∞. The construction of such g-functions is a potential-theoretic problem that we expect to be rather involved for the q-PNG case. Riemann-Hilbert methods have long been used as a powerful tool for studying the tail behaviors of Fredholm determinants, especially in the oscillating regimes, see e.g. <cit.>. More recently, a similar but more involved analysis has been implemented for finite-temperature models, see <cit.>. The discrete nature of the RHP associated with q-PNG seems to lead to additional technical difficulty for a suitable nonlinear steepest descent analysis. 
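Returning to the energy functional 𝖩_{η,x}, a first numerical step in this direction is simply to evaluate it for candidate profiles, which already allows one to compare trial shapes and to test a discretization against the known value 𝖩_{0,x}(h)=2I_hook(ϕ_VKLS)-log 2=-1-log 2 attained by the (rotated and centered) VKLS profile. The Python sketch below is purely illustrative: it treats the logarithmic singularity crudely and makes no attempt to solve the equilibrium problem itself.

import numpy as np

SQ2 = np.sqrt(2.0)

def h_vkls(t):
    # rotated and centered VKLS profile, supported on [-sqrt(2), sqrt(2)]
    t = np.asarray(t, float)
    out = np.zeros_like(t)
    m = np.abs(t) < SQ2
    v = t[m]
    out[m] = (2/np.pi)*(np.sqrt(2 - v**2) + v*np.arcsin(v/SQ2)) - np.abs(v)
    return out

def J(eta, x, n=1500):
    t = np.linspace(-SQ2 + 1e-9, SQ2 - 1e-9, n)                # n even, so t = 0 is avoided
    dt = t[1] - t[0]
    hp = np.gradient(h_vkls(t), dt)                             # h'(t)
    K = np.log(np.abs(t[:, None] - t[None, :]) + np.eye(n))     # log|s-t|, zero on the diagonal
    term1 = -0.5*np.sum(hp[:, None]*hp[None, :]*K)*dt*dt
    Vfield = t*np.log(np.abs(t)) - t + eta*np.maximum(t - x, 0.0)
    term2 = -2.0*np.sum(hp*Vfield)*dt
    return term1 + term2

print(J(eta=0.0, x=0.0))    # close to -1 - log(2) = -1.693...
print(J(eta=0.7, x=-0.5))   # with the external field eta*[t-x]_+ switched on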
§.§ Approach through Morales-Pak-Tassy's extension of the Hook integral We have used in our main arguments a relation between the height function of the q-PNG and the first row of a partition taken with cylindric Plancherel law; recall <ref>. This correspondence opens up the possibility of studying large deviations for the height function 𝔥 using asymptotic formulas for f^λ/ρ, the number of standard Young tableaux of skew shape λ/ρ. In <cit.>, the authors managed to compute the asymptotic limit of the Naruse hook formula (<ref>), which expresses f^λ/ρ in terms of the evaluation of a functional c(ϕ_λ / ρ), which is a deformation of the hook integral (<ref>). The expression of the functional c involves solving a variational problem that arises from a connection between lozenge tilings on a hexagonal lattice and the Naruse hook formula <cit.>, and for this reason it appears significantly more involved to analyze explicitly. Variational problems associated with the asymptotic number of standard Young tableaux have also appeared in the literature in <cit.>. In light of the explicit formula for the rate function ℱ conjectured in <ref>, it would be very interesting to analyze these various variational problems to possibly extract further properties of the lower-tail rate function Φ_-; we leave these further approaches for future works. §.§ Approach through geometric techniques from the theory of last passage percolation Yet another possible way of studying lower-tail behaviors of the q-PNG model is through its connection to a cylindrical Poissonian last passage percolation. The connection between q-PushTASEP and a model of last passage percolation in a cylindric geometry was essentially discovered in <cit.> and used by Corwin and Hegde to obtain bounds for the tail probabilities of the height function in a q-PushTASEP in the moderate deviation regime <cit.>. For a thorough description of the last passage percolation model, the reader should consult <cit.>. Following the scaling limit presented in <ref>, which transforms the q-PushTASEP into the q-PNG, we can transform the last passage percolation model with random geometric weights into a model with Poisson rates. Although it is possible that techniques developed in <cit.> could adapt well to our situation and possibly allow us to derive moderate deviation bounds for the tails of the distribution of 𝔥, we are also optimistic that sharper bounds, in the style of results developed by Ganguly and Hegde <cit.>, could be obtained. We leave these interesting directions for future works.
http://arxiv.org/abs/2307.02522v1
20230705180000
High-Energy Collision of Quarks and Hadrons in the Schwinger Model: From Tensor Networks to Circuit QED
[ "Ron Belyansky", "Seth Whitsitt", "Niklas Mueller", "Ali Fahimniya", "Elizabeth R. Bennewitz", "Zohreh Davoudi", "Alexey V. Gorshkov" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall", "hep-lat", "hep-ph", "nucl-th" ]
UMD-PP-023-02, IQuS@UW-21-050 [email protected] Joint Center for Quantum Information and Computer Science, NIST/University of Maryland, College Park, MD 20742 USA Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742 USA InQubator for Quantum Simulation (IQuS), Department of Physics, University of Washington, Seattle, WA 98195, USA Joint Center for Quantum Information and Computer Science, NIST/University of Maryland, College Park, MD 20742 USA Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742 USA Maryland Center for Fundamental Physics and Department of Physics, University of Maryland, College Park, MD 20742 USA Joint Center for Quantum Information and Computer Science, NIST/University of Maryland, College Park, MD 20742 USA Joint Center for Quantum Information and Computer Science, NIST/University of Maryland, College Park, MD 20742 USA Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742 USA With the aim of studying nonperturbative out-of-equilibrium dynamics of high-energy particle collisions on quantum simulators, we investigate the scattering dynamics of lattice quantum electrodynamics in 1+1 dimensions. Working in the bosonized formulation of the model, we propose an analog circuit-QED implementation that is native to the platform, requires minimal ingredients and approximations, and enables practical schemes for particle wave-packet preparation and evolution. Furthermore, working in the thermodynamic limit, we use uniform-matrix-product-state tensor networks to construct multi-particle wave-packet states, evolve them in time, and detect outgoing particles post collision. This facilitates the numerical simulation of scattering experiments in both confined and deconfined regimes of the model at different energies, giving rise to rich phenomenology, including inelastic production of quark and meson states, meson disintegration, and dynamical string formation and breaking. We obtain elastic and inelastic scattering cross sections, together with time-resolved momentum and position distributions of the outgoing particles. This study highlights the role of classical and quantum simulation in enhancing our understanding of scattering processes in quantum field theories in real time. High-Energy Collision of Quarks and Hadrons in the Schwinger Model: From Tensor Networks to Circuit QED Alexey V. Gorshkov August 1, 2023 ========================================================================================================== Introduction.—Scattering processes in nuclear and high-energy physics play an essential role in studies of hadronic and nuclear structure and of exotic phases of matter, and in searches for new particles and interactions. Current and future frontiers are the Large Hadron Collider, the Relativistic Heavy-Ion Collider <cit.>, the Electron-Ion Collider <cit.>, and neutrino-nucleus scattering at the Deep Underground Neutrino Experiment <cit.>. Collisions in these experiments involve hadronic initial states and complex many-particle final states. In addition, scattering proceeds in a multi-stage process and may encompass a wide range of phenomena, including the formation of exotic matter <cit.>, such as quark-gluon plasma <cit.>, thermalization <cit.>, quark and hadron fragmentation <cit.>, and quark-gluon-plasma hadronization <cit.>. Ideally, such rich phenomenology should be grounded in first-principles quantum-chromodynamics (QCD) descriptions. 
While perturbation theory and QCD factorization <cit.>, as well as the nonperturbative method of lattice QCD <cit.>, have brought about impressive advances, a full understanding of scattering processes in QCD at all stages and energies is still lacking. First-principles simulations of high-energy particle scattering are considered a prime application for quantum computers and simulators <cit.>. A central challenge is that realistic scattering experiments involve a vast range of spatial and temporal scales, placing their simulation beyond the capabilities of current digital quantum computers. Analog quantum simulators may enable simulating larger Hilbert spaces and longer times, but concrete proposals are lacking for analog simulation of scattering processes in quantum field theories. At the same time, classical tensor-network methods have been shown to successfully capture ground-state <cit.>, and to some degree dynamical <cit.>, phenomena in gapped theories, including scattering processes <cit.>, particularly in 1+1 dimensions, but their reach remains limited in simulating general scattering problems in quantum field theories. This manuscript advances both analog quantum simulation and tensor-network-based classical simulation for a prototypical model of QCD, the lattice Schwinger model, i.e., lattice quantum electrodynamics (QED) in 1+1 dimensions. Previous tensor-network <cit.> and quantum-simulation <cit.> studies of the model focused on formulations involving fermion (or qubit) degrees of freedom (with or without gauge fields). Motivated to address, more generally, theories with bosonic content, here we instead consider the bosonic dual of the theory, a particular type of a massive Sine-Gordon model. Our first objective is to propose an analog circuit-QED implementation of the bosonized lattice Schwinger model. Recently, the bosonic dual was shown to be approximately realizable by circular Rydberg states <cit.>. In contrast, we will show that circuit QED's basic components, its native bosonic degrees of freedom, and the available ultrastrong coupling <cit.> allow the model to be implemented in a simple circuit with minimal ingredients and approximations, making it particularly suitable for near-term quantum simulation. Our second objective is a numerical exploration of high-energy real-time scattering phenomenology in the model. We work in the nonperturbative regime, near the confinement-deconfinement critical point and in the thermodynamic limit, using uniform matrix product states (uMPS) <cit.>, which in turn allows for the construction <cit.> and collision of numerically-exact quasiparticle wave packets in the interacting theory at various energies, resulting in nontrivial inelastic effects. In contrast, earlier works were limited to elastic scattering at either weak (nearly free fermions) <cit.> or strong (nearly free bosons) <cit.> coupling regimes. We focus on a detailed spatial, temporal, and momentum-resolved diagnostic of elastic and inelastic processes of quark and meson states, involving phenomena such as meson disintegration, dynamical string formation and breaking, and the creation of quark and (excited) meson states. We also investigate the role of entanglement in high-energy scattering <cit.>. 
Model and circuit-QED implementation.—The massive Schwinger model has the Lagrangian density ℒ = ψ̅(iγ^μ∂_μ-eγ^uA_μ-m ) ψ -1/4F_μνF^μν, where ψ(x,t) is a 2-component Dirac spinor, γ^0 = σ^z, γ^1 = i σ^y with σ^z,σ^y being the Pauli matrices, m is the mass, e is the electric charge, and A_μ(x,t) and F_μν(x,t) are the gauge field and the field-strength tensor, respectively. <Ref> is dual to a bosonic scalar field theory with the Hamiltonian <cit.> H = ∫ dx [Π^2/2 +(∂_xϕ)^2/2+M^2ϕ^2/2-ucos(βϕ-θ)], where ϕ(x) and Π(x) are the scalar field and conjugate momentum, respectively, M=e/√(π), β=√(4π), and u=e^γ/2πΛ m, where γ is Euler's constant and Λ is a UV scale (we assume ħ=c=1 throughout, where c is the speed of light). Finally, θ∈ (-π,π], with its origin explained in Ref. <cit.> and the Supplemental Material (SM) <cit.>. We work with a lattice discretization of <ref> given by H = χ∑_x [π_x^2/2+(ϕ_x-ϕ_x-1)^2/2 +μ^2ϕ_x^2/2-λcos(βϕ_x-θ)], where x labels lattice sites, ϕ_xπ_y=iδ_xy, χ=1/a, μ^2=M^2a^2, λ = ua^2, and a is the lattice spacing. We set a=1, with the continuum limit corresponding to μ,λ→ 0. Quantities are assumed in lattice units throughout. Remarkably, <ref> can be exactly realized in a simple superconducting circuit, shown in <ref>. The circuit can be regarded as a chain of inductively coupled fluxoniums <cit.>. It consists of nodes i, each corresponding to a lattice site with a local bosonic degree of freedom described by flux ϕ_i and charge π_i, composed of a parallel arrangement of a capacitor, an inductor, and a Josephson junction with respective energies E_C,E_L, and E_J <cit.>. Further, nodes are coupled by inductors with energy E_L'. The circuit parameters are related to those of <ref> via χ = 8E_C/β^2, E_L'β^4/8E_C=1, μ^2=E_Lβ^4/8E_C, λ = E_Jβ^2/8E_C, and θ=Φ_ext-π, where Φ_ext is a tunable external flux threading each loop, and β≠ 0 can be chosen arbitrarily (see the SM <cit.> for the full derivation). In fact, when β≠√(4π), the circuit describes a more general model known as the massive Thirring-Schwinger model <cit.>. In the SM <cit.>, we present a method for preparing initial wave packets of bosonic particles using two ancillary qubits, hence providing a complete protocol for preparation and evolution of mesonic wave packets for a scattering experiment. Measurements of the local field ϕ_x <cit.> or the output field at the edges <cit.> can be performed using standard techniques. To gain insight into the anticipated phenomenology, we proceed with a numerical study of the collision dynamics in the lattice Schwinger model. While quantitative predictions for the continuum theory require an extrapolation procedure <cit.>, here only fixed, but sufficiently small, values of μ and λ are considered. The model has two dimensionless parameters, the ratio e/m, corresponding to μ/λ in <ref>, and the angle θ representing a constant background electric field E_θ = e/2πθ. Gauss's law, ∂_xE=eψ^†ψ, ties the total electric field E_T=E_θ+E to the dynamical charges, and equals E_T=e/√(π)ϕ in the bosonic dual <cit.>. Two regimes will be studied near the ℤ_2 critical point, shown in <ref> as (b) and (c). Point (b) is in the deconfined phase [red line at θ=π in <ref>(a) terminating at the Ising critical point], where the ground state is two-fold degenerate [<ref>(b,i)]. Here, fundamental excitations are “half-asymptotic" <cit.> fermions (“quarks"), appearing as topological kinks in the bosonic dual [see <ref>(b,ii)]. 
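As an aside on the implementation, the parameter dictionary quoted above between the circuit energies (E_C, E_L, E_J, E_L') and external flux Φ_ext on the one hand, and the lattice couplings (χ, μ^2, λ, θ) on the other, is straightforward to invert when designing a target simulation point. A minimal Python sketch (illustrative only; the numerical values in the example are hypothetical and not taken from this work):

import math

def circuit_from_model(chi, mu2, lam, theta, beta):
    # Invert chi = 8 E_C / beta^2, E_L' beta^4/(8 E_C) = 1, mu^2 = E_L beta^4/(8 E_C),
    # lam = E_J beta^2/(8 E_C), theta = Phi_ext - pi   (lattice units, a = 1)
    E_C  = chi*beta**2/8.0        # on-site charging energy
    E_Lp = 8.0*E_C/beta**4        # coupling inductor, fixed by the constraint
    E_L  = mu2*8.0*E_C/beta**4    # on-site inductor, sets the mass term mu^2
    E_J  = lam*8.0*E_C/beta**2    # Josephson energy, sets the cosine term lambda
    return dict(E_C=E_C, E_L=E_L, E_J=E_J, E_Lp=E_Lp, Phi_ext=theta + math.pi)

# Hypothetical example: the Schwinger point beta = sqrt(4*pi), slightly away from theta = pi
print(circuit_from_model(chi=1.0, mu2=0.05, lam=0.02, theta=math.pi - 0.04, beta=math.sqrt(4*math.pi)))

Choosing β ≠ √(4π) in the same mapping instead targets the massive Thirring-Schwinger model mentioned above.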
Point (c) in <ref>(a) is in the confined phase, with a unique ground state [<ref>(c,i)] and quark-antiquark bound-state ("meson") excitations.

Quark-antiquark scattering.—We first consider quark-antiquark scattering in the deconfined phase [<ref>(b)]. Constructing a uMPS representation of the two ground states <cit.>, we use the uMPS quasiparticle ansatz <cit.> to obtain single-particle energy-momentum eigenstates with dispersion ℰ(p) and momenta p∈[-π,π) (see the SM <cit.>). From this, we construct two Gaussian wave packets, localized in momentum and position space, centered at opposite momenta ±p_0. The initial state consists of a finite nonuniform region of 150–300 sites containing the two wave packets, and is surrounded (on the left and the right) by the uniform vacuum [we choose the vacuum with positive E_T, i.e., the right minimum of <ref>(b,i)]. We then time-evolve this state under the Hamiltonian in <ref>, while dynamically expanding the nonuniform region <cit.> up to 600–1300 sites (see the SM <cit.> for a more detailed description). By working near the critical point, where the quark mass m_q≡ℰ(p=0) (i.e., the gap) is small, one can consider momenta up to p_0≲0.8. These are sufficiently small to keep the physics in the long-wavelength regime of the lattice model, where the dispersion is approximately relativistic, ℰ(p)≈(p^2+m_q^2)^1/2, but highly relativistic center-of-mass (CM) energies ℰ_CM≡2ℰ(p_0)≲30m_q are achieved. <Ref>(a) shows the space-time distribution of the electric field for collisions at three representative energies, ℰ_CM/m_q = 11.4, 23.0, and 28.8. Initially, the quark and antiquark are separated, resembling <ref>(b,ii), with the electric field between the charges equal in magnitude but opposite in sign to the field outside [the two regions correspond to the two degenerate ground states in <ref>(b,i)]. Under time evolution, the two charges propagate ballistically, shrinking the negative-field region until they collide. During the collision, the particles bounce off each other and reverse their propagation direction elastically, which is the sole process at lower energies. Specifically, as can be seen in <ref>(a), at the lowest energy, ℰ_CM/m_q = 11.4, the post-collision value of E_T between the charges is practically equal to the pre-collision value. For the higher-energy collisions, ℰ_CM/m_q = 23.0 and 28.8, an increase of the post-collision electric field is observed, signalling additional charge production. While our numerical approach does not rely on a strong- or weak-coupling expansion, the relevant scattering channels can be understood from weak-coupling arguments as follows. In the SM <cit.>, we derive, in the nonrelativistic limit, an effective potential between opposite charges at the lowest order in e/m starting from <ref>, which reads (in the center-of-mass frame) V_eff(x) = (e^2/2)(1 - θ/π)x + [e^2/(4m^2)]δ(x). Here, x is the distance between the charges. For θ≠π, one recovers linear confinement [<ref>(c,ii)] <cit.>, while at θ=π, the charges experience short-range repulsion due to the delta function in <ref> [<ref>(b,ii)]. This implies the absence of stable bound states (mesons) in the deconfined phase, which is confirmed numerically in the SM <cit.>. All possible scattering channels are, therefore, (even-numbered) multi-quark states. The lowest-order channel after the elastic one (qq̅→ qq̅) is four-quark production (qq̅→ qq̅qq̅), exhibiting quark fragmentation.
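To put the collision energies quoted above in context, the sketch below evaluates ℰ_CM = 2ℰ(p_0) for Gaussian wave packets centered at ±p_0, using the approximately relativistic dispersion ℰ(p) ≈ (p^2+m_q^2)^1/2. The quark mass and wave-packet width here are assumed, illustrative values, chosen only so that p_0 = 0.8 lands near the quoted ℰ_CM ≈ 30 m_q; they are not the parameters of our simulations.

import numpy as np

m_q = 0.054      # assumed quark mass (gap) in lattice units, for illustration only
sigma_p = 0.05   # assumed momentum-space width of the Gaussian wave packets

def dispersion(p, m=m_q):
    # long-wavelength, approximately relativistic dispersion quoted in the text
    return np.sqrt(p**2 + m**2)

p = np.linspace(-np.pi, np.pi, 4001)    # momenta in the first Brillouin zone

for p0 in (0.2, 0.4, 0.6, 0.8):
    weight = np.exp(-(p - p0)**2 / (2 * sigma_p**2))   # Gaussian momentum profile
    weight /= weight.sum()
    E_single = np.sum(weight * dispersion(p))          # mean energy of one wave packet
    print(f"p0 = {p0:.1f}:  E_CM / m_q = {2 * E_single / m_q:5.1f}")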
In the latter, four-quark case, the two inner particles screen the electric field produced by the outer two, consistent with the two rightmost panels in <ref>(a). Elastic and inelastic processes are also distinguished by the production of von Neumann entanglement entropy [S_vN(x,t) = -Tr(ρ_>x(t) lnρ_>x(t)), with ρ_>x(t) being the reduced density matrix for sites y > x] across the collision point (x=0), shown in <ref>(b) as a function of time. <Ref>(c) also shows the asymptotic (t→∞) entanglement generated as a function of the collision energy. The entanglement entropy is maximal during the collision but quickly approaches a constant afterwards. At lower energies, it nearly returns to its pre-collision (vacuum) value. A small increase is observed because different momentum components of the wave packets acquire slightly different elastic scattering phase shifts, making the two scattered wave packets slightly entangled <cit.>. At higher energies, however, significant net entanglement is generated, indicating inelastic particle production <cit.>. Finally, we compute elements of the momentum-resolved scattering S-matrix by projecting the post-collision state onto a basis of asymptotic two-particle states (see the SM <cit.> for details). This basis is constructed from the single-particle wavefunctions, requiring the particles to be widely separated to ensure orthogonality and avoid interaction effects. For 2→2 scattering, this is guaranteed sufficiently far from the collision point, but not for higher-order scattering. From this, we obtain the elastic scattering probability P(qq̅), displayed in <ref>(c), as a function of the collision energy. The elastic scattering probability is near unity at lower energies and decreases monotonically, falling below 0.5 around ℰ_CM/m_q≳28. Interestingly, the energy required for significant inelastic scattering is many times the threshold energy (ℰ_CM=4m_q). While we did not obtain the precise contribution of the four-quark (or higher-quark-number) states [projecting the state onto basis states of four widely separated quarks resulted in a negligible contribution; this does not mean that four-particle states are absent from <ref>, but rather that they are not sufficiently spatially separated to be recorded as single-particle states], the decrease of P(qq̅) confirms the presence of significant inelastic scattering, consistent with the increase in entanglement entropy in <ref>(b) and the screening of E_T in <ref>(a).

Meson-meson scattering.—We next consider scattering in the confined phase [<ref>(c)] at θ = π-ε. We choose ε≪1, which gives rise to weak confinement of quarks but keeps us close to the critical point (all other parameters are unchanged). In contrast to the deconfined regime, the interplay of high energy and weak confinement yields rich behavior following the collision. There are multiple stable meson excitations, labeled π_j (j=1,2,...), with increasing masses m_π_j. Here, we consider π_1π_1 collisions, with meson wave packets prepared similarly to before, centered at p_0=±0.6, with ℰ_CM/m_π_1=6.84 (5.95) for ε=0.04 (0.07). The electric-field evolution for the two collisions is displayed in <ref>(a,i). Before the collision, the background electric field is only locally disturbed by the charge-neutral mesons [<ref>(c,ii)], unlike in the deconfined case, where the presence of free quarks can lead to electric-field screening at arbitrarily long distances. After the collision, the mesons partially fragment into a quark-antiquark pair.
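As an aside, the entanglement diagnostic defined above can be illustrated on any pure state: S_vN follows from the Schmidt (singular-value) decomposition across the cut. The sketch below uses a small generic state vector of eight two-level sites, chosen purely for illustration; in the uMPS simulations the analogous quantity follows from the Schmidt spectrum of the bond at the cut.

import numpy as np

def von_neumann_entropy(psi, n_left, d=2):
    """S_vN = -Tr(rho ln rho) for the left block of n_left sites of a pure state psi."""
    mat = psi.reshape(d**n_left, -1)            # split the state across the cut
    schmidt = np.linalg.svd(mat, compute_uv=False)
    prob = schmidt**2
    prob = prob[prob > 1e-14]                   # discard numerical zeros
    return float(-np.sum(prob * np.log(prob)))

rng = np.random.default_rng(1)
n_sites = 8
psi = rng.normal(size=2**n_sites) + 1j * rng.normal(size=2**n_sites)
psi /= np.linalg.norm(psi)                      # a random, highly entangled state

product = np.zeros(2**n_sites, dtype=complex)
product[0] = 1.0                                # a product state, for which S_vN = 0

print("entangled state:", von_neumann_entropy(psi, n_left=4))
print("product state  :", von_neumann_entropy(product, n_left=4))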
The quarks are joined by an electric-field string which screens the background electric field (light-blue regions) inside the collision cone. As the quarks travel outward, their kinetic energy gets converted into the potential energy of the string. Eventually, they turn and propagate back in the opposite direction [see also <ref>(c)] causing a second collision. Weaker confinement ε=0.04 allows the quarks to propagate farther. Next, we project the time-evolved state onto two-particle components, focusing on the lightest two mesons π_1,π_2, and the quark-antiquark pair qq̅. While the latter are not true (i.e., asymptotic) quasiparticles, at weak confinement ε≪ 1, (anti)quarks can be approximately described by the modified quasiparticle ansatz of Ref. <cit.>. This requires a uMPS representation of the electric-flux string, which we approximate by its lowest energy state, a so-called “false-vacuum" state <cit.>, corresponding to the second (local) minimum in <ref>(c,i). <Ref>(d) shows the probabilities of the π_1π_1 (blue), π_2π_2 (orange), π_1π_2 (green), and π_2π_1 (pink) combinations (where in state μν, the particle μ/ν is on the left/right), and of the quark-antiquark state (red). One can observe significant flavor-conserving elastic scattering, π_1π_1→π_1π_1, a smaller probability of exciting one of the outgoing mesons, π_2π_1 and π_1π_2 (this smaller probability increases with stronger confinement ε=0.07), and a substantial qq̅ component. Interestingly, for ε=0.07, the qq̅ component is decreasing in time, indicating string breaking <cit.>, which is also visible in the gradual increase of the bipartite entanglement entropy in <ref>(b,i) [see also <ref>(b,ii)], and in the gradual reduction of the electric-field screening [<ref>(a,ii)]. At a late time t=700, asymptotic two-particle states account for about 90% (76%) of the state at ε=0.04 (0.07) [We verified that the missing wavefunction weight is not accounted for by three or four widely-separated particle basis states.]. The projection onto the asymptotic two-particle basis also provides the full momentum, and consequently position, distributions of the particles. <Ref>(c) shows the mean and standard deviation of the positions and momenta of the quarks, and the mean positions of the mesons, computed from fits of these distributions to a Gaussian form. The mean momenta of the quarks are approximately p(t)∝± t, in agreement with the expectation from the linear potential of <ref>. Their extracted positions in <ref>(c) are consistent with the boundaries of the screened-field region in <ref>(a,i) and with the localized increase in the entanglement entropy in <ref>(b,i). From the mean position of the mesons, <ref>(c), one can see that the heavier meson π_2 has a slightly lower average velocity compared to π_1, as expected. Discussion and outlook.—First-principles numerical explorations and quantum simulations of dynamics in strongly interacting quantum field theories are starting to shed light on the rich phenomenology of particle collisions in real time. As a step toward this goal, using ab initio numerical uMPS computations and working with a bosonized formulation of the Schwinger model, we analyzed the real-time dynamics of high-energy particle scattering in the nonperturbative regime of QED in 1+1 dimensions. We also proposed an analog circuit-QED implementation of the bosonized Schwinger model. 
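As a rough cross-check of the quark kinematics described above, the following classical caricature integrates the relativistic motion of a single quark against a constant string tension, reproducing the linear-in-time decrease of the momentum, the turning point at which the kinetic energy has been converted into string energy, and the return toward a second collision. For θ = π - ε, the linear part of the effective potential quoted earlier gives a tension (e^2/2)(1-θ/π) = e^2ε/(2π); the mass and tension used below are assumed, illustrative values, and this point-particle picture is only a caricature of the full uMPS dynamics.

import numpy as np

m_q = 0.05        # assumed quark mass (lattice units), illustrative only
sigma_str = 2e-4  # assumed string tension ~ e^2 * eps / (2*pi), illustrative only
p0 = 0.6          # initial quark momentum, matching the wave-packet center used above
dt = 0.5

x, p, t = 0.0, p0, 0.0
turn_time = None
while x >= 0.0 and t < 1e5:
    energy = np.sqrt(p**2 + m_q**2)
    x += (p / energy) * dt        # relativistic velocity v = p / E
    p -= sigma_str * dt           # constant confining force from the string
    t += dt
    if turn_time is None and p <= 0.0:
        turn_time = t             # p(t) = p0 - sigma_str * t decreases linearly

print("turning time      :", round(turn_time), "(expected p0/sigma =", round(p0 / sigma_str), ")")
print("return to x = 0 at:", round(t), "(second collision with the partner quark)")

The linear momentum evolution and the re-collision in this caricature mirror the behavior extracted from the full simulation data above.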
The proposed circuit-QED implementation requires minimal ingredients and no approximations (besides a lattice discretization), in contrast to previous circuit-QED proposals based on a quantum-link model <cit.>. We studied both the confined and deconfined regimes of the model, finding a multitude of phenomena, including inelastic particle production, meson disintegration, and dynamical string formation and breaking. In addition to the local electric-field and entanglement observables, the single-particle excitations allowed us to obtain complete time-resolved momentum and position distributions of the outgoing 2→2 scattered particles. To account for higher-order scattering beyond this two-particle characterization, it appears necessary to include states where two particles can be close, which could potentially be accomplished using the two-particle uMPS ansatz from Ref. <cit.>. This might also shed light on the nontrivial transient dynamics in <ref>(d). It would also be interesting to explore the energy dependence of string-breaking dynamics <cit.>, as well as the possibility of the formation of excited string states and their characterization beyond the false-vacuum approximation. Ultimately, tensor-network methods are limited by entanglement growth, motivating quantum simulations of high-energy collisions using the proposed circuit-QED implementation. The proposed implementation can also be used to study quench dynamics. For example, the Schwinger mechanism or dynamical topological phase transitions can be studied in quenches of the θ parameter <cit.>, which can be accomplished using time-dependent flux control <cit.>. Finally, our circuit-QED implementation applies to other bosonic theories <cit.>, including the ϕ^4 theory (achieved in the β→0 limit) in 1+1 or 2+1 dimensions and generalizations of the bosonized Schwinger model to multi-flavor fermions <cit.> and to Thirring interactions <cit.>. In the latter case, sufficiently strong Thirring interactions give rise to attractive short-range interactions between quarks in the deconfined phase, as shown in the SM <cit.>, leading to stable meson particles and hence qualitatively different scattering dynamics.

Acknowledgments.—We acknowledge valuable discussions with A. Milsted and Z. Minev. The uMPS simulations were performed with the help of the MPSKit.jl Julia package (<https://github.com/maartenvd/MPSKit.jl>). We thank M. Van Damme for help with the package. The authors acknowledge the University of Maryland's supercomputing resources (<http://hpcc.umd.edu>) made available for conducting the research reported in this paper. R.B., S.W., A.F., and A.V.G. were supported in part by the National Science Foundation (NSF) Quantum Leap Challenge Institute (award no. OMA-2120757), Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research (ASCR), Accelerated Research in Quantum Computing program (award no. DE-SC0020312), ARO MURI, the DOE ASCR Quantum Testbed Pathfinder program (award no. DE-SC0019040), NSF Physics Frontier Center Quantum Computing program, AFOSR, AFOSR MURI, and DARPA SAVaNT ADVENT. Support is also acknowledged from the DOE, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. N.M. acknowledges funding by the U.S.
Department of Energy, Office of Science, Office of Nuclear Physics, InQubator for Quantum Simulation (IQuS) (<https://iqus.uw.edu>) under Award Number DOE (NP) Award DE-SC0020970 via the program on Quantum Horizons: QIS Research and Innovation for Nuclear Science. Z.D. and N.M. acknowledge funding by the DOE, Office of Science, Office of Nuclear Physics via the program on Quantum Horizons: QIS Research and Innovation for Nuclear Science (award no. DE-SC0021143). Z.D. further acknowledges support by the DOE, Office of Science, Early Career Award (award no. DE-SC0020271). E.R.B. acknowledges support from the DOE, Office of Science, Office of ASCR, Computational Science Graduate Fellowship (award no. DE-SC0023112).

[Florkowski(2010)]florkowski2010phenomenology author author W. Florkowski, https://doi.org/10.1142/7396 title Phenomenology of Ultra-Relativistic Heavy-Ion Collisions (publisher WORLD SCIENTIFIC, year 2010)NoStop [Lovato et al.()Lovato, Dore, Pisarski, Schenke, Chatziioannou, Read, Landry, Danielewicz, Lee, Pratt, Rennecke, Elfner, Dexheimer, Kumar, Strickland et al.]lovato2022long author author A. Lovato, author T. Dore, author R. D. Pisarski, author B. Schenke, author K. Chatziioannou, author J. S. Read, author P. Landry, author P. Danielewicz, author D. Lee, author S. Pratt, author F. Rennecke, author H. Elfner, author V. Dexheimer, author R. Kumar, author M. Strickland, et al., title title Long Range Plan: Dense matter theory for heavy-ion collisions and neutron stars, @noop https://arxiv.org/abs/2211.02224 arXiv:2211.02224 NoStop [Accardi et al.(2016)Accardi, Albacete, Anselmino, Armesto, Aschenauer, Bacchetta, Boer, Brooks, Burton, Chang, Deng, Deshpande, Diehl, Dumitru, Dupré et al.]accardiElectronIonColliderNext2016 author author A. Accardi, author J. L. Albacete, author M. Anselmino, author N. Armesto, author E. C. Aschenauer, author A. Bacchetta, author D. Boer, author W. K. Brooks, author T. Burton, author N. B. Chang, author W. T. Deng, author A. Deshpande, author M. Diehl, author A. Dumitru, author R. Dupré, et al., title title Electron-Ion Collider: The next QCD frontier: Understanding the glue that binds us all, https://doi.org/10.1140/epja/i2016-16268-9 journal journal Eur. Phys. J. A volume 52, pages 268 (year 2016)NoStop [Achenbach et al.()Achenbach, Adhikari, Afanasev, Afzal, Aidala, Al-bataineh, Almaalol, Amaryan, Androić, Armstrong, Arratia, Arrington, Asaturyan, Aschenauer, Atac et al.]Achenbach:2023pba author author P. Achenbach, author D. Adhikari, author A. Afanasev, author F. Afzal, author C. A. Aidala, author A. Al-bataineh, author D. K. Almaalol, author M. Amaryan, author D. Androić, author W. R. Armstrong, author M. Arratia, author J. Arrington, author A. Asaturyan, author E. C. Aschenauer, author H. Atac, et al., title title The Present and Future of QCD, @noop https://arxiv.org/abs/2303.02579 arXiv:2303.02579 NoStop [Gallagher et al.(2011)Gallagher, Garvey, and Zeller]gallagher2011neutrino author author H. Gallagher, author G. Garvey, and author GP. Zeller, title title Neutrino-nucleus interactions, https://doi.org/10.1146/annurev-nucl-102010-130255 journal journal Annu. Rev. Nucl. Part. Sci.
volume 61, pages 355 (year 2011)NoStop [Alvarez-Ruso et al.(2018)Alvarez-Ruso, Athar, Barbaro, Cherdack, Christy, Coloma, Donnelly, Dytman, de Gouvêa, Hill, Huber, Jachowicz, Katori, Kronfeld, Mahn et al.]alvarez2018nustec author author L. Alvarez-Ruso, author M. S. Athar, author M. Barbaro, author D. Cherdack, author M. Christy, author P. Coloma, author T. Donnelly, author S. Dytman, author A. de Gouvêa, author R. Hill, author P. Huber, author N. Jachowicz, author T. Katori, author A. Kronfeld, author K. Mahn, et al., title title NuSTEC White Paper: Status and challenges of neutrino–nucleus scattering, https://doi.org/10.1016/j.ppnp.2018.01.006 journal journal Prog. Part. Nucl. Phys. volume 100, pages 1 (year 2018)NoStop [Kronfeld et al.(2019)Kronfeld, Richards, Detmold, Gupta, Lin, Liu, Meyer, Sufian, and Syritsyn]kronfeld2019lattice author author A. S. Kronfeld, author D. G. Richards, author W. Detmold, author R. Gupta, author H.-W. Lin, author K.-F. Liu, author A. S. Meyer, author R. Sufian, and author S. Syritsyn, title title Lattice QCD and neutrino-nucleus scattering, https://doi.org/10.1140/epja/i2019-12916-x journal journal Eur. Phys. J. A volume 55, pages 1 (year 2019)NoStop [Ruso et al.()Ruso, Ankowski, Bacca, Balantekin, Carlson, Gardiner, Gonzalez-Jimenez, Gupta, Hobbs, Hoferichter, Isaacson, Jachowicz, Jay, Katori, Kling et al.]Ruso:2022qes author author L. A. Ruso, author A. M. Ankowski, author S. Bacca, author A. B. Balantekin, author J. Carlson, author S. Gardiner, author R. Gonzalez-Jimenez, author R. Gupta, author T. J. Hobbs, author M. Hoferichter, author J. Isaacson, author N. Jachowicz, author W. I. Jay, author T. Katori, author F. Kling, et al., title title Theoretical tools for neutrino scattering: Interplay between lattice QCD, EFTs, nuclear physics, phenomenology, and neutrino event generators, @noop https://arxiv.org/abs/2203.09030 arXiv:2203.09030 NoStop [Sorensen et al.()Sorensen, Agarwal, Brown, Chajecki, Danielewicz, Drischler, Gandolfi, Holt, Kaminski, Ko, Kumar, Li, Lynch, McIntosh, Newton et al.]sorensen2023dense author author A. Sorensen, author K. Agarwal, author K. W. Brown, author Z. Chajecki, author P. Danielewicz, author C. Drischler, author S. Gandolfi, author J. W. Holt, author M. Kaminski, author C.-M. Ko, author R. Kumar, author B.-A. Li, author W. G. Lynch, author A. B. McIntosh, author W. G. Newton, et al., title title Dense Nuclear Matter Equation of State from Heavy-Ion Collisions, @noop https://arxiv.org/abs/2301.13253 arXiv:2301.13253 NoStop [Bass et al.(1999)Bass, Gyulassy, Stoecker, and Greiner]bass1999signatures author author S. A. Bass, author M. Gyulassy, author H. Stoecker, and author W. Greiner, title title Signatures of quark-gluon plasma formation in high energy heavy-ion collisions: A critical review, https://doi.org/10.1088/0954-3899/25/3/013 journal journal J. Phys. G Nucl. Part. Phys. volume 25, pages R1 (year 1999)NoStop [Shuryak(1980)]Shuryak:1980tp author author E. V. Shuryak, title title Quantum chromodynamics and the theory of superdense matter, https://doi.org/10.1016/0370-1573(80)90105-2 journal journal Phys. Rept. volume 61, pages 71 (year 1980)NoStop [Baier et al.(2001)Baier, Mueller, Schiff, and Son]baier2001bottom author author R. Baier, author A. H. Mueller, author D. Schiff, and author D. T. Son, title title “Bottom-up” thermalization in heavy ion collisions, https://doi.org/10.1016/S0370-2693(01)00191-5 journal journal Phys. Lett. 
B volume 502, pages 51 (year 2001)NoStop [Berges et al.(2021)Berges, Heller, Mazeliauskas, and Venugopalan]berges2021qcd author author J. Berges, author M. P. Heller, author A. Mazeliauskas, and author R. Venugopalan, title title QCD thermalization: Ab initio approaches and interdisciplinary connections, https://doi.org/10.1103/RevModPhys.93.035003 journal journal Rev. Mod. Phys. volume 93, pages 035003 (year 2021)NoStop [Andersson et al.(1983)Andersson, Gustafson, Ingelman, and Sjostrand]Andersson:1983ia author author B. Andersson, author G. Gustafson, author G. Ingelman, and author T. Sjostrand, title title Parton fragmentation and string dynamics, https://doi.org/10.1016/0370-1573(83)90080-7 journal journal Phys. Rept. volume 97, pages 31 (year 1983)NoStop [Webber(1984)]Webber:1983if author author B. R. Webber, title title A QCD model for jet fragmentation including soft gluon interference, https://doi.org/10.1016/0550-3213(84)90333-X journal journal Nucl. Phys. B volume 238, pages 492 (year 1984)NoStop [Bass and Dumitru(2000)]bass2000dynamics author author SA. Bass and author A. Dumitru, title title Dynamics of hot bulk QCD matter: From the quark-gluon plasma to hadronic freeze-out, https://doi.org/10.1103/PhysRevC.61.064909 journal journal Phys. Rev. C volume 61, pages 064909 (year 2000)NoStop [Andronic et al.(2018)Andronic, Braun-Munzinger, Redlich, and Stachel]andronic2018decoding author author A. Andronic, author P. Braun-Munzinger, author K. Redlich, and author J. Stachel, title title Decoding the phase structure of QCD via particle production at high energy, https://doi.org/10.1038/s41586-018-0491-6 journal journal Nature volume 561, pages 321 (year 2018)NoStop [Bjorken(1969)]bjorken1969asymptotic author author J. D. Bjorken, title title Asymptotic sum rules at infinite momentum, https://doi.org/10.1103/PhysRev.179.1547 journal journal Phys. Rev. volume 179, pages 1547 (year 1969)NoStop [Gross and Wilczek(1973)]gross1973ultraviolet author author D. J. Gross and author F. Wilczek, title title Ultraviolet behavior of non-abelian gauge theories, https://doi.org/10.1103/PhysRevLett.30.1343 journal journal Phys. Rev. Lett. volume 30, pages 1343 (year 1973)NoStop [Collins et al.(1989)Collins, Soper, and Sterman]collins1989factorization author author J. C. Collins, author D. E. Soper, and author G. Sterman, title title Factorization of hard processes in QCD, in @noop booktitle Perturbative QCD (publisher World Scientific, year 1989) pp. pages 1–91NoStop [Blümlein(2013)]blumlein2013theory author author J. Blümlein, title title The theory of deeply inelastic scattering, https://doi.org/10.1016/j.ppnp.2012.09.006 journal journal Prog. Part. Nucl. Phys. volume 69, pages 28 (year 2013)NoStop [Kronfeld et al.()Kronfeld, Bhattacharya, Blum, Christ, DeTar, Detmold, Edwards, Hasenfratz, Lin, Mukherjee, Orginos, Brower, Cirigliano, Davoudi, Jóo et al.]kronfeld2022lattice author author A. S. Kronfeld, author T. Bhattacharya, author T. Blum, author N. H. Christ, author C. DeTar, author W. Detmold, author R. Edwards, author A. Hasenfratz, author H.-W. Lin, author S. Mukherjee, author K. Orginos, author R. Brower, author V. Cirigliano, author Z. Davoudi, author B. Jóo, et al., title title Lattice QCD and Particle Physics, @noop https://arxiv.org/abs/2207.07641 arXiv:2207.07641 NoStop [Davoudi et al.()Davoudi, Neil, Bauer, Bhattacharya, Blum, Boyle, Brower, Catterall, Christ, Cirigliano, Colangelo, DeTar, Detmold, Edwards, El-Khadra et al.]davoudi2022report author author Z. Davoudi, author E. T. 
Neil, author C. W. Bauer, author T. Bhattacharya, author T. Blum, author P. Boyle, author R. C. Brower, author S. Catterall, author N. H. Christ, author V. Cirigliano, author G. Colangelo, author C. DeTar, author W. Detmold, author R. G. Edwards, author A. X. El-Khadra, et al., title title Report of the Snowmass 2021 Topical Group on Lattice Gauge Theory, @noop https://arxiv.org/abs/2209.10758 arXiv:2209.10758 NoStop [Beane et al.(2011)Beane, Detmold, Orginos, and Savage]beane2011nuclear author author SR. Beane, author W. Detmold, author K. Orginos, and author MJ. Savage, title title Nuclear physics from lattice QCD, https://doi.org/10.1016/j.ppnp.2010.08.002 journal journal Prog. Part. Nucl. Phys. volume 66, pages 1 (year 2011)NoStop [Davoudi et al.(2021a)Davoudi, Detmold, Shanahan, Orginos, Parreno, Savage, and Wagman]davoudi2021nuclear author author Z. Davoudi, author W. Detmold, author P. Shanahan, author K. Orginos, author A. Parreno, author M. J. Savage, and author M. L. Wagman, title title Nuclear matrix elements from lattice QCD for electroweak and beyond-Standard-Model processes, https://doi.org/10.1016/j.physrep.2020.10.004 journal journal Phys. Rep. volume 900, pages 1 (year 2021a)NoStop [Drischler et al.(2021)Drischler, Haxton, McElvain, Mereghetti, Nicholson, Vranas, and Walker-Loud]drischler2021towards author author C. Drischler, author W. Haxton, author K. McElvain, author E. Mereghetti, author A. Nicholson, author P. Vranas, and author A. Walker-Loud, title title Towards grounding nuclear physics in QCD, https://doi.org/10.1016/j.ppnp.2021.103888 journal journal Prog. Part. Nucl. Phys. volume 121, pages 103888 (year 2021)NoStop [Ding et al.(2015)Ding, Karsch, and Mukherjee]ding2015thermodynamics author author H.-T. Ding, author F. Karsch, and author S. Mukherjee, title title Thermodynamics of strong-interaction matter from Lattice QCD, https://doi.org/10.1142/S0218301315300076 journal journal Int. J. Mod. Phys. E volume 24, pages 1530007 (year 2015)NoStop [Ratti(2018)]ratti2018lattice author author C. Ratti, title title Lattice QCD and heavy ion collisions: A review of recent progress, https://doi.org/10.1088/1361-6633/aabb97 journal journal Rep. Prog. Phys. volume 81, pages 084301 (year 2018)NoStop [Bazavov et al.(2019)Bazavov, Karsch, Mukherjee, and Petreczky]usqcd2019hot author author A. Bazavov, author F. Karsch, author S. Mukherjee, and author P. Petreczky, title title Hot-dense lattice QCD, https://doi.org/10.1140/epja/i2019-12922-0 journal journal Eur. Phys. J. A volume 55, pages 1 (year 2019)NoStop [Guenther(2021)]guenther2021overview author author J. N. Guenther, title title Overview of the QCD phase diagram: Recent progress from the lattice, https://doi.org/10.1140/epja/s10050-021-00354-6 journal journal Eur. Phys. J. A volume 57, pages 136 (year 2021)NoStop [Jordan et al.()Jordan, Lee, and Preskill]jordan2011quantum author author S. P. Jordan, author K. S. M. Lee, and author J. Preskill, title title Quantum Computation of Scattering in Scalar Quantum Field Theories, @noop https://arxiv.org/abs/1112.4833 arXiv:1112.4833 NoStop [Jordan et al.(2012)Jordan, Lee, and Preskill]jordanQuantumAlgorithmsQuantum2012 author author S. P. Jordan, author K. S. Lee, and author J. Preskill, title title Quantum algorithms for quantum field theories, https://doi.org/10.1126/SCIENCE.1217069 journal journal Science volume 336, pages 1130 (year 2012)NoStop [Jordan et al.(2018)Jordan, Krovi, Lee, and Preskill]jordan2018bqp author author S. P. Jordan, author H. Krovi, author K. S. 
Lee, and author J. Preskill, title title BQP-completeness of scattering in scalar quantum field theory, https://doi.org/10.22331/q-2018-01-08-44 journal journal Quantum volume 2, pages 44 (year 2018)NoStop [Roggero et al.(2020)Roggero, Li, Carlson, Gupta, and Perdue]roggero2020quantum author author A. Roggero, author A. C. Li, author J. Carlson, author R. Gupta, and author G. N. Perdue, title title Quantum computing for neutrino-nucleus scattering, https://doi.org/10.1103/physrevd.101.074038 journal journal Phys. Rev. D volume 101, pages 074038 (year 2020)NoStop [Mueller et al.(2020)Mueller, Tarasov, and Venugopalan]mueller2020deeply author author N. Mueller, author A. Tarasov, and author R. Venugopalan, title title Deeply inelastic scattering structure functions on a hybrid quantum computer, https://doi.org/10.1103/physrevd.102.016007 journal journal Phys. Rev. D volume 102, pages 016007 (year 2020)NoStop [Barata et al.(2021)Barata, Mueller, Tarasov, and Venugopalan]barata2021single author author J. Barata, author N. Mueller, author A. Tarasov, and author R. Venugopalan, title title Single-particle digitization strategy for quantum computation of a ϕ^4 scalar field theory, https://doi.org/10.1103/physreva.103.042410 journal journal Phys. Rev. A volume 103, pages 042410 (year 2021)NoStop [Farrell et al.(2023)Farrell, Chernyshev, Powell, Zemlevskiy, Illa, and Savage]farrell2023preparations author author R. C. Farrell, author I. A. Chernyshev, author S. J. M. Powell, author N. A. Zemlevskiy, author M. Illa, and author M. J. Savage, title title Preparations for quantum simulations of quantum chromodynamics in 1 + 1 dimensions. II. Single-baryon β -decay in real time, https://doi.org/10.1103/PhysRevD.107.054513 journal journal Phys. Rev. D volume 107, pages 054513 (year 2023)NoStop [Surace and Lerose(2021)]surace2021scattering author author F. M. Surace and author A. Lerose, title title Scattering of mesons in quantum simulators, https://doi.org/10.1088/1367-2630/abfc40 journal journal New J. Phys. volume 23, pages 062001 (year 2021)NoStop [Bauer et al.(2023a)Bauer, Davoudi, Balantekin, Bhattacharya, Carena, De Jong, Draper, El-Khadra, Gemelke, Hanada, Kharzeev, Lamm, Li, Liu, Lukin et al.]Bauer:2022hpo author author C. W. Bauer, author Z. Davoudi, author A. B. Balantekin, author T. Bhattacharya, author M. Carena, author W. A. De Jong, author P. Draper, author A. El-Khadra, author N. Gemelke, author M. Hanada, author D. Kharzeev, author H. Lamm, author Y.-Y. Li, author J. Liu, author M. Lukin, et al., title title Quantum Simulation for High-Energy Physics, https://doi.org/10.1103/PRXQuantum.4.027001 journal journal PRX Quantum volume 4, pages 027001 (year 2023a)NoStop [Beck et al.()Beck, Carlson, Davoudi, Formaggio, Quaglioni, Savage, Barata, Bhattacharya, Bishof, Cloet, Delgado, DeMarco, Fink, Florio, Francois et al.]Beck:2023xhh author author D. Beck, author J. Carlson, author Z. Davoudi, author J. Formaggio, author S. Quaglioni, author M. Savage, author J. Barata, author T. Bhattacharya, author M. Bishof, author I. Cloet, author A. Delgado, author M. DeMarco, author C. Fink, author A. Florio, author M. Francois, et al., title title Quantum Information Science and Technology for Nuclear Physics. Input into U.S. Long-Range Planning, 2023, @noop https://arxiv.org/abs/2303.00113 arXiv:2303.00113 NoStop [Klco et al.(2022)Klco, Roggero, and Savage]klco2022standard author author N. Klco, author A. Roggero, and author M. J. 
Savage, title title Standard model physics and the digital quantum revolution: Thoughts about the interface, https://doi.org/10.1088/1361-6633/ac58a4 journal journal Rep. Prog. Phys. volume 85, pages 064301 (year 2022)NoStop [Bauer et al.(2023b)Bauer, Davoudi, Klco, and Savage]bauerQuantumSimulationFundamental2023 author author C. W. Bauer, author Z. Davoudi, author N. Klco, and author M. J. Savage, title title Quantum simulation of fundamental particles and forces, https://doi.org/10.1038/s42254-023-00599-8 journal journal Nat Rev Phys volume 5, pages 420 (year 2023b)NoStop [Schollwöck(2011)]Schollwock2011 author author U. Schollwöck, title title The density-matrix renormalization group in the age of matrix product states, https://doi.org/10.1016/j.aop.2010.09.012 journal journal Ann. Phys. volume 326, pages 96 (year 2011)NoStop [Paeckel et al.(2019)Paeckel, Köhler, Swoboda, Manmana, Schollwöck, and Hubig]paeckelTimeevolutionMethodsMatrixproduct2019 author author S. Paeckel, author T. Köhler, author A. Swoboda, author S. R. Manmana, author U. Schollwöck, and author C. Hubig, title title Time-evolution methods for matrix-product states, https://doi.org/10.1016/j.aop.2019.167998 journal journal Ann. Phys. volume 411, pages 167998 (year 2019)NoStop [Pichler et al.(2016)Pichler, Dalmonte, Rico, Zoller, and Montangero]pichlerRealtimeDynamicsLattice2016 author author T. Pichler, author M. Dalmonte, author E. Rico, author P. Zoller, and author S. Montangero, title title Real-time Dynamics in U(1) Lattice Gauge Theories with Tensor Networks, https://doi.org/10.1103/PhysRevX.6.011023 journal journal Phys. Rev. X volume 6, pages 011023 (year 2016)NoStop [Rigobello et al.(2021)Rigobello, Notarnicola, Magnifico, and Montangero]rigobelloEntanglementGenerationQED2021a author author M. Rigobello, author S. Notarnicola, author G. Magnifico, and author S. Montangero, title title Entanglement generation in (1+1)D QED scattering processes, https://doi.org/10.1103/PhysRevD.104.114501 journal journal Phys. Rev. D volume 104, pages 114501 (year 2021)NoStop [Van Damme et al.(2021)Van Damme, Vanderstraeten, De Nardis, Haegeman, and Verstraete]vandammeRealtimeScatteringInteracting2021 author author M. Van Damme, author L. Vanderstraeten, author J. De Nardis, author J. Haegeman, and author F. Verstraete, title title Real-time scattering of interacting quasiparticles in quantum spin chains, https://doi.org/10.1103/PhysRevResearch.3.013078 journal journal Phys. Rev. Res. volume 3, pages 013078 (year 2021)NoStop [Milsted et al.(2022)Milsted, Liu, Preskill, and Vidal]milstedCollisionsFalseVacuumBubble2022 author author A. Milsted, author J. Liu, author J. Preskill, and author G. Vidal, title title Collisions of False-Vacuum Bubble Walls in a Quantum Spin Chain, https://doi.org/10.1103/PRXQuantum.3.020316 journal journal PRX Quantum volume 3, pages 020316 (year 2022)NoStop [Buyens et al.(2014)Buyens, Haegeman, Van Acoleyen, Verschelde, and Verstraete]buyensMatrixProductStates2014 author author B. Buyens, author J. Haegeman, author K. Van Acoleyen, author H. Verschelde, and author F. Verstraete, title title Matrix Product States for Gauge Field Theories, https://doi.org/10.1103/PhysRevLett.113.091601 journal journal Phys. Rev. Lett. volume 113, pages 091601 (year 2014)NoStop [Buyens et al.(2016)Buyens, Haegeman, Verschelde, Verstraete, and Van Acoleyen]buyensConfinementStringBreaking2016 author author B. Buyens, author J. Haegeman, author H. Verschelde, author F. Verstraete, and author K. 
Van Acoleyen, title title Confinement and String Breaking for QED 2 in the Hamiltonian Picture, https://doi.org/10.1103/PhysRevX.6.041040 journal journal Phys. Rev. X volume 6, pages 041040 (year 2016)NoStop [Buyens et al.(2017)Buyens, Haegeman, Hebenstreit, Verstraete, and Van Acoleyen]buyensRealtimeSimulationSchwinger2017 author author B. Buyens, author J. Haegeman, author F. Hebenstreit, author F. Verstraete, and author K. Van Acoleyen, title title Real-time simulation of the Schwinger effect with matrix product states, https://doi.org/10.1103/PhysRevD.96.114501 journal journal Phys. Rev. D volume 96, pages 114501 (year 2017)NoStop [Byrnes et al.(2002)Byrnes, Sriganesh, Bursill, and Hamer]byrnesDensityMatrixRenormalization2002 author author T. M. R. Byrnes, author P. Sriganesh, author R. J. Bursill, and author C. J. Hamer, title title Density matrix renormalization group approach to the massive Schwinger model, https://doi.org/10.1103/PhysRevD.66.013002 journal journal Phys. Rev. D volume 66, pages 013002 (year 2002)NoStop [Bañuls et al.(2013)Bañuls, Cichy, Cirac, and Jansen]banuls2013mass author author M. Bañuls, author K. Cichy, author J. Cirac, and author K. Jansen, title title The mass spectrum of the Schwinger model with matrix product states, https://doi.org/10.1007/JHEP11(2013)158 journal journal J. High Energ. Phys. volume 2013number (11), pages 158NoStop [Rico et al.(2014)Rico, Pichler, Dalmonte, Zoller, and Montangero]rico2014tensor number author author E. Rico, author T. Pichler, author M. Dalmonte, author P. Zoller, and author S. Montangero, title title Tensor Networks for Lattice Gauge Theories and Atomic Quantum Simulation, https://doi.org/10.1103/PhysRevLett.112.201601 journal journal Phys. Rev. Lett. volume 112, pages 201601 (year 2014)NoStop [Bañuls et al.(2015)Bañuls, Cichy, Cirac, Jansen, and Saito]banulsThermalEvolutionSchwinger2015 author author M. C. Bañuls, author K. Cichy, author J. I. Cirac, author K. Jansen, and author H. Saito, title title Thermal evolution of the Schwinger model with matrix product operators, https://doi.org/10.1103/PhysRevD.92.034519 journal journal Phys. Rev. D volume 92, pages 034519 (year 2015)NoStop [Zapp and Orús(2017)]zappTensorNetworkSimulation2017 author author K. Zapp and author R. Orús, title title Tensor network simulation of QED on infinite lattices: Learning from (1+1) d, and prospects for (2+1 ) d, https://doi.org/10.1103/PhysRevD.95.114508 journal journal Phys. Rev. D volume 95, pages 114508 (year 2017)NoStop [Funcke et al.(2020)Funcke, Jansen, and Kühn]funckeTopologicalVacuumStructure2020 author author L. Funcke, author K. Jansen, and author S. Kühn, title title Topological vacuum structure of the Schwinger model with matrix product states, https://doi.org/10.1103/PhysRevD.101.054507 journal journal Phys. Rev. D volume 101, pages 054507 (year 2020)NoStop [Butt et al.(2020)Butt, Catterall, Meurice, Sakai, and Unmuth-Yockey]buttTensorNetworkFormulation2020 author author N. Butt, author S. Catterall, author Y. Meurice, author R. Sakai, and author J. Unmuth-Yockey, title title Tensor network formulation of the massless Schwinger model with staggered fermions, https://doi.org/10.1103/PhysRevD.101.094509 journal journal Phys. Rev. D volume 101, pages 094509 (year 2020)NoStop [Bañuls and Cichy(2020)]banuls2020review author author M. C. Bañuls and author K. Cichy, title title Review on novel methods for lattice gauge theories, https://doi.org/10.1088/1361-6633/ab6311 journal journal Rep. Prog. Phys. 
volume 83, pages 024401 (year 2020)NoStop [Meurice et al.(2022)Meurice, Sakai, and Unmuth-Yockey]meurice2022tensor author author Y. Meurice, author R. Sakai, and author J. Unmuth-Yockey, title title Tensor lattice field theory for renormalization and quantum computing, https://doi.org/10.1103/RevModPhys.94.025005 journal journal Rev. Mod. Phys. volume 94, pages 025005 (year 2022)NoStop [Martinez et al.(2016)Martinez, Muschik, Schindler, Nigg, Erhard, Heyl, Hauke, Dalmonte, Monz, Zoller, and Blatt]martinez2016real author author E. A. Martinez, author C. A. Muschik, author P. Schindler, author D. Nigg, author A. Erhard, author M. Heyl, author P. Hauke, author M. Dalmonte, author T. Monz, author P. Zoller, and author R. Blatt, title title Real-time dynamics of lattice gauge theories with a few-qubit quantum computer, https://doi.org/10.1038/nature18318 journal journal Nature volume 534, pages 516 (year 2016)NoStop [Klco et al.(2018)Klco, Dumitrescu, McCaskey, Morris, Pooser, Sanz, Solano, Lougovski, and Savage]klco2018quantum author author N. Klco, author E. F. Dumitrescu, author A. J. McCaskey, author T. D. Morris, author R. C. Pooser, author M. Sanz, author E. Solano, author P. Lougovski, and author M. J. Savage, title title Quantum-classical computation of Schwinger model dynamics using quantum computers, https://doi.org/10.1103/PhysRevA.98.032331 journal journal Phys. Rev. A volume 98, pages 032331 (year 2018)NoStop [Nguyen et al.(2022)Nguyen, Tran, Zhu, Green, Alderete, Davoudi, and Linke]nguyenDigitalQuantumSimulation2022 author author N. H. Nguyen, author M. C. Tran, author Y. Zhu, author A. M. Green, author C. H. Alderete, author Z. Davoudi, and author N. M. Linke, title title Digital Quantum Simulation of the Schwinger Model and Symmetry Protection with Trapped Ions, https://doi.org/10.1103/PRXQuantum.3.020324 journal journal PRX Quantum volume 3, pages 020324 (year 2022)NoStop [Mueller et al.()Mueller, Carolan, Connelly, Davoudi, Dumitrescu, and Yeter-Aydeniz]muellerQuantumComputationDynamical2022 author author N. Mueller, author J. A. Carolan, author A. Connelly, author Z. Davoudi, author E. F. Dumitrescu, and author K. Yeter-Aydeniz, title title Quantum computation of dynamical quantum phase transitions and entanglement tomography in a lattice gauge theory, @noop https://arxiv.org/abs/2210.03089 arXiv:2210.03089 NoStop [Chakraborty et al.(2022)Chakraborty, Honda, Izubuchi, Kikuchi, and Tomiya]chakraborty:2020uhf author author B. Chakraborty, author M. Honda, author T. Izubuchi, author Y. Kikuchi, and author A. Tomiya, title title Classically emulated digital quantum simulation of the Schwinger model with a topological term via adiabatic state preparation, https://doi.org/10.1103/PhysRevD.105.094503 journal journal Phys. Rev. D volume 105, pages 094503 (year 2022)NoStop [de Jong et al.(2022)de Jong, Lee, Mulligan, Płoskoń, Ringer, and Yao]de2022quantum author author W. A. de Jong, author K. Lee, author J. Mulligan, author M. Płoskoń, author F. Ringer, and author X. Yao, title title Quantum simulation of nonequilibrium dynamics and thermalization in the Schwinger model, https://doi.org/10.1103/PhysRevD.106.054508 journal journal Phys. Rev. D volume 106, pages 054508 (year 2022)NoStop [Shaw et al.(2020)Shaw, Lougovski, Stryker, and Wiebe]shawQuantumAlgorithmsSimulating2020 author author A. F. Shaw, author P. Lougovski, author J. R. Stryker, and author N. 
Wiebe, title title Quantum Algorithms for Simulating the Lattice Schwinger Model, https://doi.org/10.22331/q-2020-08-10-306 journal journal Quantum volume 4, pages 306 (year 2020)NoStop [Kan and Nam()]kan2021lattice author author A. Kan and author Y. Nam, title title Lattice Quantum Chromodynamics and Electrodynamics on a Universal Quantum Computer, @noop https://arxiv.org/abs/2107.12769 arXiv:2107.12769 NoStop [Zhou et al.(2022)Zhou, Su, Halimeh, Ott, Sun, Hauke, Yang, Yuan, Berges, and Pan]zhouThermalizationDynamicsGauge2022 author author Z.-Y. Zhou, author G.-X. Su, author J. C. Halimeh, author R. Ott, author H. Sun, author P. Hauke, author B. Yang, author Z.-S. Yuan, author J. Berges, and author J.-W. Pan, title title Thermalization dynamics of a gauge theory on a quantum simulator, https://doi.org/10.1126/science.abl6277 journal journal Science volume 377, pages 311 (year 2022)NoStop [Yang et al.(2020)Yang, Sun, Ott, Wang, Zache, Halimeh, Yuan, Hauke, and Pan]yangObservationGaugeInvariance2020 author author B. Yang, author H. Sun, author R. Ott, author H.-Y. Wang, author T. V. Zache, author J. C. Halimeh, author Z.-S. Yuan, author P. Hauke, and author J.-W. Pan, title title Observation of gauge invariance in a 71-site Bose–Hubbard quantum simulator, https://doi.org/10.1038/s41586-020-2910-8 journal journal Nature volume 587, pages 392 (year 2020)NoStop [Mil et al.(2020)Mil, Zache, Hegde, Xia, Bhatt, Oberthaler, Hauke, Berges, and Jendrzejewski]milScalableRealizationLocal2020 author author A. Mil, author T. V. Zache, author A. Hegde, author A. Xia, author R. P. Bhatt, author M. K. Oberthaler, author P. Hauke, author J. Berges, and author F. Jendrzejewski, title title A scalable realization of local U(1) gauge invariance in cold atomic mixtures, https://doi.org/10.1126/science.aaz5312 journal journal Science volume 367, pages 1128 (year 2020)NoStop [Banerjee et al.(2012)Banerjee, Dalmonte, Müller, Rico, Stebler, Wiese, and Zoller]banerjeeAtomicQuantumSimulation2012 author author D. Banerjee, author M. Dalmonte, author M. Müller, author E. Rico, author P. Stebler, author U.-J. Wiese, and author P. Zoller, title title Atomic Quantum Simulation of Dynamical Gauge Fields Coupled to Fermionic Matter: From String Breaking to Evolution after a Quench, https://doi.org/10.1103/PhysRevLett.109.175302 journal journal Phys. Rev. Lett. volume 109, pages 175302 (year 2012)NoStop [Hauke et al.(2013)Hauke, Marcos, Dalmonte, and Zoller]haukeQuantumSimulationLattice2013a author author P. Hauke, author D. Marcos, author M. Dalmonte, and author P. Zoller, title title Quantum Simulation of a Lattice Schwinger Model in a Chain of Trapped Ions, https://doi.org/10.1103/PhysRevX.3.041018 journal journal Phys. Rev. X volume 3, pages 041018 (year 2013)NoStop [Wiese(2013)]wieseUltracoldQuantumGases2013 author author U.-J. Wiese, title title Ultracold quantum gases and lattice systems: Quantum simulation of lattice gauge theories, https://doi.org/10.1002/andp.201300104 journal journal Ann. Phys. volume 525, pages 777 (year 2013)NoStop [Zohar et al.(2015)Zohar, Cirac, and Reznik]zoharQuantumSimulationsLattice2015 author author E. Zohar, author J. I. Cirac, and author B. Reznik, title title Quantum simulations of lattice gauge theories using ultracold atoms in optical lattices, https://doi.org/10.1088/0034-4885/79/1/014401 journal journal Rep. Prog. Phys. volume 79, pages 014401 (year 2015)NoStop [Yang et al.(2016)Yang, Giri, Johanning, Wunderlich, Zoller, and Hauke]yangAnalogQuantumSimulation2016 author author D. 
Yang, author G. S. Giri, author M. Johanning, author C. Wunderlich, author P. Zoller, and author P. Hauke, title title Analog quantum simulation of ( 1 + 1 ) -dimensional lattice QED with trapped ions, https://doi.org/10.1103/PhysRevA.94.052321 journal journal Phys. Rev. A volume 94, pages 052321 (year 2016)NoStop [Davoudi et al.(2020)Davoudi, Hafezi, Monroe, Pagano, Seif, and Shaw]davoudiAnalogQuantumSimulations2020 author author Z. Davoudi, author M. Hafezi, author C. Monroe, author G. Pagano, author A. Seif, and author A. Shaw, title title Towards analog quantum simulations of lattice gauge theories with trapped ions, https://doi.org/10.1103/PhysRevResearch.2.023015 journal journal Phys. Rev. Research volume 2, pages 023015 (year 2020)NoStop [Luo et al.(2020)Luo, Shen, Highman, Clark, DeMarco, El-Khadra, and Gadway]luoFrameworkSimulatingGauge2020a author author D. Luo, author J. Shen, author M. Highman, author B. K. Clark, author B. DeMarco, author A. X. El-Khadra, and author B. Gadway, title title Framework for simulating gauge theories with dipolar spin systems, https://doi.org/10.1103/PhysRevA.102.032617 journal journal Phys. Rev. A volume 102, pages 032617 (year 2020)NoStop [Notarnicola et al.(2020)Notarnicola, Collura, and Montangero]notarnicolaRealtimedynamicsQuantumSimulation2020 author author S. Notarnicola, author M. Collura, and author S. Montangero, title title Real-time-dynamics quantum simulation of ( 1 + 1 ) -dimensional lattice QED with Rydberg atoms, https://doi.org/10.1103/PhysRevResearch.2.013288 journal journal Phys. Rev. Research volume 2, pages 013288 (year 2020)NoStop [Surace et al.(2020)Surace, Mazza, Giudici, Lerose, Gambassi, and Dalmonte]suraceLatticeGaugeTheories2020 author author F. M. Surace, author P. P. Mazza, author G. Giudici, author A. Lerose, author A. Gambassi, and author M. Dalmonte, title title Lattice Gauge Theories and String Dynamics in Rydberg Atom Quantum Simulators, https://doi.org/10.1103/PhysRevX.10.021041 journal journal Phys. Rev. X volume 10, pages 021041 (year 2020)NoStop [Davoudi et al.(2021b)Davoudi, Linke, and Pagano]davoudiSimulatingQuantumField2021 author author Z. Davoudi, author N. M. Linke, and author G. Pagano, title title Toward simulating quantum field theories with controlled phonon-ion dynamics: A hybrid analog-digital approach, https://doi.org/10.1103/PhysRevResearch.3.043072 journal journal Phys. Rev. Research volume 3, pages 043072 (year 2021b)NoStop [Andrade et al.(2022)Andrade, Davoudi, Graß, Hafezi, Pagano, and Seif]andrade2022engineering author author B. Andrade, author Z. Davoudi, author T. Graß, author M. Hafezi, author G. Pagano, and author A. Seif, title title Engineering an effective three-spin Hamiltonian in trapped-ion systems for applications in quantum simulation, https://doi.org/10.1088/2058-9565/ac5f5b journal journal Quantum Sci. Technol. volume 7, pages 034001 (year 2022)NoStop [Marcos et al.(2013)Marcos, Rabl, Rico, and Zoller]marcosSuperconductingCircuitsQuantum2013a author author D. Marcos, author P. Rabl, author E. Rico, and author P. Zoller, title title Superconducting Circuits for Quantum Simulation of Dynamical Gauge Fields, https://doi.org/10.1103/PhysRevLett.111.110504 journal journal Phys. Rev. Lett. volume 111, pages 110504 (year 2013)NoStop [Halimeh et al.(2022)Halimeh, McCulloch, Yang, and Hauke]halimehTuningTopologicalAngle2022 author author J. C. Halimeh, author I. P. McCulloch, author B. Yang, and author P. 
Hauke, title title Tuning the Topological θ-Angle in Cold-Atom Quantum Simulators of Gauge Theories, https://doi.org/10.1103/PRXQuantum.3.040316 journal journal PRX Quantum volume 3, pages 040316 (year 2022)NoStop [Osborne et al.()Osborne, Yang, McCulloch, Hauke, and Halimeh]osborneSpinMathrmUQuantum2023a author author J. Osborne, author B. Yang, author I. P. McCulloch, author P. Hauke, and author J. C. Halimeh, title title Spin-S U(1) Quantum Link Models with Dynamical Matter on a Quantum Simulator, @noop https://arxiv.org/abs/2305.06368 arXiv:2305.06368 NoStop [Kruckenhauser et al.()Kruckenhauser, van Bijnen, Zache, Di Liberto, and Zoller]kruckenhauserHighdimensionalSymmetricRydberg2022 author author A. Kruckenhauser, author R. van Bijnen, author T. V. Zache, author M. Di Liberto, and author P. Zoller, title title High-dimensional SO(4)-symmetric Rydberg manifolds for quantum simulation, @noop https://arxiv.org/abs/2206.01108 arXiv:2206.01108 NoStop [Forn-Díaz et al.(2019)Forn-Díaz, Lamata, Rico, Kono, and Solano]Forn-Diaz2019 author author P. Forn-Díaz, author L. Lamata, author E. Rico, author J. Kono, and author E. Solano, title title Ultrastrong coupling regimes of light-matter interaction, https://doi.org/10.1103/RevModPhys.91.025005 journal journal Rev. Mod. Phys volume 91, pages 25005 (year 2019)NoStop [Frisk Kockum et al.(2019)Frisk Kockum, Miranowicz, De Liberato, Savasta, and Nori]FriskKockum author author A. Frisk Kockum, author A. Miranowicz, author S. De Liberato, author S. Savasta, and author F. Nori, title title Ultrastrong coupling between light and matter, https://doi.org/10.1038/s42254-018-0006-2 journal journal Nat. Rev. Phys. volume 1, pages 19 (year 2019)NoStop [Vanderstraeten et al.(2019)Vanderstraeten, Haegeman, and Verstraete]vanderstraetenTangentspaceMethodsUniform2019 author author L. Vanderstraeten, author J. Haegeman, and author F. Verstraete, title title Tangent-space methods for uniform matrix product states, https://doi.org/10.21468/SciPostPhysLectNotes.7 journal journal SciPost Phys. Lect. Notes , pages 7 (year 2019)NoStop [Berges et al.(2018a)Berges, Floerchinger, and Venugopalan]Berges:2017zws author author J. Berges, author S. Floerchinger, and author R. Venugopalan, title title Thermal excitation spectrum from entanglement in an expanding quantum string, https://doi.org/10.1016/j.physletb.2018.01.068 journal journal Phys. Lett. B volume 778, pages 442 (year 2018a)NoStop [Berges et al.(2018b)Berges, Floerchinger, and Venugopalan]Berges:2017hne author author J. Berges, author S. Floerchinger, and author R. Venugopalan, title title Dynamics of entanglement in expanding quantum fields, https://doi.org/10.1007/jhep04(2018)145 journal journal J. High Energy Phys. volume 04number (4), pages 145NoStop [Kharzeev and Levin(2017)]kharzeev2017deep number author author D. E. Kharzeev and author E. M. Levin, title title Deep inelastic scattering as a probe of entanglement, https://doi.org/10.1103/physrevd.95.114008 journal journal Phys. Rev. D volume 95, pages 114008 (year 2017)NoStop [Hagiwara et al.(2018)Hagiwara, Hatta, Xiao, and Yuan]hagiwara2018classical author author Y. Hagiwara, author Y. Hatta, author B.-W. Xiao, and author F. Yuan, title title Classical and quantum entropy of parton distributions, https://doi.org/10.1103/physrevd.97.094029 journal journal Phys. Rev. D volume 97, pages 094029 (year 2018)NoStop [Kovner et al.(2019)Kovner, Lublinsky, and Serino]kovner2019entanglement author author A. Kovner, author M. Lublinsky, and author M. 
http://arxiv.org/abs/2307.00955v1
20230703115826
Combinatorics on Number Walls and the $p(t)$-adic Littlewood Conjecture
[ "Steven Robertson" ]
math.NT
[ "math.NT" ]
Let p be a prime number and α a real number. In analogy with the classical Littlewood Conjecture, de Mathan and Teulié conjectured that inf_|m|≥1|m|_p· |m|· |⟨α m⟩|=0 where |m| is the usual absolute value, |m|_p is the p-adic norm and |⟨ x⟩| is the distance from x∈ℝ to the nearest integer. This is the p-adic Littlewood Conjecture. Let 𝕂 be a field and p(t) be an irreducible polynomial with coefficients in 𝕂. This paper deals with the analogue of this conjecture over the field of formal Laurent series over 𝕂, known as the p(t)-adic Littlewood Conjecture (p(t)-LC). Firstly, a given counterexample to p(t)-LC for the case p(t)=t is shown to generate an explicit counterexample when p(t) is any irreducible polynomial. Since Adiceam, Nesharim and Lunnon <cit.> proved that p(t)-LC is false when p(t)=t and 𝕂 has characteristic three, one obtains a disproof of p(t)-LC over any such field in full generality (i.e. for any choice of irreducible polynomial p(t)). Secondly, a Khintchine-type theorem for p(t)-adic multiplicative approximation is established. This enables one to determine the measure of the set of counterexamples to p(t)-LC with an additional monotonic growth function. In complement to this result, the Hausdorff dimension of the same set is shown to be maximal when p(t)=t in the critical case where the growth function is log^2. These results are in agreement with the corresponding theory of multiplicative Diophantine approximation over the reals. These goals are achieved by developing an extensive theory in combinatorics relating p(t)-LC to the properties of the so-called number wall of a sequence. This is an infinite array containing the determinant of every finite Toeplitz matrix generated by that sequence. In full generality, the main novelty of this paper is creating a dictionary allowing one to transfer statements in Diophantine approximation in positive characteristic to combinatorics through the concept of a number wall, and conversely. § INTRODUCTION Let α be a real number. Let |α| denote the usual absolute value and define |⟨α⟩| as the distance from α to its nearest integer[Although this notation is not the standard, it is in line with the notation used later in the function field analogue.]. A classical conjecture due to Littlewood states that for any real numbers α and β, inf_n∈ℕ\{0}|⟨ nα⟩|·|⟨ nβ⟩|·|n|=0. The Littlewood Conjecture (from now on abbreviated to LC) remains open. The current state of the art is a result by Einsiedler, Katok and Lindenstrauss <cit.>, who proved that the set of counterexamples to LC has Hausdorff dimension zero. In 2004, de Mathan and Teulié <cit.> suggested a variant commonly known as the p-adic Littlewood Conjecture (abbreviated to p-LC). Let p be a prime number. Given a natural number n expanded as n=p^k· n_1, where k and n_1 are natural numbers and n_1 and p are coprime, define its p-adic norm as |n|_p:=p^-k. For any prime p and any real number α, inf_n∈ℕ\{0}|n|· |n|_p·|⟨ nα⟩|=0. In 2007, Einsiedler and Kleinbock <cit.> proved that the set of counterexamples to p-LC has Hausdorff dimension zero. Metric results have been obtained when a growth function is added to the left hand side of equation (<ref>).
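Before stating these metric results, the quantity appearing in p-LC can be illustrated numerically. The following Python sketch is an illustration only: the floating-point representation of α, the truncation of the infimum at a finite bound N and the example parameters in the last line are choices made here for exposition and are not taken from the paper.

import math

def p_adic_norm(n, p):
    # |n|_p = p^(-k), where p^k is the largest power of the prime p dividing the nonzero integer n
    n = abs(n)
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return p ** (-k)

def dist_to_nearest_integer(x):
    # |<x>|: distance from the real number x to the nearest integer
    return abs(x - round(x))

def truncated_pLC_infimum(alpha, p, N):
    # min over 1 <= n <= N of |n| * |n|_p * |<n*alpha>|; p-LC asserts that this tends to 0 as N grows
    return min(n * p_adic_norm(n, p) * dist_to_nearest_integer(n * alpha)
               for n in range(1, N + 1))

# Hypothetical example: alpha = sqrt(2), p = 2, truncation bound N = 10**5
print(truncated_pLC_infimum(math.sqrt(2), 2, 10 ** 5))

With this picture in mind, the metric results mentioned above can be stated.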
Explicitly, for a non-decreasing function f and prime number p, define the sets W(f,p): ={α∈[0,1): inf_n∈\{0}f(n)· n·|n|_p·|⟨α· n⟩|=0}, and M(f,p)=[0,1)\ W(f,p). That is, W(f,p) is the set of real numbers in the unit interval satisfying p-LC with growth function f, and M(f,p) is the set of those numbers failing it. In 2009, Bugeaud and Moshchevitin <cit.> proved that M(log^2,p) has full Hausdorff dimension. Two years later, Bugeaud, Haynes and Velani <cit.> proved the Lebesgue measure of the same set is zero. Then, Badziahin and Velani <cit.> established that M(log(n)·log(log(n)),p)=1. Above and throughout, refers to the Hausdorff dimension. The p-adic Littlewood Conjecture admits a natural analogue over function fields. In order to state it, here and throughout this paper q is a power of a prime number and _q denotes the field with cardinality q. Additionally, 𝔽_q[t] is the ring of polynomials with coefficients in 𝔽_q and (_q[t])_n is the subset of _q[t] comprised of only the polynomials of degree less than or equal to n∈. Similarly, 𝔽_q(t) is the field of rational functions over 𝔽_q. The absolute value of Θ(t)∈_q(t) is then |Θ(t)|=q^(Θ(t)). The real and function field absolute values share a notation, and it will be clear from context which is meant. Using this metric, the completion of the field of rational functions is given by 𝔽_q((t^-1)), the field of formal Laurent series in the variable t^-1 with coefficients in 𝔽_q. An element Θ(t) of this field is written explicitly as Θ(t)=∑^∞_i=-ha_it^-i for some h in ℤ and a_i in _q with a_h≠0. With this expression, the degree of Θ(t)∈_q((t^-1)) is defined as (Θ(t)):= h. The fractional part of Θ(t) is given by ⟨Θ(t) ⟩ := ∑^∞_i=1a_it^-i. In analogy with the real numbers, the unit interval is defined as ={Θ(t)∈_q((t^-1)): |Θ(t)|<1}. Alternatively, it is the set containing exactly the fractional parts of every Θ(t)∈_q((t^-1)). The analogue of the prime numbers is the set of irreducible polynomials. Given such an irreducible polynomial p(t)∈𝔽_q[t], the following is used to define the analogue of the p-adic norm: any Θ(t)∈𝔽_q((t^-1)) can be expanded uniquely as Θ(t) = ∑^∞_i=-HA_i(t)p(t)^-i, where H=⌊(Θ(t))/(p(t))⌋ and A_i(t)∈(𝔽_q[t])_(p(t))-1 for i≥ -H. This is known as the base p(t) expansion of Θ(t) and it is unique and well-defined. Let m be the degree of p(t) and let N(t)∈_q[t] be a polynomial with base p(t) expansion N(t):=∑_i=0^h A_i(t)p(t)^i. The p(t)-adic norm of N(t) is defined as |N(t)|_p(t)=q^-mi, where i=min{j≥0: A_j≠0}. The function field analogue of p-LC, also due to de Mathan and Teulié <cit.>, reads as follows: For any irreducible polynomial p(t) in _q[t] and any Laurent series Θ(t)∈_q((t^-1)), inf_N(t)∈_q[t]\{0}|N(t)|· |N(t)|_p(t)·|⟨ N(t)·Θ(t)⟩|=0. The p(t)-adic Littlewood Conjecture is abbreviated to p(t)-LC. When p(t)=t, it becomes the t-adic Littlewood Conjecture (abbreviated to t-LC). In the same paper as it was conjectured, de Mathan and Teulié establish that t-LC is false over infinite fields. This work was extended by Bugeaud and de Mathan <cit.>, who provide explicit counterexamples to t-LC in this case, whilst also providing examples of power series satisfying t-LC in any characteristic. Over finite fields, p(t)-LC is more challenging. In 2017, Einsiedler, Lindenstrauss and Mohammadi <cit.> proved results on the positive characteristic analogue of the measure classification results of Einsiedler, Katok and Lindenstrauss <cit.>, but their work falls short of implying that p(t)-LC fails on a set of Hausdorff dimension zero. 
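Since the base p(t) expansion and the p(t)-adic norm defined above are used throughout, a small computational illustration may help. The following Python sketch is only a sketch: it assumes q prime (so that elements of 𝔽_q can be handled as integers modulo q), encodes a polynomial as the list of its coefficients with i-th entry the coefficient of t^i, and its helper names and example polynomials are choices of the sketch rather than anything taken from the paper.

def trim(a):
    # drop trailing zero coefficients in place; a[i] is the coefficient of t^i
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_divmod(a, b, q):
    # long division of a by b over F_q (q prime); returns (quotient, remainder)
    a = trim([x % q for x in a])
    b = trim([x % q for x in b])
    inv_lead = pow(b[-1], -1, q)
    quot = [0] * max(len(a) - len(b) + 1, 0)
    while a and len(a) >= len(b):
        shift = len(a) - len(b)
        factor = (a[-1] * inv_lead) % q
        quot[shift] = factor
        for i, c in enumerate(b):
            a[i + shift] = (a[i + shift] - factor * c) % q
        trim(a)
    return quot, a

def pt_adic_norm(N, p, q):
    # |N(t)|_{p(t)} = q^(-m*i), with m = deg p(t) and i the multiplicity of p(t) in the nonzero polynomial N(t)
    m = len(trim(p[:])) - 1
    i, cur = 0, N[:]
    while True:
        quot, rem = poly_divmod(cur, p, q)
        if trim(rem[:]):            # p(t) no longer divides: stop
            return q ** (-m * i)
        i, cur = i + 1, quot

# Example over F_3 with p(t) = t^2 + 1 and N(t) = (t^2 + 1)^2 * (t + 2); the expected value is 3**(-4)
print(pt_adic_norm([2, 1, 1, 2, 2, 1], [1, 0, 1], 3))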
In 2021, Adiceam, Nesharim and Lunnon <cit.> found an explicit counterexample to t-LC over _q, for q a power of 3. They conjectured that the same Laurent series is a counterexample over _q for q a power of any prime congruent to 3 modulo 4. From this breakthrough, it is tempting to believe that p(t)-LC is false over all finite fields and for all irreducible polynomials p(t). The first result in this paper provides further evidence to this claim by showing that any counterexample to t-LC induces a counterexample to p(t)-LC for any irreducible p(t). This result thus reduces the problem of totally disproving p(t)-LC to only disproving t-LC. It is not expected that such a transference result should exist over . Let l be a natural number, p(t) be an irreducible polynomial of degree m over _q and Θ(t):=∑_i=1^∞ b_it^-i∈𝔽_q((t^-1)) be a Laurent series. Assume that inf_N(t)∈_q[t]\{0}|N(t)|· |N(t)|_t·|⟨ N(t)·Θ(t)⟩|=q^-l. Then, the Laurent series Θ(p(t))∈𝔽_q((p(t)^-1))⊂𝔽_q((t^-1)) satisfies inf_N(t)∈_q[t]\{0}|N(t)|· |N(t)|_p(t)·|⟨ N(t)·Θ(p(t))⟩|=q^-lm. In particular, if Θ(t) is a counterexample to t-LC in the sense that the infimum in equation (<ref>) is positive, then Θ(p(t)) is a counterexample to p(t)-LC in the sense that the infimum in equation (<ref>) is positive. Combining this with the counterexamples from Adiceam, Nesharim and Lunnon <cit.> and de Mathan and Bugeaud <cit.> leads to the following corollary: For all irreducible polynomials p(t), the p(t)-adic Littlewood Conjecture is false over infinite fields and also over any field of characteristic 3. The remaining theorems in this paper are of a metric nature. The implicit measure used in the related statements is the Haar measure over _q((t^-1))), denoted by μ. For an integer l, define a ball of radius q^-l around a Laurent series Θ(t)∈_q((t^-1)) as B(Θ(t),q^-l)={Φ∈_q((t^-1)): |Θ(t)-Φ(t)|≤ q^-l}. Explicitly, this set consists of all the Laurent series which have the same coefficients as Θ(t) up to and including the power of t^-l. The Haar measure over _q((t^-1)) is characterised by its translation invariance: for any l∈ and any Θ(t)∈_q((t^-1)) μ(B(Θ(t),q^-l))=q^-l. Let f:{0}∪{q^n:n∈}→^+ be a monotonic increasing function. The following sets are the natural analogues of those defined in (<ref>) and (<ref>): W(f,p(t)): ={Θ∈𝕀: inf_N(t)∈_q[t]\{0}f(|N(t)|)·|N(t)|·|N(t)|_p(t)|⟨Θ· N(t)⟩|=0} and M(f,p(t)):={Θ∈: inf_N(t)∈_q[t]\{0}f(|N(t)|)· |N(t)|_(p(t))· |N(t)|· |⟨Θ· N(t)⟩|>0}. Thus, W(f,p(t)) is the set of Laurent series satisfying p(t)-LC with growth function f, and M(f,p(t)) is the complement of W(f,p(t)). Theorem <ref> suggests t-LC underpins p(t)-LC, and hence the study of t-LC instead of the more general p(t)-LC is justified. The strategy for the remainder of this paper is to rephrase t-LC using the concept of a number wall. Section 3 provides a rigorous introduction to number walls, but for the sake of the introduction, the following definitions will suffice. A matrix is Toeplitz if every entry on a given diagonal is equal. It is thus determined by the sequence comprising its first column and row. The number wall of a sequence S=(s_i)_i∈ is a two dimensioanl array containing the determinants of all the possible finite Toeplitz matrices generated by consecutive elements of S. Specifically, the entry in row m and column n is the determinant of the (m+1)× (m+1) Toeplitz matrix with top left entry s_n. Zero entries in number walls can only appear in square shapes, called windows. 
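The informal definition just given can be turned into a direct, if inefficient, computation. The Python sketch below builds the number wall of a finite sequence over 𝔽_q by evaluating every Toeplitz determinant; it again assumes q prime, indexes the sequence from 0 and stores the wall as a dictionary keyed by (row, column), all of which are conventions of the sketch rather than of the paper.

def det_mod_p(M, q):
    # determinant of a square matrix over F_q (q prime) by Gaussian elimination
    M = [row[:] for row in M]
    size = len(M)
    det = 1
    for col in range(size):
        pivot = next((r for r in range(col, size) if M[r][col] % q != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = (det * M[col][col]) % q
        inv = pow(M[col][col] % q, -1, q)
        for r in range(col + 1, size):
            factor = (M[r][col] * inv) % q
            for c in range(col, size):
                M[r][c] = (M[r][c] - factor * M[col][c]) % q
    return det % q

def number_wall(s, q):
    # wall[(m, n)] = determinant of the (m+1) x (m+1) Toeplitz matrix with top-left entry s[n],
    # i.e. with (i, j) entry s[n + j - i]; only entries whose matrix fits inside the finite
    # sequence are computed, which yields the triangular wall of a finite sequence
    r = len(s)
    wall = {}
    for m in range((r + 1) // 2):
        for n in range(m, r - m):
            T = [[s[n + j - i] for j in range(m + 1)] for i in range(m + 1)]
            wall[(m, n)] = det_mod_p(T, q)
    return wall

# Example over F_2 with an arbitrary short sequence; its zero entries sit inside square windows
wall = number_wall([1, 1, 0, 1, 0, 0, 1, 1, 0], 2)

More efficient ways of generating a number wall, based on the recurrences given later in this section, avoid computing any determinant explicitly; the brute-force version above is only meant to make the definition concrete.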
The key insight of this paper is the following theorem, which reduces the Diophantine problem underpinning t-LC to combinatorics on number walls: Let Θ(t)=∑_i=1^∞ s_it^-i∈𝔽_q((t^-1)), l be a natural number and f:ℕ→ℝ be a positive monotonic increasing function. The following are equivalent: * The inequality inf_N(t)∈𝔽_q[t]\{0}f(|N(t)|)· |N(t)|· |N(t)|_t· |⟨ N(t)·Θ(t) ⟩|≥ q^-l. is satisfied (In particular, Θ(t) is a counterexample to the t-LC with growth function f). * For every m,n∈ℕ, there is no window of size greater than or equal to l+⌈log_q(f(q^m+n))⌉-1 with top left corner on row m and column m+n+1 in the number wall generated by the sequence S=(s_i)_i∈. Above, log_q(x) is the base-q logarithm of a real number x. Sections 3 develops an extensive theory of combinatorics on number walls, resulting in explicit formulae for how many finite sequences have a given window in their number wall. Section 5 extends these results to count how many finite sequences have any given pair of disconnected windows, or a window containing any given connected zero portion. These combinatorial results are hence used to prove a Khintchine-type result on p(t)-adic multiplicative approximation. Let f:{0}∪{q^k}_k≥0→ℝ+ be a monotonic increasing function and p(t)∈_q[t] be an irreducible polynomial. Then μ(W(f,p(t)))=0 if ∑_k≥0k/f(q^k)<∞ 1 if ∑_k≥0k/f(q^k)=∞. This amounts to claiming that the set of counterexamples to p(t)-LC with growth function f has zero or full measure depending on if the sum in (<ref>) converges or diverges. This agrees with the analogous theorem over the real numbers, proved in <cit.> by Bugeaud, Haynes and Velani. The combinatorial theory developed to study the properties of number walls in Section 3 is also used to prove the final theorem in this paper, which computes the Hausdorff dimension of M(f,p(t)) in the case p(t)=t and f(|N|)=log_q(|N|)^2. Recalling Definition <ref>, this provides the function field analogue of the 2009 result by Bugeaud and Moshchevitin <cit.>, stating M(log^2,p)=1 in the real case. The set of counterexamples to the t-LC with growth function log^2 has full Hausdorff dimension. That is, (M(log^2,t))=1. Note, that Theorem <ref> is nontrivial as Theorem <ref> implies that M(log^2,t) has measure zero. §.§.§ Acknowledgements The Author is grateful to his supervisor Faustin Adiceam for his consistent support, supervision and advice throughout the duration of this project. Furthermore, the Author acknowledges the financial support of the Heilbronn Institute. § FROM COUNTEREXAMPLES TO T-LC TO COUNTEREXAMPLES TO P(T)-LC This chapter is dedicated to proving Theorem <ref>; namely that a counterexample to t-LC implies a counterexample to p(t)-LC for any irreducible p(t). Decomposing the polynomial N(t) as p(t)· N'(t) for N'(t)∈_q[t] coprime to p(t), the equation in (<ref>) is rephrased as inf_N'(t)∈𝔽_q[t]\{0} k≥0|⟨Θ(t)· N'(t) · p(t)^k⟩|·|N'(t)|=0. Due to the infimum being over all polynomials N'(t) and natural numbers k, it is simple to see that N'(t) being coprime to p(t) is not required. Given a Laurent series F(t)∈𝔽_q((x^-1)), define _x(F(t)) as the degree of F(t) measured with respect to the variable t, and let |F|^(t):=q^_t(F) be its norm. Throughout this section, p(t)∈_q[t] is an irreducible polynomial and m:=_t(p(t)). Let l be a natural number and Θ(t):=∑_i=0^∞ s_it^-i be a Laurent series in _q((t^-1)). Then, inf_N(t)∈𝔽_q[t]\{0} k≥0|⟨Θ(t)· N(t) · t^k⟩|^(t)·|N(t)|^(t)=q^-l if and only if inf_N(p(t))∈𝔽_q[p(t)]\{0} k≥0|⟨Θ(p(t))· N(p(t)) · p(t)^k⟩|^(t)·|N(p(t))|^(t)=q^-ml. 
Let N(t)=∑^h_j=0a_jt^j be a nonzero polynomial in 𝔽_q[t]. Define h:=_tN(t). Then, (<ref>) holds if and only if |⟨Θ(p(t))· N(p(t)) · p(t)^k⟩|^(t)≥ q^-m(l+h) for any positive integer k. Furthermore, ⟨Θ(p(t))· N(p(t)) · p(t)^k⟩ = ⟨(∑^∞_i=1s_ip(t)^-i)·(∑^h_j=0a_jp(t)^j) · p(t)^k⟩ =⟨∑^∞_i=1(∑^h_j=0a_js_i+j+k)p(t)^-i⟩, implying that _t(⟨Θ(p(t))· N(p(t)) · p(t)^k⟩)∈{-mn: n≥1}, as the expansion of p(t)^-i has degree -im, since p(t)^i· p(t)^-i=1. Relation (<ref>) and the definition of the absolute value (<ref>) imply that (<ref>) holds if and only if, for any positive integer k, there exists some 1≤ i ≤ l+h such that ∑^h_j=0a_js_i+j+k≠0. Explicitly, this means that the system of equations [ s_1+k s_2+k … s_h+1+k; s_2+k s_3+k … s_h+2+k; ⋮ ⋮; s_l'+h+k s_l'+h+k+1 … s_l'+2h+k; ][ a_0; a_1; ⋮; a_h ]=[ 0; ⋮; 0; ] has no non trivial solution. This condition is independent of the choice of p(t), and holds in particular when p(t)=t. This shows the equivalency of (<ref>) and (<ref>). Consider a polynomial N(t)∈𝔽_q[t]\{0} expanded in base p(t) as: N(t)=∑_i=0^h' A_i(t)p(t)^i with h':=⌊_tN(t)/m⌋ and _t(A_i(t))≤ m-1 for every 0≤ i≤ h'. Let a_j,i be the coefficient of t^j in A_i(t). Then, N(t)=∑_i=0^h'∑^m-1_j=0a_j,it^jp(t)^i =∑^m-1_j=0t^j∑_i=0^h'a_j,ip(t)^i :=∑^m-1_j=0t^j N_j(p(t)), where N_j(p(t)):=∑^h'_i=0a_j,ip(t)^i∈𝔽_q[p(t)]. Therefore, inf_N(t)∈𝔽_q[t]\{0} k≥0 |⟨Θ(p(t))· N(t) · p(t)^k⟩|^(t)·|N(t)|^(t) = inf |⟨Θ(p(t))·∑^m-1_j=0t^jN_j(p(t)) · p(t)^k⟩|^(t)·|∑^m-1_j=0t^jN_j(p(t))|^(t) = inf |∑^m-1_j=0⟨Θ(p(t))· t^jN_j(p(t)) · p(t)^k⟩|^(t)·|∑^m-1_j=0t^jN_j(p(t))|^(t). Above, and throughout the rest of the proof, the infimum is taken over natural numbers k≥0 and all polynomials N_0(p(t)),…,N_m-1(p(t))∈𝔽_q[p(t)], ignoring the case where they are all zero. It is clear that ⟨Θ(p(t))· N(p(t)) · p(t)^k⟩ has no nonzero coefficient for t^-1,…,t^-m+1, which implies that for any 0≤ j ≤ m-1, ⟨Θ(p(t))· t^jN(p(t)) · p(t)^k⟩=t^j⟨Θ(p(t))· N(p(t)) · p(t)^k⟩. Hence, (<ref>)=inf |∑^m-1_j=0t^j⟨Θ(p(t))· N_j(p(t)) · p(t)^k⟩|^(t)·|∑^m-1_j=0t^jN_j(p(t))|^(t) If the degree of ⟨Θ(p(t))· N(p(t)) · p(t)^k⟩ is -nm, then for any 0≤ j ≤ m-1 the degree of t^j⟨Θ(p(t))· N(p(t)) · p(t)^k⟩ is -nm+j≡ j m. This implies that the degree of each of the terms in ∑^m-1_j=0t^j⟨Θ(p(t))· N_j(p(t)) · p(t)^k⟩ are all different and therefore the degree of the sum is equal to the greatest degree of its terms. The same is true for |∑^m-1_j=0t^jN_j(p(t))|^(t). As a consequence, (<ref>) =inf max_j∈{0,…,m-1}|t^j⟨Θ(p(t))· N_j(p(t)) · p(t)^k⟩|^(t)max_i∈{0,…,m-1}|t^iN_i(p(t))|^(t) =inf max_j∈{0,…,m-1}q^j|⟨Θ(p(t))· N_j(p(t)) · p(t)^k⟩|^(t)max_i∈{0,…,m-1}q^i|N_i(p(t))|^(t). Let j' and i' be the values of j and i, respectively, that attain the maxima in (<ref>). Then, max_j∈{0,…,m-1}q^j|⟨Θ(p(t))· N_j(p(t)) · p(t)^k⟩|^(t) max_i∈{0,…,m-1}q^i|N_i(p(t))|^(t) ≥min_ h∈{j',i'}q^h|⟨Θ(p(t))· N_h(p(t)) · p(t)^k⟩|^(t)q^h|N_h(p(t))|^(t) (<ref>)≥ q^-ml+2h This provides a lower bound for the value of (<ref>), namely the case where h=0. The fact that 𝔽_q[p(t)]⊂𝔽_q[t] and the equation (<ref>) show that this lower bound is attained, hence (<ref>)= q^-lm. This completes the proof of Theorem <ref>. The next section proves the link between t-LC and number walls and develops the combinatorial theory required to prove Theorem <ref>. § COMBINATORIAL PROPERTIES OF NUMBER WALLS §.§ Fundamentals of Number Walls This section serves as an introduction to number walls. Only the theorems crucial for this paper are mentioned, and the proofs can be found in the references. 
For a more comprehensive look at number walls, see <cit.>,<cit.>, <cit.> and <cit.>. The following definition provides the building blocks of a number wall. A matrix (s_i,j) for 0≤ i≤ n, 0≤ j ≤ m is called Toeplitz (Hankel, respectively) if all the entries on a diagonal (on an anti-diagonal, respectively) are equal. Equivalently, s_i,j=s_i+1,j+1 (s_i,j=s_i+1,j-1, respectively) for any n∈ℕ such that this entry is defined. Given a doubly infinite sequence S:= (s_i)_i∈ℤ, natural numbers m and v, and an integer n, define an (m+1)× (v+1) Toeplitz matrix T_S(n, m, v):= (s_i-j+n)_0≤ i ≤ m, 0≤ j ≤ v and a Hankel matrix of the same size as H_A(n, m, v):= (s_i+j+n)_0≤ i ≤ m, 0≤ j ≤ v: T_S(n,m,v):=[ s_n s_n+1 … s_n+v; s_n-1 s_n … s_n+v-1; ⋮ ⋮; s_n-m s_n-m+1 … s_n-m+v ], H_S(n,m,v):=[ s_n s_n+1 … s_n+v; s_n+1 s_n+2 … s_n+v+1; ⋮ ⋮; s_n+m s_n+m+1 … s_n+m+v ]. If v=m, these are shortened to T_S(n,m) and H_S(n,m), respectively. The Laurent series Θ(t)=∑^∞_i=1s_it^-1∈_q((t^-1)) is identified to the sequence S=(s_i)_i≥1. Therefore, define T_Θ(n,m,v)=T_S(n,m,v) and H_Θ(n,m,v):=H_S(n,m,v). By elementary row operations, it holds that. (H_A(n,m))=(-1)^m(m+1)/2(T_A(n+m,m)). Let S=(s_i)_i∈ℤ be a doubly infinite sequence over a finite field 𝔽_q. The number wall of the sequence S is defined as the two dimensional array of numbers W(S)=(W_m,n(S))_n,m∈ℤ with W_m,n(S)=(T_S(n,m)) m≥0 1 m=-1 0 m<-1, In keeping with standard matrix notation, m increases as the rows go down the page and n increases from left to right. A key feature of number walls is that the zero entries can only appear in specific shapes: Zero entries in a Number Wall can only occur within windows; that is, within square regions with horizontal and vertical edges. See <cit.>. For the problems under consideration, the zero windows carry the most important information in the number wall. Theorem <ref> implies that the border of a window[“Zero window" is abbreviated to just “window" for the remainder of this paper.] is always the boundary of a square with nonzero entries. This motivates the following definition: The entries of a number wall surrounding a window are referred to as the inner frame. The entries surrounding the inner frame form the outer frame. The entries of the inner frame are extremely well structured: The inner frame of a window with side length l≥1 is comprised of 4 geometric sequences. These are along the top, left, right and bottom edges and they have ratios P,Q,R and S respectively with origins at the top left and bottom right. Furthermore, these ratios satisfy the relation PS/QR=(-1)^l. See <cit.>. See Figure <ref> for an example of a window of side length l. For i∈{0,…,l+1}, the inner and outer frames are labelled by the entries A_i,B_i,C_i,D_i and E_i,F_i,G_i,H_i respectively. The ratios of the geometric sequences comprising the inner frame are labelled as P,Q,R and S. Calculating a number wall from its definition is a computationally exhausting task. The following theorem gives a simple and far more efficient way to calculate the m^ row using the previous rows. Given a doubly infinite sequence S=(s_i)_i∈ℤ, the number wall (W_m,n)_n,m∈ℤ:=(W_m,n(S))_n,m∈ℤ can be generated by a recurrence in row m∈ℤ in terms of the previous rows. More precisely, with the notation of Figure <ref>, W_m,n= 0 m<-1 (m,n) 1 m=-1; s_n m=0; W_m-1,n^2-W_m-1,n-1W_m-1n+1/W_m-2,n m>0W_m-2,n≠0; D_k=(-1)^l· kB_kC_k/A_k m>0W_m-2,n=0=W_m-1,n; H_k=QE_k/A_k+(-1)^kPF_k/B_k-(-1)^kSG_k/C_k/R/D m>0 W_n,m-2=0≠ W_n,m-1. 
See <cit.> The value of k above is found in the natural way from the value of m,n and the side length l. The final three equations above are known as the First, Second and Third Frame Constraint equations. These allow the number wall of a sequence to be considered independently of its original definition in terms of Toeplitz matrices. Theorem <ref> provides the link between t-LC and number walls. The following lemma is required for the proof. Let H = (s_i+j-1) with 1≤ i,j≤ n be an m × m Hankel matrix with entries in 𝔽_q. Assume that the first r columns of H are linearly independent but that the first r + 1 columns are linearly dependent (here, 1 ≤ r ≤ m-1). Then the principal minor of order r, that is, (s_i+j-1)_1≤ i,j≤ r, does not vanish. See <cit.> or <cit.>. 1⇒2: The proof is by contraposition. Let N(t)=M(t)· t^n∈_q[t] be a nonzero polynomial with M(t) coprime to t. Let m be the degree of M(t):=∑_i=0^ma_it^i. From the definition of the t-adic norm, f(|N(t)|)·|N(t)|· |N(t)|_t·|⟨Θ(t)· N(t) ⟩| =f(q^m+n)· q^m·|⟨Θ(t)· M(t) · t^n⟩|. This quantity is strictly less than q^-l if and only if |⟨Θ(t)· M(t)· t^n⟩|<q^-l-m-b_f^(q)(m+n), where b_f^(q)(m+n)=⌈log_q(f(q^m+n))⌉. This is equivalent to the coefficients of t^-i in ⟨Θ(t)· M(t)· t^n⟩ being zero for i∈{1,…,l+m+b_f^(q)(m+n)}. This coefficient is given by ∑_j=0^m a_js_i+j+n, and hence the inequality (<ref>) can be written as the matrix equation H_Θ(1+n, l+m+b_f^(q)(m+n)-1, m)·[ a_0; ⋮; a_m ]=0. This implies that H_Θ(n+1, l+m+b_f^(q)(m+n)-1, m) has less than maximal rank, meaning the square matrix H_Θ(n+1,l+m+b_f^(q)(m+n)-1) has zero determinant. In particular, H_Θ(n+1,i) is singular for m≤ i ≤ l+m+b_f^(q)(m+n)-1, since each of these matrices satisfies the same linear recurrence relation in the first m+1 columns. Relation (<ref>) shows that the determinant of H_Θ(n+1,i) is, up to a nonzero multiplicative constant, equal to the determinant of T_Θ(n+1+i,i). Therefore, there is a diagonal of zeros from entry (n+m+1,m) to entry (n+m+l+b_f^(q)(m+n),m+l+b_f^(q)(m+n)-1). The Square Window Theorem (Theorem <ref>) implies that there is a window of size l+b_f^(q)(m+n) in row m and column m+n+1. 2⇒1: Proceeding again by contraposition, assume that there is a window of size l+b_f^(q)(m+n) in row m and column n+m+1 for some m,n∈. The diagonal of this window corresponds to a sequence of nested singular square Toeplitz matrices, T_Θ(n+1+i,i) for m≤ i ≤ l+m+b_f^(q)(m+n)-1. Hence, the matrices H_Θ(n+1,i) for i in the same range are all singular. The largest of these Hankel matrices in this diagonal is singular and thus it has linearly dependent columns. Since the second largest is also singular, Lemma <ref> implies that the first m+l+b_f^(q)(m+n)-1 columns are linearly dependent. Furthermore, since the third largest matrix is also singular this implies that the first m+l+b_f^(q)(m+n)-2 columns are linearly dependent. A simple induction shows that H_Θ(n+1, l+m+b_f^(q)(m+n)-1, m) has less than maximal rank. Let then 𝐚=(a_0,…,a_m)^⊤∈_q^m+1 be a vector such that H_Θ(n+1, l+m+b_f^(q)(m+n)-1, m)·𝐚=0. Define the polynomials M(t)=∑_i=0^ma_it^i and N(t):=M(t)· t^n. By the same steps used to obtain (<ref>) and (<ref>), one has that f(|N(t)|)· |N(t)|· |N(t)|_t·|⟨Θ(t)· N(t) · t^n⟩|<q^-l. Hence, a polynomial N=M· t^n with m:=(M) and (M,t)=1 being a solution to f(|N|)·|N|·|N|_t·⟨ N·Θ⟩<q^-l is equivalent to there being a window of size l+ b_f^(q)(m+n) on row m and column m+n+1. This completes the proof. Taking f to be the constant function, the following corollary becomes clear. 
Let Θ(t)=∑^∞_i=1s_it^-i∈𝔽_q((t^-1)) be a Laurent series and S=(s_i)_i∈ℕ be the sequence of its coefficients. Then, Θ(t) is a counterexample to t-LC if and only if there exists an l in the natural numbers such that the number wall of (s_i)_i≥0 has no windows of size larger than l. A finite number wall is a number wall generated by a finite sequence. The next subsection begins to develop the combinatorial theory on finite number walls. §.§ The Free-Determined Lemma The proofs of Theorems <ref> and <ref> involve extending a finite number wall to satisfy the growth condition from Theorem <ref> in the size of its windows. To this end, the following definitions and lemmas illustrate basic properties of a finite number wall. For a finite sequence S and its number wall W(S), the depth of W(S) is defined as the greatest value of the row index m such that the entry W_m,n(S) is defined for some column index n∈. It is simple to see that a finite sequence of length r generates a number wall in the shape of an isosceles triangle with depth ⌊r-1/2⌋. To make number walls visually accessible, each entry is given a unique colour depending on its value (See Figure <ref>). Every number wall shown in this paper is coloured with the same colour scheme. Whilst generating number walls can be insightful, it provides no help if one is trying to prove a result for number walls in general. The introduction of dot diagrams remedies this by giving a way to visualise an arbitrary number wall of finite size, allowing for a clearer exposition during proofs. A blank dot diagram can be seen below. All number wall illustrations have top row -2, as this illustrates all three cases of Definition <ref>. Rows -1 and -2 are the initial conditions beginning the induction that calculates the number wall, seen in the Frame Constraints (Theorem <ref>). Each dot in the dot diagram represents an entry of the number wall, with the top row representing the sequence generating the number wall. The dot diagram in Figure <ref> contains no information, but as the paper progresses dot diagrams are used to convey information about the structure and construction of a number wall. Numbers above an entry on the top row of a dot diagram indicate how many choices there are in _q for that entry resulting in the indicated diagram. This is first seen in Figure <ref>. The next lemma in this subsection shows that number walls behave well under symmetry. Let W(S) be the number wall of the finite sequence S=(s_1,…,s_r). Define S'=(s_r,…,s_1) as the `reflection' of S. Then the number wall of S' is the number wall of S, reflected vertically. More precisely, let m and n be natural numbers satisfying 0≤ m≤⌊r-1/2⌋ and 0≤ n≤ r-1-2m, and define W_m,n(S) and W_m,n(S') to be the entry on column n and row m of the number walls of S and S', respectively. Then W_r-m-n,m(S')=W_m+n+1,m(S). The entry of W(S) in column m+n+1 and row m is equal to the determinant of T_S(m+n+1,m). Similarly, the entry in column r-m-n and row m of W(S') is T_S'(r-m-n,m). The definition of T_S'(r-m-n,m) shows it is the transpose of T_S(m+n+1,m), completing the proof. Lemma <ref> is illustrated by Figure <ref>: The following definitions are introduced to facilitate the extension of a finite number wall. Let S=(s_i)_1≤ i ≤ r be a finite sequence of length r, and let W(S)=(W_m,n(S))_m,n∈ be the number wall generated by S. Assume that s_r+1 is to be added to the end of the sequence S. 
Then: * The 𝐤^𝐭𝐡 diagonal is all the elements of the number wall with column index k-i and row index i for 0≤ i ≤⌊k-1/2⌋. * An entry on the (r+1)^ diagonal of the number wall generated by S∪{s_r+1} is determined if it is independent of the choice of s_r+1. * An entry W_i,r+1-i(S∪{s_r+1}) of the number wall on the (r+1)^ diagonal is free if for any x∈𝔽_q, there exists a value of s_r+1 in the first row making W_i,r+1-i(S)=x. Given a finite number wall generated by a sequence of length r, the following lemma describes how many degrees of freedom there are for the values of the (r+1)^ diagonal. Let W(S)=(W_m,n(S))_n,m∈ℤ be the finite number wall generated by the sequence S=(s_i)_0≤ i ≤ r. Assume that s_r+1 is added to the end of S. Then, for 0≤ i ≤⌊r/2⌋, W_i,r+1-i(S) is determined if and only if W_i-1,r-i(S)=0. Furthermore, if W_i,r+1-i(S) is not determined, then it is free. In particular, picking a value for any non-determined entry on the (r+1)^ diagonal uniquely decides the value for every remaining non-determined entry on the (r+1)^ diagonal. The value of W_i,r+1-i(S) is calculated from the definition of a number wall: W_i,r+1-i(S)=(T_S(r+1-i,i))=[ s_r+1-i … s_r+1; ⋮ ⋱ ⋮; s_r+1-2i … s_r+1-i ]. Due to the Toeplitz structure, s_r+1 only occurs in the top right corner of T_S(r+1-i,i). By expanding along the top row, W_i,r+1-i(S)=s_r+1·(T_S(r-i,i-1))+Y, where Y is a number that depends only on {s_i:1≤ i ≤ r}. Hence, the value of W_i,r+1-i(S) is independent of s_r+1 if and only if (T_S(r-i,i-1))=W_i-1,r-i(S)=0. If W_i-1,r-i(S)≠0, then for any chosen value of W_i,r+1-i(S) there exists a unique choice of s_r+1 that achieves it. Namely, s_r+1=(W_i,r+1-i(S)-Y)·(T_S(r-i,i-1))^-1. This, in turn, fixes the value of every non-determined entry on diagonal r+1. The Free-Determined Lemma (Lemma <ref>) has the following simple corollary: Let S=(s_i)_0≤ i ≤ r be a finite sequence. Define the sequences S' and S” as the sequence S with an additional element, s_r+1' and s_r+1”, respectively, added on the end. If 0≤ j≤⌊r/2⌋ is such that W_j,r+1-j(S') and W_j,r+1-j(S”) are not determined, then W_j,r+1-j(S')=W_j,r+1-j(S”) if and only if s_r+1'=s_r+1”. Clearly, if s_r+1'=s_r+1”, then the entire (r+1)^ diagonal is identical. Hence, assume that W_j,r+1-j(S')= W_j,r+1-j(S”). By Corollary <ref>, this also fixes the value for every non-determined entry in the diagonal. Furthermore, by definition the values on the top row cannot be determined, which completes the proof. That t-LC (and hence p(t)-LC from Theorem <ref>) are false over infinite fields 𝕂 (originally established by Mathan and Teulié in <cit.>) is an immediate consequence of Lemma <ref>. Indeed, if the sequence (s_i)_1≤ i ≤ r over 𝕂 has no windows in its number wall, every entry on the (r+1)^ diagonal is free. Now, Lemma <ref> shows there are infinite choices for s_1+r that do not create any windows. A simple induction beginning with any nonzero sequence of length one generates a counterexample to t-LC. Lemma <ref> explains when a finite sequence can be extended by one digit to obtain a particular value in the number wall. In practice, finite sequences are not extended one digit at a time, but instead by arbitrarily many digits at once. The next subsection proves results about how many ways this can be done whilst preserving certain properties of the number wall. §.§ The Blade of a Finite Number Wall Understanding the bottom two rows of a finite number wall is at the heart of the proofs of Theorems <ref> and <ref>. 
This motivates the following definition: Given a finite number wall, define its blade as the values on the bottom two rows. Explicitly, if the number wall is generated by a sequence of length r∈, then the blade is rows ⌊r-1/2⌋ and ⌊r-1/2⌋-1. Furthermore, define the right-side (left-side, respectively) blade as entries in the two right-most (left-most, respectively) columns of the blade. These definitions are illustrated below: Studying the right-side blade is sufficient for most purposes, but all of the results in this sub-section are also true for the left-side blade by symmetry from Lemma <ref>. It is often only important that an entry in the number wall is zero or nonzero, with its exact value in the latter case being irrelevant, whence this definition: Let W(S) be the number wall of a sequence S. The shape of W(S) is a two dimensional array with the same width and height as W(S) where every non-zero entry has been replaced with the same symbol. In other words, the shape of a number wall shows only where its zero entries are located. This definition allows for dot diagrams to distinguish between zero and nonzero entries without worrying about specific values. Using this notation, there are seven possible shapes for the right-side blade, as the shape with a nonzero entry in the top left and zeros in the top right and bottom left is not possible due to the Square Window Theorem (Theorem <ref>). In text, these right-side blade shapes are denoted by , , , , , , . The right-side blade comprised of all zeros (furthest right in Figure <ref>) is called the zero right-side blade. All other blades are described as nonzero. Let S=(s_i)_1≤ i ≤ r be a finite sequence of length r in _q whose number wall has a nonzero right-side blade. There are q^2 different ways to add s_r+1 and s_r+2 to the end of S, forming a sequence S' of length r+2. These q^2 continuations can be partitioned by the different blades they create in the number wall of S'. Indeed, let {u_i}_i=1,2,3 and {v_i}_i=1,2,3 be the entries in the right-side blade of the sequence S and S' respectively. Using Lemma <ref>, the following diagrams are created. Each shows a given right-side blade shape for S (in blue, representing u_1,u_2 and u_3) and all the possible right-side blade shapes for S' (in green, representing v_1, v_2 and v_3), with the number of possible continuations from S to S' giving this right-side blade written underneath. These are referred to as the Tree Diagrams. There are six in total, and two of them are explained in detail. The rest follows similarly. In this case, using the notation from Figure <ref>, all the values of {u_i}_i=1,2,3 are nonzero, and hence Lemma <ref> implies all the values {v_i}_i=1,2,3 are free. Lemma <ref> also implies there is only one choice of s_r+1 that would give v_2=0, and hence there are q-1 choices that give v_2≠0. In the same way, there are unique choices of s_r+2 that give v_1=0 and v_3=0. These choices coincide if and only if v_2=0, explaining the final green right-side blade in Figure <ref>. Furthermore, if v_2≠0 then it is impossible that v_1=0=v_3 by the Square Window Theorem (Theorem <ref>). Hence, there are (q-2) choices for s_r+2 making neither v_1 or v_3 zero, giving (q-1)(q-2) choices for s_r+1 and s_r+2 resulting in the first green right-side blade in Figure <ref>. The same reasoning explains the second and fourth right-side blades. Because the sequence S has right-side blade , it is impossible for S' to have right-side blade or , as this would result in a non-square window. 
Therefore, there are q-1 ways to obtain the third right-side blade. This is much the same as the -Tree Diagram, but now the value of v_3 are determined. However, it still depends on the value of s_r+1, and v_2 is free. Hence, there is one choice for s_r+1 making v_2=0 and q-1 choices for s_r+1 giving v_2≠0. In the latter case, it is impossible to have v_3=0 as this would create two windows touching diagonally, violating the Square Window Theorem (Theorem <ref>). This explains the second right-side blade. However, v_1 is free and hence there are q-1 choices for s_r+1 making it nonzero and a single choice making it zero, explaining the first and fifth right-side blade. The third and sixth right-side blades are impossible due to the Square Window Theorem (Theorem <ref>). The third and seventh right-side blades follow exactly as in Figure <ref>. The remaining Tree Diagrams are shown below. The reader is encouraged to verify them to gain a better understanding. Note that there is no -Tree Diagram. This is because the values of {v_i}_i=1,2,3 are all determined and depend on the values of {s_i}_1≤ i ≤ r-2. Given a finite sequence S with a nonzero right-side blade, the Tree Diagrams show how many length-two continuations of S give any desired right-side blade. The function introduced in the following definition generalises this to continuations of any arbitrary even length. Let B_1,B_2 denote the nonzero right-side blade shapes ,, or . Define the function Q :{,,}^2×→, (B_1,B_2,m)↦ Q(B_1,B_2,m) as the number of ways a finite sequence whose number wall has right-side blade B_1 can be extended by 2m entries such that the number wall of the resulting sequence has right-side blade B_2. For example, Q(,,1) is the number of ways a finite sequence whose number wall has right-side blade can be extended by two digits to have a right-side blade . The -Tree Diagram shows that Q(,,1)=0. Although it is not clear a priori, the map Q is well defined. This is justified below, in Lemma <ref>. The same lemma gives explicit values of Q(B_1,B_2,m). These formulas do not need to be committed to memory and are not individually important. They are used later in the proof of Lemma <ref>. Let B_1 and B_2 be nonzero right-side blades and m be a natural number. Then the function Q(B_1,B_2,m) is well-defined. Explicitly, the value of Q(B_1,B_2,m) only depends on the stated variables, and not any other values in the number wall. Furthermore, Q(B_1,B_2,m) takes the following values for m≥1: B_2=: (1.1) Q(,,m)=Q(,,m)=q(q-1)^2/q+1(q^2m-2-1) if m≥ 2 q(q-1) if m=1.. (1.2) Q(,,m)=q(q-1)/q+1(q^2m-1-q^2m-2+2) if m≥2 q(q-2) if m=1. (1.3) Q(,,m)=q-1/q+1(q^2m-q^2m-1-2). (1.4) Q(,,m)=Q(,,m)=(q-1)^2/q+1(q^2m-1+1). B_2=: (2.1)Q(,,m)=Q(,,m)=q-1/q+1(q^2m-1+1) (2.2) Q(,,m)=q(q-1)/q+1(q^2m-2-1) if m≥2 0 if m=1 (2.3) Q(,,m)=Q(,,m)=q(q-1)/q+1(q^2m-2-1) if m≥2 q if m=1 (2.4) Q(,,m)=q^2(q-1)/q+1(q^2m-3+1) if m≥2 0 if m=1. B_2=: (3.1) Q(,,m)=Q(,,m)=q-1/q+1(q^2m-1+1) (3.2) Q(,,m)=Q(,,m)=q(q-1)/q+1(q^2m-2-1) if m≥2 q if m=1. (3.3) Q(,,m)=q(q-1)/q+1(q^2m-2-1) if m≥2 0 if m=1. (3.4) Q(,,m)=q^2(q-1)/q+1(q^2m-3+1) if m≥2 0 if m=0. For the sake of brevity, the formulae are proved in triples: the value of Q(B_1,B_2,m) is derived for a generic choice of B_2. To obtain an individual formula, a specification of the shape of B_2 is made and a simple calculation is performed. Each formulae will be referred to with the labels from Lemma <ref>. The lemma is proved by induction on m. 
The base cases are given by the Tree Diagrams (Figures <ref>, <ref> and <ref>), and it is clear these are well-defined. Assume that these formulas hold and that the function is well-defined up to m-1. The following dot diagram displays the situation: Every formula is proved using the same idea. The Tree diagram for B_1 shows which possible shapes the right-side blade could take when the generating sequence is extended by two, and how many ways each shape occurs. For example, the -Tree Diagram (Figure <ref>) shows that when extending the sequence by two, there are q(q-1) (q, respectively) ways to get a right-side blade of shape (of shape , respectively). With the notations of Definition <ref>, it is clear that Q(,B_2,m)=q(q-1)· Q(,B_2,m-1)+q· Q(,B_2,m-1). Picking a specific right-side blade shape for B_2, using the induction hypothesis and doing some simple rearranging gives formulae (1.1), (2.4) and (3.2). It is also clear from this recurrence formulae that, if Q(,B_2,m-1) and Q(,B_2,m-1) are well-defined, then so is Q(,B_2,m). Similarly, the - Tree Diagram (Figure <ref>) shows that Q(,B_2,m)=q(q-1)· Q(,B_2,m-1)+q· Q(,B_2,m-1) and the -Tree Diagram (Figure <ref>) that Q(,B_2,m)=q(q-2)· Q(,B_2,m-1)+q· Q(,B_2,m-1)+ q· Q(,B_2,m-1). Once again, picking a specific shape for B_2, using the induction hypothesis and rearranging proves the two equalities in (1.1), (2.3), and also the equalities (3.4), (1.2), and (3.2). The formulae for B_1=, and are not as simple. The formula for Q(,B_2,m) is first proved. Using the -Tree Diagram (Figure <ref>), Q(,B_2,m)= (q-1)(q-2)· Q(,B_2,m-1)+(q-1)· Q(,B_2,m-1) +(q-1)· Q(,B_2,m-1)+(q-1)· Q(,B_2,m-1) +Q̃_(,B_2,m-1), where Q̃_(,B_2,m-1) is the number of ways to add 2m digits to a generating sequence with right-side blade , with the additional constraint that the first two digits result in the zero right-side blade. This is illustrated below: Let be the window created on row k. Evaluating Q̃_(,B_2,m-1) amounts to counting the number of continuations of the generating sequence given every possible size of . For example, the minimum possible size of is 2× 2, since it contains the zero right-side blade. If has side length 2, then the number wall looks as the left one below: This contributes q(q-1)· Q(,B_2,m-2) to the total sum. In general, if the window has size i× i, it contributes (q-1)q^i-1Q(,B_2,m-i) to the total. This is illustrated in the right-hand side of Figure <ref>. Summing over all values of i gives Q(,B_2,m) =(q-1)(q-2)· Q(,B_2,m-1)+(q-1)· Q(,B_2,m-1) +(q-1)· Q(,B_2,m-1)+∑_i=1^m-1(q-1)q^i-1· Q(,B_2,m-i). Note that the Q(,B_2,m-1) term from (<ref>) has been absorbed into the sum above. At this point, a specific choice for B_2 is made, the induction hypothesis is used and the sum is evaluated using the geometric sequence formula to obtain relations (1.3), (2.2) and (3.1). The values of Q(,B_2,m) and Q(,B_2,m) are derived using an identical method. For this reason, these proofs are omitted. These equations are put to use in the next subsection to count the number of finite sequences with a given window in their number wall. §.§ Counting Sequences with a given Window in their Number Wall The following definitions are made to categorise the types of windows that can appear in a finite number wall. From now on, windows are denoted by squares, such as . A window with side length l and top left corner in row m and column n is written as a triple (l,n,m). 
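With this notation in place, the windows of a finite wall can be located mechanically. The Python sketch below reuses the brute-force number_wall routine sketched earlier and lists each visible square of zeros as a triple (l, n, m); it is only a sketch, and near the boundary of a finite wall a window may be open, in which case only its visible horizontal extent is measured. The example sequence is an arbitrary choice.

def find_windows(wall):
    # report each maximal square block of zeros in the finite wall as (l, n, m):
    # side length l, top-left corner in column n and row m
    zeros = {key for key, value in wall.items() if value == 0}
    windows = []
    for (m, n) in sorted(zeros):
        if (m - 1, n) in zeros or (m, n - 1) in zeros:
            continue                     # not the top-left corner of a window
        l = 1
        while (m, n + l) in zeros:
            l += 1
        windows.append((l, n, m))
    return windows

# By the Square Window Theorem, every zero region found in a full wall is square
print(find_windows(number_wall([1, 1, 0, 1, 0, 0, 1, 1, 0], 2)))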
There are three types of windows that can appear on a finite number wall, categorised as follows: * A window is complete if the entire window and its inner frame is contained in the number wall. * A window in a finite number wall is closed if the ratios of the geometric sequences comprising its inner frame have all been determined. Equivalently, its size is independent of any entries added to the given sequence generating the number wall. A window is left-side closed if the ratio of the inner frame on the left side is determined. A window being right-side closed is defined similarly. * A window is open if it is not closed. A window being left-side open and right-side open is defined similarly. Equivalently, a window is open if it could grow in size depending what additional entries are added to the start or to the end of the generating sequence. Similarly, a window is right-side open (left-side open, respectively) if an entry added on the end (start, respectively) of the generating sequence could increase the size of the window. A complete window is closed, but a closed window is not necessarily complete. The following terminology describes the entries in a sequence that generate windows in number walls. Let r∈ and let W(S) be the number wall of a finite sequence S=(s_i)_0≤ i≤ r. Additionally, let be a window of side length l∈ in W(S). Using the notation from Figure 1, the hat of is the set {A_0,…,A_l}. That is, the hat of a window is the top row of its inner frame. The hat generator of is the shortest subsequence S_=(s_i)_n_1≤ i ≤ n_2⊆ S such that W(S_) contains the hat of the window. The hat cone of a window is the number wall generated by the hat generator. Let S_={s_i}_1≤ i ≤ k be a hat generator for an open window . Then a left-side closure of S_ is defined as a sequence {s_i}_0≤ i ≤ k with s_0 chosen such that the window is left-side closed. A right-side closure of s_ is defined similarly, as a sequence {s_i}_1≤ i ≤ k+1 which makes right-side closed. Finally, a closure of S_ is a sequence {s_i}_0≤ i ≤ k+1 with s_0 and s_k+1∈_q being such that the window is closed. By Lemma <ref>, q-1 of the possible q choices for s_0 (for s_k+1, respectively) create left-side (right-side, respectively) closures. For a window (l,n,m), it is clear the hat generator has length 2m+l. Furthermore, for a sequence S=(s_i)_1≤ i ≤ r which has (l,n,m) in its number wall, the hat generator is the subsequence S_(l,n,m)=(s_i)_n-m≤ i ≤ n+m+l-1. One further entry on either side of S_(l,n,m) is required to close the window at each end and hence there cannot be a closed or complete window of size l with top left corner on row m and column n if r< n+m+l or if n-(m+1)≤0. Theorem <ref> only demands that a particular square portion of the number wall should be zero; not that the window is exactly the given size or begins exactly in the given place. This motivates the following definition: A window _1=_1(l_1,n_1,m_1) contains a window _2=_2(l_2,n_2,m_2) if _2 is fully inside _1. Explicitly, this occurs if and only if n_2≥ n_1, n_2+l_2≤ n_1+l_1, m_2≥ m_1 and m_2+l_2≤ m_1+l_1. Given a window , the following lemma shows how many ways a number wall with nonzero right-side blade can be extended to have a window containing . Let k,l,m be natural integers, =(l,m+k,m+k+2) be a window and i∈{1,2}. 
Given a sequence S_k⊂_q^2k+i of length 2k+i with a nonzero right-side blade B in its number wall, there are q^2m+1-i ways to continue it to a sequence of length 2k+1+2m+l so as to have a window ' that contains in the number wall of the resulting extended sequences. The case where i=1 is completed first. The diagram below illustrates the set up. In this case, the result is proved by partitioning the set of all the windows containing by the row and column they begin in. Thus, the first section of the partition contains all the windows whose first column/row is in the same as the first column/row of . This partition is illustrated by the left dot diagram below. With notation from Definition <ref>, there are Q(B,,m) ways of obtaining the purple right side blade in the right image of Figure <ref>, and then Lemma <ref> implies there is a unique extension of length l giving the purple window. By the same method (that is, by repeated applications of Lemma <ref>), there are Q(B,,m) ways of obtaining the blue window. Each red window is counted once in the value of Q(B,,m), and also must be extended l times by applying Lemma <ref>. Therefore, this whole partition contributes Q(B,,m)+Q(B,,m)+Q(B,,m) to the total sum. The next partition is similar, but contains all the windows that have first row or first column one before the first row or column of . This is illustrated in the right-hand side of Figure <ref>. Similar to the previous partition, there are qQ(B,,m-1) ways to obtain the purple window. The extra factor of q is a consequence of the depth of the window being decreased by one and its size being increased by one. The decrease in depth reduces the size of the hat generator by two. However, the increase in size enlarges the size of the hat generator by one. Therefore, the hat generator has only decreased its length by one. This leaves the last entry in the sequence redundant and hence free to take any value. With a similar argument for the red and blue windows, this case contributes q(Q(B,,m-1)+Q(B,,m-1)+Q(B,,m-1)) to the total sum. Each partition is done in a similar way, leading to a total value of ∑^m_i=0q^i(Q(B,,m-i)+Q(B,,m-i)+Q(B,,m-i)). From Lemma <ref>, it can be seen that for m>1, Q(B,,m)+Q(B,,m)+Q(B,,m)=q^2m-1(q-1). If B=,, or , this formula also applies for m=1 and the sum equals 1 for m=0. If B=, or then at m=1, the left-hand side of (<ref>) has value q^2. In the former case when B=, or , (<ref>) =∑_j=0^m-1 q^2m-1-2j(q-1)· q^j + q^m =q^m((q-1)(q^m-1+q^m-2+…+q+1)+1)=q^2m. In the latter case when B=,, or , (<ref>) =∑_j=0^m-2 q^2m-1-2j(q-1)· q^j + q^m-1· q^2 =q^m+1((q-1)(q^m-2+q^m-3+…+q+1)+1)=q^2m. This proves the lemma in the i=1 case. Most of the hard work for the i=2 case has already been done. The right-side diagram of Figure <ref> illustrates the set up. If the bottom right entry of B' (grey interior in the figure) is nonzero, then the right side blade created by adding any of the q elements of _q to S'_k is necessarily nonzero. Hence, there are q^2m-2 continuations giving a window containing (by the i=1 case). If the bottom right entry is zero, the right-side blade B (boxed part of B') must be nonzero since B' is nonzero and the bottom right entry of B' is zero. Furthermore, as the bottom right entry is zero, this implies that B is either , or . The i=1 case shows there are q^2m continuations of a sequence of length 2k+1 with right-side blade B. These continuations are partitioned into those that have zero in bottom right position in B' and those that do not. 
In the latter case, Lemma <ref> shows the number of continuations with a window containing is (q-1)q^2m-1. Hence, there are q^2m-(q-1)q^2m-1=q^2m-1 continuations when the bottom right entry is zero. This concludes the proof. Lemma <ref> is used to prove the following corollary: Let q be a prime power, r,n,m,l be natural numbers with r≥ 2m+1+l and =(l,n,m) be a window in a number wall over _q. The number of sequences of length r over _q whose number wall has a window containing is q^r-l. Since row -1 is made of entirely 1s, a sequence of length 1 can only have two possible right-side blade shapes. Namely, the possible blades are (q-1 ways) or (1 way). An additional 2m+l entries are required to create the desired window on row m. Using Lemma <ref>, each of these two blades have q^2m possible continuations that give a window containing . This gives a total of (q-1)· q^2m+1· q^2m=q^2m+1 sequences of length 2m+l+1 that have the desired window. The remaining choices for the sequence are free, giving a total number of sequences of length r as q^r-2m-1-l· q^2m+1=q^r-l. This concludes the proof. Lemma <ref> demands that the finite number wall has a nonzero right-side blade. The following lemma deals with the complementary case where the right-side blade is the zero blade. Let k,m∈, i∈{1,2} and '='(l,m+k,m+k+2) be a window. Given a sequence S_k⊂_q^2k+i of length 2k+1 with the zero right-side blade in its number wall, there are q^2m+1-i ways to extend S_k to a sequence of length 2k+2m+l+1 so as to have a window that contains ' in their number wall. Let be the window containing the zero blade of the number wall of S_k. Assume that is closed (see Definition <ref>). The diagram on the left below illustrates the scenario. Define s as the vertical distance from the number wall generated by the inner frame of S_k to the bottom of the deepest right-side blade determined by . Note that if S_k has odd length then s is even, and if S_k has even length, then s is odd. When S_k is continued by s entries, coloured in light blue in Figure <ref>, the right-side blade of the number wall is independent of the choice of continuation, as the right-side blade is in the inner frame of . The conditions of Lemma <ref> are now satisfied, and hence there are q^2m-s continuations in the odd case (and q^2m-1-s in the even case) from this blade to get a window containing '. Multiplying by any of the q^s continuations of S_k concludes the proof of the closed case. Now assume is open and that S_k has odd length. As the sequence S_k is extended, is either closed or continued. For each digit added, there are q-1 choices to close the window and only one choice to continue it. The proof amounts to summing over every possible size the window could have when it is closed. Define u∈ such that starts on row k-u and v∈ such that row k+v contains the bottom row of . Then the open window has size u+1+v. Assume is extended by j entries so it has size j+u+v+1. This is illustrated in the right-side diagram of Figure <ref>. The dark blue entry has q-1 possible choices, as it closes the window . The problem has now reduced to the closed case, and hence this case contributes (q-1)· q^2v+1+j· q^2m-2v-2-2j=(q-1)· q^2m-1-j to the sum. This ends when 2m-2v-2-2j=2, that is, when j=m-v-2. There is one final term of the sum that has not been considered, represented by the number wall where the size of increased so that contains '. This is illustrated below: The size of has increased by l+1 from the case j=m-v-2. 
As the closing coefficient is no longer required (because ' is contained inside of ), there is now q^m+1+v continuations of S_k of length 2m+l. Hence, the total sum is q^m+1+v+∑_j=0^m-v-2(q-1)· q^2m-1-j=q^2m. This proves the result in the odd case. In the even case, the first term of the sum in (<ref>) is ignored. Repeating the calculation completes the proof. A final definition formalises the idea of two windows `touching': Let be a window inside a given portion of number wall, and let ' be another window in the number wall. Then undershoots ' if and its inner frame do not intersect '. If the inner frame of intersects ', then hits[If ' is on the right side of , this requires that be right-side closed. This is because an open window has not determined the location of its inner frame. Furthermore, if ' hits , then there is no sequence that has both and ' in its number wall.] ' and and ' cannot be on the same number wall. Finally, if contains ', or if intersects ' and can be extended to contain ', then overshoots '. These definitions are illustrated below. The combinatorics on Number Walls needed to establish Theorem <ref> has been developed. § HAUSDORFF DIMENSION OF THE SET OF COUNTEREXAMPLES TO T-LC WITH ADDITIONAL GROWTH FUNCTION For a set X⊂_q((t^-1)), define the diameter of X as (X)=sup{|x-y|:x,y∈ X}. The following classical theorem from <cit.> is used to attain the lower bound for the Hausdorff dimension in Theorem <ref>. Let μ be a probability measure supported on a subset X of _q((t^-1)). Suppose there are positive constants a, s and l_0 such that μ(B)≤ a·(B)^s for any ball B with size (B)≤ l_0. Then (X)≥ s. Recall the set M(f,t) from Definition <ref>. Theorem <ref> shows that if a Laurent series Θ(t) is in M(f,t), then there exists an l∈ such that for any k∈, the windows with top left corner on the k^ column of the number wall generated by Θ(t) have size less than or equal to l+⌈log_q(f(q^k))⌉ -1. For the duration of this proof, a sequence satisfying this property in the size of its windows as k tends to infinity is said to satisfy the window growth property with respect to function f. The strategy of the proof is to construct a Cantor set of sequences with number walls satisfying this window growth property; it is shown to have full dimension. Let 𝒥_n be the set of balls J_n generated by a sequence of length q^n whose number wall has windows satisfying the window growth property with respect to function f. The goal is to construct 𝒥_n+1. To do this, each J_n∈𝒥_n is split into R_n:=q^(q^n+1-q^n) sub-cylinders, each generated by a sequence of length q^n+1. These R_n sequences collectively make the set ℐ_n+1. Every sequence in ℐ_n+1 that has a window of size larger than l+⌈log_q(f(q^k))⌉ -1 starting on column k needs to be removed. To begin, it is sufficient to remove all sequences with a window of size l+⌈log_q(f(q^k))⌉ -1 on diagonal k instead of column k. Indeed, every entry on diagonal k is in column at least k/2. Therefore, for each diagonal, removing all windows of size l+⌈log_q(f(q^k/2))⌉ -1 is sufficient. When f=log^2, this equals l+⌈log_q(k^2)+log_q(1/4)⌉-1. The constant ⌈log_q(1/4)⌉ is absorbed into l, which can be any fixed natural number. The proof proceeds by induction, with the base case being any sequence of length 1. As the number wall generated by the first q^n entries already satisfies the window growth property with respect to function f by induction, only the windows on diagonals q^n<k≤ q^n+1 are checked. 
This is illustrated by the following dot diagram: If a position in the number wall of I_n is undershot by a window from the number wall of J_n, then Lemma <ref> shows there are R_n· q^-(l+⌈log_q(k^2)⌉ -1)≤ R_n· q^-(l+2n-1) continuations that contain a window of problematic size in that position. Since every sequence in J_n satisfies the window growth property, any position denoted in green in Figure <ref> will not be overshot by any window in J_n. The positions denoted in dark red could be overshot, but are not necessarily, so it is not difficult to see that the number of positions in the extended number wall that a window could begin in is bounded above by q^2n+2. Hence, the amount to be thrown away from the green and red bands combined in Figure <ref> is less than or equal to R_n· q^2n+2-(l+2n-1)=R_n· q^3-l. This represents all the positions that are undershot by a window in J_n. The bands coloured in red and brown in Figure <ref> are more problematic. Entries in the brown band could be overshot by windows from J_n. Note, the red band will be directly adjacent to the brown band of the next level set. For this reason, the light red band is dealt with first. For ease, define L:=l+2n-1, the size of problematic windows, and let l be odd and sufficiently large. In the first diagonal on the light red band, each position is removed from the Cantor set if it has a window of size L-1. Lemma <ref> shows there are R_n· q^-L+1 extensions giving this, and there are less than q^n+1 such positions. Similarly for the second diagonal, all windows of size L-2 are thrown away. This continues until diagonal L/2, where all the windows of size L/2 are thrown away. Nothing is thrown away from the remaining diagonals. This leads to a total removal of R_n· q^n+1· q^-L/2(q^-L/2-1+…+1)≪_q R_n· q^n+1-L/2=R_n· q^1-(l-1)/2. The dark red band is now dealt with. As previously noticed, this is directly adjacent to the light red band of the previous level set. Therefore, by induction, only entries in the first L/2-1 diagonals could be overshot. There are q^n such entries in the first diagonal, and at most R_n· q^-L/2 sequences are thrown away (in the case where the position is overshot by a window from J_n). For an entry in the second diagonal, there are two cases. It is either part of an open window or a closed window. The first case has already been removed, as it would necessarily be part of an open window on the first diagonal. Nothing is removed in the second case, as the window is closed and smaller than the problematic size. Combining equations (<ref>), (<ref>) and (<ref>) shows that the number of extensions that satisfy the window growth rate is bounded below by t_n(l):=R_n(1-q^3-l-D_qq^-l/2) for some constant D_q depending only on q, coming from the implicit constant in (<ref>). If l is sufficiently large, t_n(l) is always positive. Define then r_n:=R_n·(q^3-l-D_qq^-l/2). Let 𝐊(𝕀,𝐑,𝐫) be a Cantor set with 𝐑=(R_n)_n≥0 and 𝐫=(r_n)_n≥0: it is constructed by starting with the unit interval and splitting each level set into R_n sub-balls and throwing at most r_n of them away. Let ε>0 be small. Then there exists some n_0∈ depending on ε such that for all n>n_0 R_n^1-ε≤ t_n(l), which is true for l large enough. The remainder of this proof is similar to an argument made by Badziahin and Velani in <cit.>. A probability measure μ is defined on 𝐊(𝕀,𝐑,𝐫) recursively: For n=0, set μ(J_0):=1. For n≥1, let μ(J_n):=μ(J_n-1)/#{J∈𝒥_n:J⊂ J_n-1}, where J_n-1∈𝒥_n-1 is the unique interval such that J_n⊂ J_n-1. 
It is shown in <cit.> that μ can be extended to all Borel sets F of _q((t^-1)) by μ(F):=μ(F∩𝐊(𝕀,𝐑,𝐫) = inf{∑_J∈𝒥μ(J)}, where the infimum is over all coverings 𝒥 of F∩𝐊(𝕀,𝐑,𝐫) by intervals J∈{𝒥_n: n≥0}. For any interval J_n∈𝒥_n, (<ref>) and the definition of t_n(l) in (<ref>) implies that μ(J_n)≤ t_n-1^-1μ(J_n-1). Inductively, this yields μ(J_n)≤∏_i=0^n-1t^-1_i. Next, let δ_n denote the length of a generic interval J_n∈𝒥_n. It is clear that δ_n=∏_i=0^n-1R_i^-1. For any arbitrary ball C⊂𝕀 satisfying (C)≤δ_n_0, there exists an integer n≥ n_0 such that δ_n+1≤(C) < δ_n. Hence, μ(C) ≤∑_J∈𝒥_n+1 J∩ C≠∅μ(J)   (<ref>) ≤   ((C)/δ_n+1)( ∏_i=0^n t^-1_i)   (<ref>) =   (C)^ε(∏_i=0^n R_i/t_i)·(C)^1-ε (<ref>) < (δ_n)^ε·(∏_i=0^n R_i/t_i)·(C)^1-ε   (<ref>) =    R_n/t_n(∏_i=1^n-1R_i^1-ε/t_i)·(C)^1-ε. Using that d:=R_n/t_n is a constant depending only on the fixed number l, (<ref>)   (<ref>) <   d·∏_i=1^n_0R_i^1-ε/t_i·(C)^1-ε. Hence, by the Mass Distribution Principle (Lemma <ref>), 𝐊(𝕀,𝐑,𝐫) has dimension greater than or equal to 1-ε for any ε>0. Therefore, 𝐊(𝕀,𝐑,𝐫) has dimension 1. This completes the proof. § STRUCTURES OF ZERO PATTERNS IN NUMBER WALLS The goal is now to prove Theorem <ref>. To this end, recall that Corollary <ref> counts how many sequences have a given square zero portion in their number wall. Two generalisations of this corollary are proved, which replace the square zero portion with one of any connected shape, or two disconnected square zero portions, respectively. §.§ Rectangular Zero Portions Whilst windows are always in a square shape, determining how many finite sequences have a square window that contains a given rectangular portion is essential for proving Theorem <ref>. To this end, the following definition is made: Let r,n,m,l∈, d∈ satisfying d<l and let q be a prime power. Define R^l,d,n,m_q,r as the set of sequences of length r over _q whose number wall contains a rectangular portion of zeros with horizontal length l, height l-d and top left corner in column n and row m. The cardinality of R^l,d,n,m_r,q can be explicitly determined: Let r,n,m,l∈, d∈^+ satisfying 0<d<l-1 and let q be a prime power. Assume at least one of the longest sides of the rectangle defined by R^l,d,n,m_q,r is contained fully on the number wall. Then, * If m-n≤ d≤0, #R^l,d,n,m_r,q=q^r-l-2m-1/q+1((-d+1)· q^2m+2-(-d-1)· q^2m+1-d·(q-1)). * If 0≤ d≤ m, #R^l,d,n,m_r,q=q^r-l-2m/(q+1)^2·((d+1)· q^2m+2+2· q^2m+1-(d-1)· q^2m-q^2d+1). * If d>m, then equation (<ref>) holds upon replacing d with m in the right-hand side. Equations (<ref>) and (<ref>) are proved by induction on d. When d=0, Lemma <ref> reduces to Corollary <ref>, proving both base cases. Assume first that m-n≤ d<0. For the inductive step, let S be a sequence in R^l,d,n,m_r,q. Then, there are three possibilities: (1.1) S∈ R^l,d+1,n-1,m_r,q and S∉R^l,d+1,n,m_r,q. (1.2) S∉R^l,d+1,n-1,m_r,q and S∈ R^l,d+1,n,m_r,q. (1.3) S∈ R^l,d+1,n-1,m_r,q and S∈ R^l,d+1,n,m_r,q. If S is in case (1.3), then S∈ R^l,d+2,n-1,m_r,q. Also, the number of sequences in case (1.1) is #R^l,d+1,n-1,m_r,q-#R^l,d+2,n-1,m_r,q. Similarly, the number of sequences in case (1.2) is #R^l,d+1,n,m_r,q-#R^l,d+2,n-1,m_r,q. Summing over the three cases gives the formula #R^l,d,n,m_r,q=#R^l,d+1,n-1,m_r,q+#R^l,d+1,n,m_r,q-#R^l,d+2,n-1,m_r,q. Using the induction hypotheses and doing some simple algebra completes the proof. 
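Before moving on, the closed forms of the lemma above are easy to sanity-check symbolically. The sketch below (sympy assumed available) verifies that the first closed form reproduces Corollary <ref> at d=0 and, being affine in d, satisfies the recurrence just derived, and that the second closed form satisfies the analogous recurrence used in the 0<d≤m case treated next. The expressions are transcribed as stated, with the final exponent of the second form read as 2d+1, and the recurrences are checked as formal algebraic identities.

```python
import sympy as sp
from itertools import product

q, r, l, m, d = sp.symbols('q r l m d')

# Closed form for m - n <= d <= 0, transcribed as stated:
A = q**(r - l - 2*m - 1)/(q + 1) * ((1 - d)*q**(2*m + 2) + (1 + d)*q**(2*m + 1) - d*(q - 1))
# Closed form for 0 <= d <= m, reading its final exponent as 2d + 1:
B = q**(r - l - 2*m)/(q + 1)**2 * ((d + 1)*q**(2*m + 2) + 2*q**(2*m + 1)
                                   - (d - 1)*q**(2*m) - q**(2*d + 1))

def evalA(qv, rv, lv, mv, dv):
    return sp.simplify(A.subs({q: qv, r: rv, l: lv, m: mv, d: dv}))

def evalB(qv, rv, lv, mv, dv):
    return sp.simplify(B.subs({q: qv, r: rv, l: lv, m: mv, d: dv}))

rv, lv = 20, 3                                       # any sufficiently long sequence
for qv, mv, dv in product((2, 3, 5), (2, 3, 4), (1, 2)):
    # base case d = 0 of the first closed form reproduces Corollary <ref>: q^(r - l)
    assert evalA(qv, rv, lv, mv, 0) == qv**(rv - lv)
    # the first form is affine in d, hence satisfies the n-shifted recurrence above
    assert evalA(qv, rv, lv, mv, -dv) == 2*evalA(qv, rv, lv, mv, -dv + 1) - evalA(qv, rv, lv, mv, -dv + 2)
    # the second form satisfies #R(d, m) = #R(d-1, m) + #R(d-1, m-1) - #R(d-2, m-1)
    assert evalB(qv, rv, lv, mv, dv) == (evalB(qv, rv, lv, mv, dv - 1)
                                         + evalB(qv, rv, lv, mv - 1, dv - 1)
                                         - evalB(qv, rv, lv, mv - 1, dv - 2))
print("closed-form counts pass the base-case and recurrence checks")
```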
Note that, if the top right corner of the desired rectangle is on the right-most diagonal (or the top left corner on the left-most diagonal) of the finite number wall, this method fails. However, in this case the desired rectangle is square and consequently the problem reduces to Corollary <ref>. As the method is fundamentally identical, an abridged version of the proof of the case 0<d≤ m is now completed. By the same method as in the d<0 case, #R^l,d,n,m_r,q=#R^l,d-1,n,m_r,q+#R^l,d-1,n,m-1_r,q-#R^l,d-2,n,m-1_r,q. This is illustrated by the right-side diagram of Figure <ref>. Substituting the induction hypothesis into (<ref>) and doing the necessary algebra completes the proof of (<ref>). Finally, suppose d>m and let R_6 be the rectangle starting in column n and row m with width l and height l-d. This implies that the d-m rows underneath R_6 are also zero. This is illustrated below: The Square Window Theorem (Theorem <ref>) implies R_6 must be fully contained in a single square window. Furthermore, the window must have side lengths greater than or equal to l. As a window cannot start higher than row zero, there are no windows that contain R_6 that do not also contain R_7; namely, the additional m-d rows underneath R_6. The difference between the width and height of R_7 is m, reducing the question to the previous case. This concludes the proof. Corollary <ref> and Lemma <ref> provide a complete picture for counting how many finite sequences have a window containing any connected portion of zeros in their number wall. Even if the portion of zeros is not in a rectangular shape, a minimally sized rectangle can be drawn around it and the Square Window Theorem (Theorem <ref>) shows that this rectangle is also fully zero. The next subsection extends the results of this one to count the number of sequences that have windows containing two given windows in their number wall. This is much more challenging and in some cases only an upper bound will be found. To this end, the following immediate corollary to Lemma <ref> is stated for future reference: Let l∈ and d∈. Then #R_r,q^l,d,n,m≪_q |d|· q^r-l. §.§ Pairs of Windows This subsection provides the final result needed to prove Theorem <ref>. Given windows _1=(l_1,n_1,m_1) and _2=(l_2,n_2,m_2), define the set W_r,q^_1,_2 as all the sequences of length r over _q whose number wall has windows that contain _1 and _2. The following theorem is the main result of this subsection: Let _1=(l_1,n_1,m_1) and _2=(l_2,n_2,m_2) be non-overlapping windows of length l_1 and l_2 with top left entries in positions (n_1,m_1) and (n_2,m_2), respectively. Furthermore, let r be a suitably large natural number. Then, #W_r,q^_1,_2≪_q q^r-l_1-l_2. The proof shows this bound is sharp in many cases. If _1 and _2 overlap, the statement reduces to Corollary <ref> from the square Window Theorem (Theorem <ref>). The remainder of this subsection is devoted to proving Theorem <ref>. Let i∈{1,2}. The proof is split into three distinct cases, depending on how much the hat cones of _1 and _2 intersect. Case [c1]1 is swiftly dealt with, but Cases [c2]2 and [c3]3 are split into sub-cases ([c2a](2.1), [c2b](2.2), [c3a](3.1) and [c3b](3.2)) depending on the relative positions of _1 and _2. Where necessary, these sub-cases are once again split into sub-sub-cases. §.§.§ Case 1: In this case, the hat cones[See Definition <ref>] of _1 and _2 do not intersect. Let d be the distance between the hat cones of _1 and _2. 
From the original definition of a number wall (Definition <ref>), the Toeplitz matrices making up _1 and _2 and their respective hat cones share no entry. Hence, Corollary <ref> can be applied twice to show there are q^2m_1+1 choices for the 2m_1+1+l_1 entries in the hat generator of _1, and similarly for _2. The rest of the sequence is filled in arbitrarily, since the desired windows have been obtained and hence any choice for the remaining entries can only increase the size of the windows. Hence there are exactly q^2m_1+1· q^2m_2+1· q^d· q^r-2m_1-1-l_1-2m_2-l_2-1-d=q^r-l_1-l_2 such sequences. §.§.§ Case 2: In this case the hat cones of _1 and _2 intersect but neither is contained fully in the other. Let m be the depth of the intersection. There are two sub-cases to consider. In each, r is taken to be the minimum value such that the windows _1 and _2 are able to appear on some number wall generated by a sequence of length r. As seen in Figure <ref>, r=2m_1+2m_2-2m+l_1+l_2+1. §.§.§ Case 2.1: m_1≥ 2m or m_2≥ 2m. Without loss of generality, assume m_1≥2m. Corollary <ref> shows there are q^2m_2+1 sequences of length 2m_2+1+l_2 that have a window containing _2 in the number wall. The goal is to show there are q^2m_1-2m extensions that generate a finite number wall containing both _1 and _2. Taking this for granted, there are q^2m_2+1· q^2m_1-2m=q^r-l_1-l_2 satisfactory sequences. To prove the claim on the number of extensions, note that the largest possible window contained fully in the intersection, denoted (in black in Figure <ref>), has size 2m+1 if the intersection has odd length, or 2m+2 if it has even length. The horizontal distance between the column of _1 closest to the intersection and the lowest point of the intersection is m_1-m. There are two sub-cases. * If m_1>2m, there is always at least one column between _1 and . Lemma <ref> implies there are q^2m_1-2m continuations on the left side of the generating sequence that give a number wall containing _1 and _2. * If m_1=2m, then the only way a window fully generated in the intersection can touch _1 is if it is open on the side of _1 and is of size at least 2m. If is smaller than 2m then it would undershoot _1. Since is open and touches but does not overlap _1, there are l_i unique single digit extensions of the intersection that would let contain _1. The remaining (2m_1-2m-l_1)+l_1=2m_1-2m entries are arbitrary, meaning there are q^2m_1-2m continuations of the intersection to give _1. §.§.§ Case 2.2: m_1<2m and m_2<2m. When both m_1 and m_2 are less than 2m, an exact count is much harder to achieve. Corollary <ref> shows there are q^2m+1-q^2m sequences of length 2m+1 that have a nonzero entry at the lowest point. Two applications of Lemma <ref> give q^2m_1+2m_2-4m extensions whose number wall has windows containing _1 and _2. Hence, there are q^2m_1+2m_2-2m(q-1)· q^r-(2m_1+2m_2-2m-l_1-l_1+1)=q^r-l_1-l_2-1(q-1) satisfactory sequences of length 2m_1+2m_2-2m+l_1+l_2+1. Similarly, if the intersection has odd length there are q^2m_1+2m_2-2m-2·(q^2-1) such sequences. All that remains is to deal with the sequences that do have a window on the lowest row of the intersection. From now on, such a window is called a central window. The corresponding generating sequences are partitioned into four subsets; * S_0: The set of sequences where the central window overshoots neither _1 or _2. * S_1: The set of sequences where the central window overshoots _1 and not _2. * S_2: The set of sequences where the central window overshoots _2 and not _1. 
* S_3: The set of sequences where the central window overshoots both _1 and _2. The sizes of S_0,S_1,S_2 and S_3 are either calculated explicitly or bounded above. It is difficult to calculate the size of S_0 exactly, but it is easy to find a non trivial upper bound. Indeed, a sequence generating the intersection has a central window that does not overshoot _1 or _2, then the central window must undershoot or hit _1 and _2. If the central window hits _1 or _2, then it contributes nothing to the sum. If the central window undershoots _1 and _2, then by Lemma <ref> it can be continued in q^2m_2+2m_2-4m ways. A nontrivial bound on the size of S_0 is then #S_0≤ q^2m· q^2m_2+2m_2-4m=q^2m_2+2m_2-2m, that is, the number of possible sequences making up the intersection with a central window multiplied by the number of ways each can be extended, assuming the central window undershoots _1 and _2. This is clearly over-counting all the sequences in the intersection that have central windows that hit or overshoot _1 and _2, however for the purposes of proving Theorem <ref> this suffices. §.§.§ Case 2.2(i): Cardinality of the Set S_3. The size of S_3 is also simple to bound above. Without loss of generality, assume that m_2-m+l_2≥ m_2-m+l_1. If a sequence is in S_3, then the central window contains _2. Hence, it is necessary that it contains the square window of size 2(m_2-m+l_2)-1 that is fully generated in the intersection and contains _2. Call this window . Assumption (<ref>) and symmetry then imply that contains _1. As is a square and contains both _1 and _2, it has side length L≥ l_1+l_2. Lemma <ref> then implies that #S_3≤ q^2m_1+2m_2-2m+l_1+l_2+1-L≤ q^2m_1+2m_2-2m+1. §.§.§ Case 2.2.(ii): Cardinality of the Sets S_2 and S_1. Consider the hat cone of _2. Let X be the set of sequences of length 2m_2+1+l_2 that contain the central window and _2 (that is, those containing the green rectangle in Figure <ref>). The size of X is given by Lemma <ref> with d=1: #X=q^m_2-m/(q+1)^2(2· q^2m+2+2· q^2m+1-q^2+1)≪_q q^m_2+m. Similarly, define Y as the set of sequences with windows containing the purple rectangle in Figure <ref>. Lemma <ref> gives the size of Y as #Y=q^m_2-2m+m_1+2/(q+1)^2(2q^2(2m-m_1)+2q^2(2m-m_1)-q^2+1)≪_q q^m_2+2m-m_1≤ q^m_2+m. Here, #X-#Y is the number of sequences whose number wall has a window containing the central window and _2 but that does not touch _1. By Lemma <ref>, every sequence counted in #X-#Y can be extended in q^2m_1-2m different ways. Hence, #S_2= (#X-#Y)· q^2m_1-2m≪_q q^m_2+2m_1-m The proof of the inequality #S_1≪_q q^m_1+2m_2-m is the same, swapping the roles of _1 and _2. §.§.§ Completion of Case 2.2. Combining (<ref>),(<ref>),(<ref>), (<ref>) and (<ref>) gives #S_r,q^_1,_2 =q^2m_1+2m_2-2m+1-q^2m_1+2m_2-2m+#S_0+#S_1+#S_2+#S_3 ≪_q q^2m_1+2m_2-2m+1+ m· q^m_1+m_2+ q^m_2+2m_1-m+q^m_1+2m_2-m. Using that m≤ q^m for all m,q∈ and that m_i≥ m gives (<ref>) ≪_q q^2m_1+2m_2-2m+1. Multiplying by the q^r-(2m_1+2m_2-2m+l_1+l_2+1) arbitrary continuations completes the proof of Case 2. §.§.§ Case 3 The third and final case is when the hat cone of one window is contained fully within the hat cone of the other. Assume without loss of generality that the hat cone of _1 is contained inside the hat cone of _2, and that _1 appears on the right side of or above _2. Extend the hat generator of _1 on the right until it reaches the end of the hat generator of _2. For this case, this is called the extended hat generator of _1. There are two sub-cases. 
§.§.§ Case 3.1: Here, the number wall generated by the extended hat generator of _1 has depth greater than m_2, as illustrated below: Corollary <ref> shows there are q^2m_1 sequences of length 2m_1+l_1 which are hat generators for _1. Using Lemma <ref>, there are q^2m_2-2m_1-l_1 continuations to the hat generator of _1 on the right-hand side with length 2m_2+1-2m_1-l_1 such that the lowest light blue entry in Figure <ref> is zero. Lemma <ref> shows there is at most a single continuation of length l_2” on the right-hand side and of length l_2' on the left-hand side such that the number wall has a window containing _2, although sometimes such a continuation may not exist. Hence, there are at most q^2m_2-l_1 sequences with windows containing _1 and _2 in their number wall. Multiplying by q^r-2m_2-l_2 for all the arbitrary choices of the sequence that are not in the hat generator of _2 yields, as required, #S_r,q^_1,_2< q^r-l_1-l_2. §.§.§ Case 3.2: Here, the number wall generated by the extended hat generator of _1 has depth m<m_2. Without loss of generality, assume _1 is to the right of _2. Once again, there are two sub-cases: * If m_2> 2m, then every possible sequence of length 2m+1 only has windows that undershoot _2, as illustrated in Figure <ref>. Hence, for any of the arbitrary q^2m-2m_1-l_1 continuations of the hat generator of _1, Lemma <ref> shows there are q^2m_2-2m continuations giving a window containing _2. This gives a total of q^2m_2-l_1 sequences of length 2m_2+l_2 with windows containing _1 and _2. Multiplying by the q^r-(2m_2-l_2) arbitrary continuations completes the proof. * If m_2≤2m, the number wall generated by the extended hat generator for _1 either has no window on row m or else a window that undershoots, hits or overshoots _2. * If such a window does not exist, or it undershoots _2, then Corollary <ref> or Lemma <ref> show there are q^2m_2-2m left-side extensions that give a number wall with a window containing _2. Hence, as an over-estimate, there are less than or equal to q^2m-2m_1-l_1 right-side continuations of the hat generator for _1 that satisfy this criteria. Therefore, these cases contribute at most q^2m_2-l_1 to the total. * If a window in the number wall overshoots _2 then it also contains _1. Let l≥ l_1+l_2+1 be the size of . This is illustrated below: < g r a p h i c s > figureIf a sequence has a zero portion in the shape of the minimal rectangle containing _1 and _2 (dashed, yellow), it necessarily contains the full square window (black). By Corollary <ref> there are less than or equal to q^2m_2-l_1-1 sequences in this case. This completes the proof of Theorem <ref>. § A KHINTCHINE-TYPE RESULT IN P(T)-ADIC APPROXIMATION This section contains the proof of Theorem <ref>. This is a corollary of the following theorem, proved using the combinatorial statements established in the previous section. Let f:{q^k}_k≥0∪{0}→ℝ+ be a increasing function and p(t)∈_q[t] be an irreducible polynomial. For μ-almost every Laurent series Θ∈𝕀, the inequality N(t)· f(|N(t)|)·|N(t)|_p(t)·|⟨|N(t)|·Θ(t)⟩|≤1/q has infinitely (finitely, respectively) many solutions if ∑_k≥0k/f(q^k) diverges (converges, respectively). Theorem <ref> implies Theorem <ref>. To show this, the following set is defined: fix a polynomial N(t)=M(t)· t^k where k∈∪{0} and M(t) is coprime to t. Define h:=(M(t)), implying h+k=(N(t)), and let[A_N(t) also depends on the irreducible p(t), but this is dropped from the notation.] A_N(t),f:={Θ(t)∈: N(t)· f(|N(t)|)·|N(t)|_p(t)·|⟨|N(t)|·Θ(t)⟩|≤ q^-1}. 
If the sum <ref> converges, the proof is clear. In the divergence case, the set W(f,t) (defined in equation (<ref>)) is expressed as W(f,t)=⋂_i=0^∞⋂_N(t)∈_q[t]\{0}A_N,q^if. As this is a countable intersection of full measure sets, W(f,t) has full measure. From now on, A_N(t),f is written as A_N(t).   §.§.§ The Convergent Case Recall the notation of a ball in the field of Laurent series, introduced in (<ref>). Given an irreducible polynomial p(t), the set A_N(t) is decomposed as follows: A_N(t) =⋃_P∈(𝔽_q[t])_(N){Φ∈: |NΦ-P|≤q^-1/f(|N|)· |N|·|N|_p(t)} = ⋃_P∈(𝔽_q[t])_(N) B(PN^-1, q^-1/f(|N|)·|N|^2·|N|_p(t)). This implies that μ(A_N(t))≤ q^(N)+1q^-1/f(|N|)·|N|^2·|N|_p(t)=1/f(|N|)·|N|·|N|_p(t) Define m:=(p(t)). Rewriting N as N=M· p(t)^k for M∈_q[t] such that (M,p(t))=1 and for k∈∪{0} gives ∑_N∈_q[t]\{0}1/f(|N|)·|N|·|N|_p(t) =∑_M∈_q[t]\{0} (M,p(t))=1∑_k≥01/f(|p(t)|^k·|M|)· |M| Define h=(M(t)) and expand M(t) as M=∑_i=0^H A_i(t)p(t)^i, where A_i(t)∈(_q[t])_m-1, H=⌊h/m⌋ and (A_H)=h-mH. If (M,p(t))=1, this also implies that A_0(t)≠0. Assuming h≥ m, there are (q^m-1)_coefficients of A_0(t)≠0·q^m(H-1)_ coefficients of A_1(t),…,A_H-1(t)·q^h-mH-1· (q-1)_ coefficients of A_H(t)≍ q^h such polynomials M. If h≤ m, then any of the q^h+1-1 nonzero polynomials of degree h are coprime to p(t). Hence, (<ref>)=∑_h≥0∑_M∈_q[t]\{0} (M,p(t))=1 (M)=h∑_k≥01/f(|p(t)|^k·|M|)· |M|≍∑_h≥0∑_k≥01/f(q^h+mk) Defining the exponent i:=h+mk, and since h≡ i m, each value of i appears in (<ref>) ⌊i/m⌋=H+k times. Thus, (<ref>)≪∑_i≥0i/m· f(q^i)·≪∑_i≥0i/f(q^i) Applying the Borel-Cantelli lemma proves the convergence case. §.§.§ The Divergence case: The problem is rephrased in terms of number walls. If Θ(t)∈ A_N(t), Theorem <ref> implies there is a window of size ⌊log_q(f(q^h+k))⌋ in column h+k+1 and row h of the number wall generated by Θ(t). This is the window represented by 𝐍(𝐭), denoted _N(t). By Corollary <ref>, for r a large natural number, there are q^r-⌊log_q(f(q^h+k))⌋ possible sequences of length r over _q whose number wall contains _N(t). However, it is clear that _N(t) only depends on k and h, and not the specific choice of M(t). This motivates the following definition: for h,k∈, define the set A_h,k:={Θ(t)∈ A_N(t): N(t)=M(t)· t^k,  (M(t))=h,  (M(t),t)=1}. Explicitly, A_h,k is the union of all the sets A_N(t) which can expressed as N(t)=M(t)· t^k for some polynomial M(t) of degree h. In particular, A_h,k is the union of finitely many sets A_N(t). Therefore, if Θ(t) is in the set A_N(t) for infinitely many N(t)∈_q[t] then Θ(t) is also in infinitely many sets A_h,k. The converse is also clear, implying lim sup_|N(t)|→∞A_N(t)=lim sup_min(h,k)→∞A_h,k. Similarly as above, define _h,k as the window represented by any N(t) which can be decomposed as M(t)· t^k, where (M(t))=h and M(t) and t are coprime. The following classical lemma is used to attain positive measure of W(f,t) from the divergence of the series (<ref>). The reader is referred to <cit.> for the proof and further reading. Let {E_n}_n∈ be a sequence measureable sets in a space with measure μ. Define E_∞:=lim sup_n→∞E_n. Assume ∑_n≥0μ(E_n)=∞ and that there exists a constant C>0 such that the inquality ∑_s,t≤ rμ(E_s∩ E_t)≤ C(∑_n≤ rμ(E_n))^2 holds for infinitely many r∈. Then μ(E_∞)≥1/C. This lemma is now used to show that A_∞=lim sup_min(h,k)→∞A_h,k has positive measure when p(t)=t. The case of any irreducible polynomial will be inferred from this one afterwards. Let N(t)∈_q[t] be a polynomial. 
The sum ∑_h_1≤ r∑_k_1≤ r-h_1∑_h_2≤ r∑_k_2≤ r-h_2μ(A_h_1,k_1∩ A_h_2,k_2) is partitioned into two pieces: S_1 =∑_h_1≤ r∑_k_1≤ r-h_1∑_h_2≤ r∑_k_2≤ r-h_2 _h_1,k_1 touches _h_2,k_2μ(A_h_1,k_1∩ A_h_2,k_2), and S_2 =∑_h_1≤ r∑_k_1≤ r-h_1∑_h_2≤ r∑_k_2≤ r-h_2 _h_1,k_1 does not touch _h_2,k_2μ(A_h_1,k_1∩ A_h_2,k_2). The sum S_1 is dealt with first. The goal is to show that S_1≪(∑_h≤ r∑_k≤ r-hμ(A_h,k))^2=∑_h_1≤ r∑_k_1≤ r-h_1(∑_h_2≤ r∑_k_2≤ r-h_2μ(A_h_2,k_2))μ(A_h_1,k_1). To achieve this, fix h_1 and k_1, then consider all the values of h_2 and k_2 such that _h_1,k_1 touches _h_2,k_2. As the size of _h,k is determined only by h,k and the function f, there are only a finite number of windows _h_2,k_2 touching _h_1,k_1. These values are partitioned into disjoint sets {P_i}_i≥0 depending on how much larger than _h_1,k_1 a window must be to contain _h_1,k_1 and _h_2,k_2. Explicitly, P_i contains all the pairs (h_2,k_2) such that the minimum size of any window containing _h_1,k_1 and _h_2,k_2 is i more than _h_1,k_1. For example, P_0 contains only the pairs (h_2,k_2) such that _h_2,k_2=_h_1,k_1 (that is, h_2=h_1 and k_2=k_1). Similarly, P_1 contains all the pairs (h_1,k_1) such that the window containing _h_1,k_1 and _h_2,k_2 is only one larger than the size of _h_1,k_1. Below, the diagram used in the calculation of the measure of P_1: A window with top left corner in the red positions would have exactly the same size as _h_1,k_1. Then, a window in the blue (green, respectively) positions would have a size less than or equal to (greater than or equal to, respectively) the size of _h_1,k_1. Each starting position determines the shape of the portion of the number wall that must be full of zeros. For example, if h_2 and k_2 are such that _h_2,k_2 starts in the top left blue position in Figure <ref>, then _h_2,k_2 has size less than or equal to _h_1,k_1 and hence _h_1,k_1 and _h_2,k_2 are contained in a single square of length l_1+1. Let be a window of size l. Using Corollary <ref> and the ultrametic property of the absolute value gives μ({Θ∈_q((t^-1)): Θ contains in its number wall})=q^r-l· q^r=q^-l. Hence, in this case μ(A_h_1,k_1∩ A_h_2,k_2)=q^-l_1-1. Similarly, if h_2 and k_2 are such that _h_2,k_2 starts in the top middle blue position, then _h_1,k_1 and _h_2,k_2 are contained inside a rectangle with height one greater than its width. In this case, Lemma <ref> implies that μ(A_h_1,k_1∩ A_h_1-1,k_1)≪ 2q^-l_1-1. Doing a similar calculation for the other six pairs (h_2,k_2)∈ P_1 shows that ∑_(h_2,k_2)∈ P_1μ(A_h_1,k_1∩ A_h_2,k_2)≤8·μ(A_h_1,k_1∩ A_h_1-1,k_1)≪· q^-l_1-1. Similarly, the following diagram depicts starting positions for _h_2,k_2 when (h_2,k_2)∈ P_i. There are 8i pairs (h_2,k_2)∈ P_i. If h_2 and k_2 are such that _h_2,k_2 starts on the top middle blue position, then _h_1,k_1 and _h_2,k_2 are contained in a rectangle with height i more than its width. Every other pair (h-2,k-2)∈ P_2 has _h_1,k_1 and _h_2,k_2 contained in a rectangle with difference between width and height at most i. Once again, Lemma <ref> implies that ∑_(h_2,k_2)∈ P_2μ(A_h_1,k_1∩ A_h_2,k_2)≤8i·μ(A_h_1,k_1∩ A_h_1-2,k_1) ≪ 16 · (i+1)· q^-l_1-i. Depending on the function f, the size of the window _h_2,k_2 could be different from _h_1,k_1. This leads to some pairs (h_2,k_2)∈ P_i being replaced with (h_2',k_2') representing the starting positions being moved up and to the left. In some cases, pairs in P_i are ignored entirely. 
Moving a starting position does not effect the calculation and it suffices to assume none are ignored when calculating an upper bound. Furthermore, when i=l_1+1, the sets P_i become empty, as the windows _h_1,k_1 and _h_2,k_2 are no longer touching. Hence, ∑_h_2≤ r∑_k_2≤ r-h_2 _h_1,k_1 touches _h_2,k_2μ(A_h_1,k_1∩ A_h_2,k_2) ≪_q q^-l_1·(1+8·∑_i=1^∞ i^2q^-i) =μ(A_h_1,k_1)·(1+8·∑_i=1^∞ i^2q^-i). Therefore, using the assumption that ∑_h≥0∑_k≥0μ(A_h,k)=∞, there exists some value r such that ∑_h+k≤ rμ(A_h,k)>1 in such a way that S_1  ≪  ∑_h+k≤ rμ(A_h,k)  <  (∑_h+k≤ rμ(A_h,k))^2. Next, S_2, as defined in (<ref>), is dealt with. If _h_1,k_1 does not touch _h_2,k_2, then μ(A_h_1,k_1∩ A_h_2,k_2 is given by the number of sequences that have windows containing _h_1,k_1 and _h_2,k_2 multiplied by the measure of the ball centered around them. By Theorem <ref>, this is μ(A_h_1,k_1∩ A_h_2,k_2) ≪ q^r-l_1-l_2· q^-r=q^-l_1-l_2. Hence, S_2 ≪∑_h_1+k_1≤ r∑_h_2+k_2≤ r _h_1,k_1 does not touch _h_2,k_2 q^-⌊log_q(f(q^h_1+k_1)⌋-⌊log_q(f(q^h_2+k_2)⌋. To acquire an upper bound on this sum, the condition that _h_1,k_1 and _h_2,k_2 do not touch can be dropped: (<ref>) ≤∑_h_1+k_1≤ r∑_h_2+k_2≤ r q^-⌊log_q(f(q^h_1+k_1)⌋-⌊log_q(f(q^h_2+k_2)⌋ = (∑_h_1+k_1≤ rq^-⌊log_q(f(q^h_1+k_1)⌋)·(∑_h_2+k_2≤ rq^⌊log_q(f(q^h_2+k_2)⌋) =(∑_h+k≤ rμ(A_h,k))^2. Hence, by the Divergence Borel-Cantelli lemma (Lemma <ref>), the set A_∞ has positive measure. The following theorem is used to go from positive measure to full measure: Let ϕ:_q[t]→ be a function and let E_N={Θ∈𝕀: | Θ-P/N|≤ϕ(N)/|N| for some polynomial P with (P)≤(N) and (P,N)=1}. Then μ(lim sup_|N|→∞E_N)∈{0,1}. The proof of this proposition is completed by Inoue and Nakada in <cit.>. It is a direct adaptation of the proof in the real case, provided by Gallagher in <cit.>. The sets E_N in the statement of Proposition <ref> are the sets A_N(t) from (<ref>), and hence positive measure of A_h,k implies full measure. The proof in the case p(t)=t is thus complete. Finally, the p(t)=t case is shown to imply the divergence part of the theorem for an arbitrary irreducible polynomial p(t). Assume that the sum (<ref>) diverges and let N(t)∈_q[t] and Θ(t)∈_q((t^-1)) be a pair satisfying the inequality (<ref>). If b,c∈ and Ξ(t)=∑_i=b^ca_it^i∈_q((t^-1)), then Ξ(p(t)):=∑_i=b^ca_ip(t)^i. If m=(p(t)) and k and l are natural numbers such that |N(t)|_t:=q^-k and |⟨ N(t)·Θ(t)⟩|=q^-l, then |N(p(t))|_p(t)=q^-mk and |⟨ N(p(t)·Θ(p(t))⟩|=q^-lm. Hence, |N(p(t))|_p(t)· |N(p(t))|· f(|N(t)|)^m·|⟨ N(p(t))·Θ(p(t))⟩| = (|N(t)|· f(|N(t)|)·|N(t)|_t·|⟨ N(t)·Θ(t)⟩|)^m≤ q^-m. The following lemma is used to extend the p(t)=t case to a general irreducible polynomial p(t): Define f'(q^k):=f(q^km)^1/m. If ∑_k≥1k/f(q^k)=∞, then ∑_k≥1k/f'(q^k)=∞. For now, take Lemma <ref> for granted. Note that f'(|N(t)|)^m=f(|N(p(t))|). The p(t)=t case of Theorem <ref> and the equation (<ref>) show that for μ-almost all Θ(t)∈, |N(p(t))|_p(t)· |N(p(t))|· f(|N(p(t))|)·|⟨ N(p(t))·Θ(p(t))⟩|≤ q^-m. Define the set ϑ:= {Θ(p(t))∈_q((p(t)^-1)): Θ(p(t)) satisfies (<ref>) for infinitely many N(p(t))∈_q[p(t)]}, and let t^i ϑ={t^i·Θ(p(t)): Θ(p(t)∈ϑ}. If j is the highest power[As Θ(t) was in the unit interval, j is well defined.] of t in any Laurent series in t^iϑ, then it is clear that j≡ i m. Therefore, every Laurent series Ξ(t) in the Minkowski sum M(ϑ):=⊕_i=0^m-1t^iϑ satisfies |N(p(t))|_p(t)· |N(p(t))|· f(|N(p(t))|)·|⟨ N(p(t))·Ξ(p(t))⟩|≤ q^-1. To see that M(ϑ) has full measure, let B be an arbitrary ball in _q((t^-1)) and B_p(t)={Θ(p(t)): Θ(t)∈ B}. 
It is clear that B=M(B_p(t)). As the Haar measure of a set is defined as the infimum over all coverings of that set with balls, this is sufficient to conclude μ(M(ϑ))=1. This implies M(ϑ) has full measure. It remains to proof Lemma <ref>: If m=1, then the claim is trivial. Therefore, assume m≥2. Since the series (<ref>) diverges, for any c,ε>0 there exist infinitely many k∈ such that f(q^k)<k^2+ε/c. Let I⊂ be the set of all natural numbers k satisfying (<ref>). Furthermore, define J={k∈ I: f(q^k)<k^1+ε}. Then, at least one of the following is true: ∑_k∈ Jk/f(q^k)=∞ or ∑_k∈ I\ Jk/f(q^k)=∞. This follows from the definition of the set and from the identity (<ref>)= ∑_k∈ Jk/f(q^k)+∑_k∈ I\ Jk/f(q^k)+∑_k∉Ik/f(q^k)_<∞ =∞. Assume the first case of (<ref>) holds. Then, J is clearly infinite and ∑_k∈ Jk/f'(q^k) =∑_k∈ Jk/f(q^km)^1/m ≫_m,ε ∑_k∈ Jk^1-1+ε/m=∞. Now, assuming the second case of (<ref>), ∑_k∈ I\ Jk/f'(q^k) =∑_k∈ I\ Jk/f(q^km)^1/m≫_m,ε∑_k∈ I\ J k^1-2+ε/m. If m>2 and ε is sufficiently small, (<ref>) trivially diverges. If m=2, from the definition of k∈ I\ J: (<ref>) =∑_k∈ I\ Jk^-ε/2≥∑_k∈ I\ Jk/f(q^k)=∞. Therefore, both cases of (<ref>) have a divergent subsequence and the proof is complete. This completes the proof of Theorem <ref>. § OPEN PROBLEMS Theorem <ref> has reduced disproving p(t)-LC to disproving t-LC. OVer the duration of this work, python code <cit.> was written that generated the number walls of any given finite sequence. This provides strong experimental evidence that the counterexample to t-LC over _3 from <cit.> is also a counterexample to t-LC over _p for p congruent to 3 modulo 4. Furthermore, define the Adapted Paper-Folding Sequence as (f_n)_n∈ as f_n= k_n-1/2, where k_n is the greatest odd factor of n reduced modulo 8. There is equally strong experimental evidence that this is a counterexample to t-LC over fields of characteristic p congruent to 1 modulo 4. The code <cit.> has also been used to show that every sequence over _2 has windows of size at least 3 in its number wall. There is no obvious reason this should not be true when 3 is replaced by any natural number. This leads to the following conjecture. The t-adic Littlewood Conjecture is true over _2, but false over all other finite fields. A second natural continuation is to try and improve Theorem <ref>. However, doing so would require a new method. An obvious suggestion would be to adapt the method of Badziahin and Velani from <cit.> to get full dimension with a growth function of log(|N|)·log(log(|N|)), as they do in the real case. However, any attempt to do this has been unsuccessful. A finite number wall generated by a sequence of length n has O(n^2) points. This is the reason for the log^2 term in Theorem <ref> and it motivates the following conjecture: The set of counterexamples to t-LC with growth function f has zero Hausdorff dimension when f is of the form log^1+λ for any λ<1. This is a departure from the real case, where it is conjectured the equivalent set has full Hausdorff dimension for f=log.
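The experiments referred to above are straightforward to reproduce in outline. The sketch below generates the Adapted Paper-Folding Sequence defined in (<ref>) and computes a small patch of its number wall by brute force over 𝔽_p for a prime p≡1 (mod 4). It assumes the usual Toeplitz-determinant description of number-wall entries, which may differ from Definition <ref> by an index reflection; since only the zero pattern matters for window statistics, this does not affect the qualitative picture, but the sketch should be taken as illustrative only, with the published code <cit.> remaining the authoritative implementation.

```python
def adapted_paper_folding(n):
    """f_n = (k_n - 1)/2, where k_n is the greatest odd factor of n reduced modulo 8."""
    k = n
    while k % 2 == 0:
        k //= 2
    return ((k % 8) - 1) // 2

def det_mod_p(rows, p):
    """Determinant of a square matrix over F_p (p prime) by Gaussian elimination."""
    a = [row[:] for row in rows]
    n, det = len(a), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if a[r][i] % p), None)
        if piv is None:
            return 0
        if piv != i:
            a[i], a[piv] = a[piv], a[i]
            det = -det
        det = det * a[i][i] % p
        inv = pow(a[i][i], p - 2, p)
        for r in range(i + 1, n):
            f = a[r][i] * inv % p
            for c in range(i, n):
                a[r][c] = (a[r][c] - f * a[i][c]) % p
    return det % p

def wall_entry(seq, m, n, p):
    """Row-m, column-n number-wall entry as an (m+1)x(m+1) Toeplitz determinant
    (assumed convention: entry (i, j) = s_{n+i-j}); rows -1 and -2 are 1 and 0."""
    if m == -2:
        return 0
    if m == -1:
        return 1
    mat = [[seq[n + i - j] % p for j in range(m + 1)] for i in range(m + 1)]
    return det_mod_p(mat, p)

p = 5                                    # characteristic congruent to 1 modulo 4
s = [adapted_paper_folding(n) for n in range(1, 200)]
for m in range(0, 8):                    # print a small patch, marking zero entries
    row = [wall_entry(s, m, n, p) for n in range(m, 40)]
    print(m, ''.join('.' if v == 0 else str(v) for v in row))
```

Inspecting larger patches produced this way is how one looks for (or fails to find) windows of unbounded size in the candidate counterexamples discussed above.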
http://arxiv.org/abs/2307.03281v1
20230706203153
Accreting Black Holes Skewing and Bending the Optical Emission from Massive Wolf-Rayet Companions -- A Case Study of IC10 X-1
[ "Sayantan Bhattacharya", "Dimitris M. Christodoulou", "Andre-Nicolas Chene", "Silas G. T. Laycock", "Breanna A. Binder", "Demosthenes Kazanas" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR" ]
We present a statistical analysis of the Heii 4686 emission line in the spectra of the black hole and Wolf-Rayet (WR) star of the high-mass X-ray binary IC10 X-1. This line is visibly skewed, and the third moment (skewness) varies with the binary's orbital phase. We describe a new method of extracting such weak/faint features lying barely above a noisy continuum. Using the moments of these features, we have been able to decompose these skewed lines into two symmetric Gaussian profiles as a function of the orbital phase. The astrophysical implications of this decomposition are significant due to the complex nature of wind-accretion stream interactions in such binary systems. Previous studies have already shown a 0.25 phase lag in the radial velocity curve of the star and the X-ray eclipse, which indicates that the Heii emitters might be in the stellar wind, hence not tracing the star's orbital motion. Results from this work further suggest the existence of two separate emitting regions, one in the stellar wind in the shadow of the WR star, and another in the accretion stream that impacts the black hole's outer accretion disk; and the observed skewed Heii lines can be reproduced by superposition of the two corresponding time-dependent Gaussian emission profiles. accretion, accretion discs – black hole physics – methods: statistical – stars: Wolf-Rayet – techniques: spectroscopic – X-rays: binaries § INTRODUCTION Spectroscopic observations of Wolf-Rayet (WR) stars have always revealed interesting physics about their environments. WR stellar spectra are dominated by multiple emission lines (He, N, C, and O) due to a massive and optically thick wind emanating from the WR star <cit.>. Analyzing these spectral features serves the versatile purpose of studying the star itself, its companion, and their interactions. In this work, we are studying a member of this rare subset of binaries, IC10 X-1, an eclipsing high-mass X-ray binary (HMXB) consisting of a black hole (BH) of estimated mass ∼15-35M_⊙ and a WN3-type WR star <cit.>. <cit.> determined [MAC92] 17A to be the donor star in the IC10 X-1 system and identified it as a WNE star situated in a crowded field with three other stars to within 0.3^''-0.4^'' of the X-ray source. In this study, a model spectrum <cit.> was used to find the stellar parameters: T_* = 85,000 K, log(L/_⊙) = 6.05, Ṁ= 4 × 10^-6 _⊙ yr^-1, and v_∞ = 1750 km s^-1. Several early-type WR stars in the Small Magellanic Cloud (SMC) (similar in metallicity to IC10) have similar properties to [MAC92] 17A.
On the other hand, the association of [MAC92] 17A with a compact object makes it more similar to X-ray sources, such as Cyg X-3 and NGC300 X-1, rather than isolated or non-compact binary WR stars. <cit.> monitored this system with 10 Keck spectra and determined a radial velocity of 370±20 km s^-1. This resulted in a BH mass in the range of 23.1 ± 2.1 _⊙ to 32.7 ± 2.6 _⊙, making IC10 X-1 the most massive stellar-mass BH at that time. In HMXBs, the compact object (possibly a BH in our case) accretes material from the stellar wind of the massive companion (the WR star in IC10 X-1), and in turn, it emits X-ray photons. Most of the stellar wind is highly ionized, except for the shadow region behind the WR star with respect to the BH. This phenomenon is very clearly observed in the spectra of these systems. The Heii 4686 line presumably originates from the relatively smaller and less ionized shielded region behind the WR star <cit.>. The radial velocity curve determined from this configuration <cit.> tracks the wind's motion rather than the stellar orbital motion <cit.>, hence we observe a 0.25 phase lag in the radial velocity (RV) curve, rendering the BH's mass determination inaccurate. In the absence of a well-determined compact object mass, the other parameters (orbital period of the system, eclipse duration, and the stellar parameters of the WR star) can be used to explore more plausible solutions <cit.>. This mass conundrum also opens the door for consideration of a low-mass BH or even a neutron star (NS) companion, as shown in Table 1 of <cit.>. Although the Heii emission line is not useful in determining accurate Keplerian binary parameters, it can provide solid information about the stellar wind itself, viz., its ionization structure and velocity distribution. In this work, we have studied the time evolution of statistical moments (primarily up to the fourth moment) of the Heii emission line and we have deciphered the physics governing the outflow from the orbiting WR+BH binary in IC10 X-1. The WR wind has a high mass loss rate and outflowing velocity <cit.>, and the accretion by the compact object varies drastically around the stellar orbit <cit.>. The velocity vectors of multiple wind components imprint their signatures on the optical emission lines. The pronounced skewness of the Heii 4686 emission line and its variation over the binary orbit can be used to understand the accretion (BH) - wind (WR star) interaction. The data can even be used to determine the locations of individual Heii emitters around the binary orbit, as we show in this work. Comparisons of the WN star in IC10 X-1 with WR stars, especially the WN population in the Large Magellanic Cloud <cit.> and the SMC <cit.>, can help us understand the effects of the compact object's presence in more detail. Such studies derive fundamental stellar parameters of WN stars including the distribution of terminal velocities (v_∞). The value of v_∞ of our WN star in IC10 X-1 is comparable to the results of the above-mentioned studies; a more complete comparison of other parameters using fits to radiative transfer models requires much higher-quality spectra. An emission line profile can be mathematically approximated by a Gaussian function. Depending on the movement of the line's centroid, we can determine the velocity of the emitting regions using Doppler shifts. This is a very well-known phenomenon, but when the observed emission line is skewed, the measured velocity could be a representation of multiple superposed velocity vectors. 
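To make this concrete, the toy sketch below superposes two Gaussian components of the Heii 4686 line that are Doppler-shifted by two different hypothetical line-of-sight velocities. The blended profile acquires a nonzero normalized skewness, while its single centroid velocity is only a compromise between the two true component velocities; all numerical values are illustrative placeholders, not fitted quantities.

```python
import numpy as np

C_KMS = 299_792.458           # speed of light (km/s)
LAM0 = 4686.0                 # rest wavelength of He II (Angstrom)

def gaussian(lam, centre, sigma):
    return np.exp(-0.5 * ((lam - centre) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

lam = np.linspace(4670.0, 4702.0, 4001)
v1, v2 = +250.0, -150.0       # hypothetical component velocities (km/s)
sig1, sig2 = 2.0, 3.5         # hypothetical Gaussian widths (Angstrom)

profile = 0.5 * gaussian(lam, LAM0 * (1 + v1 / C_KMS), sig1) \
        + 0.5 * gaussian(lam, LAM0 * (1 + v2 / C_KMS), sig2)
profile /= np.trapz(profile, lam)                  # normalise to unit area

mean = np.trapz(lam * profile, lam)
var = np.trapz((lam - mean) ** 2 * profile, lam)
skew = np.trapz((lam - mean) ** 3 * profile, lam) / var ** 1.5

print(f"centroid velocity  : {(mean / LAM0 - 1) * C_KMS:+7.1f} km/s")
print(f"true velocities    : {v1:+7.1f}, {v2:+7.1f} km/s")
print(f"normalised skewness: {skew:+.3f}")
```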
When we decompose such a skewed Gaussian line profile into two (or more) unskewed components, then the velocities of individual components can be determined. Such asymmetry in the profile could be caused by an inhomogeneous stellar wind from the WR star, as shown by <cit.>; in their work, the inhomogeneity is caused by clump formation inside the wind. Profile asymmetry may also be caused by the presence of a compact object around its orbit, and we have tried to model this scenario in the present work; we have formulated a new analytic method to decompose the skewed emission lines into distinct Gaussian components, and we discuss the physics of two such components as they evolve with orbital phase. The outline of the paper is as follows: In  <ref>, we describe the archival data used in this work and the method used to detect and analyze the especially weak Heii 4686 emission lines at different phase bins. In  <ref>, we describe the analytic results of decomposing a skewed Gaussian profile into two unskewed Gaussian components. In  <ref>, we discuss the astrophysical implications of this decomposition for IC10 X-1, and we conclude in  <ref> with a summary of our results. § OBSERVATIONS AND REDUCTION METHODS IC10 X-1 has been observed by the GEMINI-North/GMOS telescope through multiple observation campaigns over a long time span (2001-2019), and we have analyzed all of these archival data. The spectral data are summarized in Table <ref>, all collected in the MOS mode and archived by <cit.> <cit.>. All the spectra were obtained using the B600 grating in the MOS mode, most of the time with a slit-width of 1.0^''. The B600 grating has a resolving power R=1688 at the blaze wavelength of 461 nm. The 0.5 Å/pixel dispersion results in a ≈ 32 km/s velocity resolution per pixel at 4686 Å. Spectra from different observation campaigns with different observing conditions have been phasewise stacked, hence the quality varies significantly in each phase bin. In particular, spectra around phase ϕ = 0.7 are of very low quality, and the Heii line is barely detected above the noise. The gemini package in iraf was used to process the raw data. Steps described in gmosexample have been followed to perform standard calibrations and extraction of the one-dimensional spectra. No preliminary sky subtractions were performed at this stage because of the faintness of the source. The background was subtracted by a new procedure described in detail in  <ref> below. Conventional techniques do not "see" the Heii 4686 emission line protruding above the noise. The new procedure starts by taking into account the asymptotic decay of the line into the surrounding background. In the process, we obtained a significant number of spectra (52); yet, the Heii line was not detected in most of them individually; hence, we had to stack them to amplify the signal. The spectra were stacked phase-wise to increase the signal-to-noise ratio in each phase bin. Two binning schemes were used, one involving 10 bins of width Δϕ = 0.1, and another with 4 bins of width Δϕ=0.25. Then, the two data sets were analyzed together in order to look for variations in the moments of the Heii 4686 line as a function of the orbital phase. 
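A minimal sketch of the phase-wise stacking step is given below. The ephemeris values (orbital period and epoch of phase zero) are placeholders rather than the values actually adopted for IC10 X-1, and the input spectra are assumed to be one-dimensional arrays already resampled onto a common wavelength grid; bin widths of 0.1 and 0.25 reproduce the two binning schemes described above.

```python
import numpy as np

P_ORB_DAYS = 1.45        # placeholder orbital period (days); not the adopted ephemeris
T0_MJD = 50000.0         # placeholder epoch of phase zero (MJD)

def orbital_phase(t_mjd):
    return ((t_mjd - T0_MJD) / P_ORB_DAYS) % 1.0

def stack_by_phase(times_mjd, spectra, bin_width=0.1):
    """Co-add 1-D spectra (rows of `spectra`) that fall in the same orbital-phase bin.

    Returns a dict {bin centre: mean spectrum}."""
    spectra = np.asarray(spectra, dtype=float)
    nbins = int(round(1.0 / bin_width))
    idx = np.minimum((orbital_phase(np.asarray(times_mjd)) / bin_width).astype(int), nbins - 1)
    stacks = {}
    for b in range(nbins):
        sel = idx == b
        if sel.any():
            stacks[(b + 0.5) * bin_width] = spectra[sel].mean(axis=0)
    return stacks

# Example with synthetic data: 52 exposures of a 3000-pixel spectrum
rng = np.random.default_rng(1)
times = 50000.0 + rng.uniform(0.0, 300.0, size=52)
spectra = rng.normal(100.0, 10.0, size=(52, 3000))
binned = stack_by_phase(times, spectra, bin_width=0.25)
print(sorted(binned))     # phase-bin centres that received at least one spectrum
```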
§.§ New Data Fitting Procedure for a Noisy Distribution with a Weak Embedded Signal At faint stellar magnitudes, the Gemini/GMOS spectra are dominated by strong emission lines due to the intervening atmosphere and by the noise that makes it hard to find out where the continuum lies around the location of a weak optical emission line such as the Heii 4686 line coming from the WR star or its wind in IC10 X-1 and similar WR binary systems. The usual procedures for spectrum decontamination do not work, even after piling up many spectra within each phase bin to amplify the signal. So, we had to devise an alternative procedure capable of fitting the Heii line along with its true continuum. (An example, the ϕ=0 case with a bin width of Δϕ = 0.25, is illustrated in Figure <ref>.) The method proceeds in the following steps: (a) Fitting starts with a visual inspection of the combined spectrum in each phase bin. We determine through experimentation the footpoints of the line and the level C_1 of the apparent continuum on the left side of the line. The left side is used because there are no important lines except N III 4630/43, which is in this case buried in the jitter, and we avoid that region in all cases. The right side is inappropriate for use because the apparent continuum is very much raised due to the presence of strong nearby emission lines such as the Hβ 4861 line. The jitter of a long segment on the left side is not fitted by a polynomial function, as is usually done. That would depress the apparent continuum significantly. Instead, we find the maximum number of counts in the jitter near the left side of the left footpoint of the Heii 4686 line, and we adopt this value for C_1. Note that this value also includes the asymptotic left tail of the Heii line. (b) We fit the left side of the Heii line with a decaying exponential function, C(λ) = L_1 + (C_ max-L_1)exp(-|λ - μ|/(2)), where (μ, C_ max) is the mode and is a free parameter resembling variance; and we determine L_1, the asymptotic value of the line that contributes to the region of the apparent continuum. Then, the true continuum lies at a level of C_0 = C_1 - L_1. In this region devoid of other lines, if C_1 > L_1, we obtain a reasonable value of true continuum C_0. In the few cases in which L_1≳ C_1 (differing by no more than 3-5 counts), we reset C_0 = L_1 (because this is where the tail of the line is headed after all; and then, it specifies the true continuum all by itself). Such cases arise from the poor quality of the spectra that we are analyzing, but the differences in counts are too small to be of particular significance. On the other hand, any traditional attempt to use the mean jitter level (ignoring background flaring and the tail of the line) for removing the “visible continuum” is doomed to failure—the weak Heii line signal is washed out by the noise. (c) Having determined the true continuum C_0, we proceed to fit both sides of the Heii line, again with exponential functions that have the appropriate asymptotic behavior C_0 (Figure <ref>a), and we determine their pseudo-variances, say __ LS and __ RS (on the left side and the right side, respectively). In the few cases with C_0 = L_1, then __ LS =. (d) The newly fitted curves are still not Gaussian because their amplitudes and areas are not consistent (Figure <ref>b). 
Thus, we reset the amplitudes so that the area under each half-curve is 1/2; and we fit again the two half-profiles, this time with actual half-Gaussian functions, to determine their true variances, say V_ℓ and V_r. For convenience, we also shift F(λ) in wavelength space, so that its mode is located at μ = 0. (e) We concatenate the two half-Gaussians into one distribution function F(λ) (Figure <ref>c). It is easy to verify by numerical integration that the total area under the curve is equal to 1, and we did so. (f) Now, we are ready to extract the Heii signal out of the noisy spectral data. The best-fitted profile is not noisy at all, although we can assign typical error bars to its values by standard uncertainty propagation analysis. We find that the relative errors do not exceed ±10% in all observed points. We choose a grid spacing of 1 mÅ(so that numerical errors are negligible), and we generate synthetic data points from the distribution F(λ) out to -5√(V_ℓ) on the left and +5√(V_r) on the right of the mode μ = 0. The large number of data points generated guarantees that all corrections for bias will be quite small. Nevertheless, we apply the small bias corrections <cit.> to the moments discussed below. §.§ The Distribution Function F(λ) The final distribution function F(λ) is obviously asymmetric with a discontinuity at μ = 0 (Figure <ref>c). This does not prevent us from determining its moments by direct integrations. We computed and inspected the results up to the 8^ th normalized moment u(8)/[u(2)]^4, and we confirmed that we see a real signal in all of these moments (for example, moment u(8)/[u(2)]^4 must be significantly larger than 7!! = 105 — indeed, it is, in all phase intervals under consideration). We also investigated whether skewness could arise from instrumental effects. We used a WN star model spectrum <cit.>, and we convolved it using the instrumental line spread function. We did not find any significant skewness in this synthetic model of the Heii 4686 line. Hence, we can be confident that the observed skewness is due to physical causes associated with the IC10 X-1 compact binary system. In this work, we are primarily interested in the lower three normalized moments of the distribution function F(λ) besides the mean; specifically, the 2^ nd (variance V), 3^ rd (skewness S), and 4^ th (kyrtosis[This ought to be the Latinized spelling of the Greek word “kúrtosis” that translates to “curvature” or “bending," also used in the title; we note that Greek authors consistently use “kyrtosis" (with a "y") in the literature.] K) moments. We analyze these moments analytically in  <ref>, where we decompose the signal into two partially overlapping Gaussian components, and we follow their evolution as a function of orbital phase. § SKEW-KYRTIC MODEL DECOMPOSITION INTO TWO PURE GAUSSIAN COMPONENTS We assume that the best-fit model for the Heii line of IC10 X-1 is a unimodal superposition of two Gaussians, each of which carries a weight of p=1/2, and that are located on either side of the observed mode μ. In this pilot study, we have no reason to favor one distribution against the other by using different weights, in which case we would have to include the 5^ th moment too, and then solve a 9^ th-order polynomial equation <cit.>. 
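The construction of F(λ) and its moments can be summarized in code. The sketch below mimics steps (d)-(f): two half-Gaussians with separately fitted variances are concatenated at the mode, each half carries area 1/2, synthetic points are generated on a fine grid, and the normalized moments are evaluated by direct integration. The variances used here are placeholders standing in for the fitted values of steps (c)-(d), and the small bias corrections mentioned above are omitted for brevity.

```python
import numpy as np

def asymmetric_profile(V_left, V_right, dlam=1e-3, nsig=5.0):
    """Concatenate two half-Gaussians (variances V_left, V_right) at a common mode mu = 0,
    each carrying total area 1/2, and return (lambda grid, F(lambda))."""
    sl, sr = np.sqrt(V_left), np.sqrt(V_right)
    lam = np.arange(-nsig * sl, nsig * sr + dlam, dlam)
    F = np.where(lam < 0.0,
                 np.exp(-0.5 * (lam / sl) ** 2) / (sl * np.sqrt(2.0 * np.pi)),
                 np.exp(-0.5 * (lam / sr) ** 2) / (sr * np.sqrt(2.0 * np.pi)))
    return lam, F

def normalised_moments(lam, F):
    area = np.trapz(F, lam)                       # should be 1 up to grid error
    mean = np.trapz(lam * F, lam) / area
    central = lambda k: np.trapz((lam - mean) ** k * F, lam) / area
    V = central(2)
    return area, mean, V, central(3) / V ** 1.5, central(4) / V ** 2

# Placeholder half-profile variances (Angstrom^2); real values come from the fits.
lam, F = asymmetric_profile(V_left=4.0, V_right=12.0)
area, mean, V, S, K = normalised_moments(lam, F)
print(f"area = {area:.4f}  mean = {mean:+.3f}  V = {V:.3f}  S = {S:+.3f}  K = {K:.3f}")
```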
§.§ Toward an Analytic Solution With the parameters μ, V, S, and K determined by the above procedure in each phase bin around the orbit of the binary system, we proceed to decompose the skewed and kyrtic distribution function F(λ) (see, e.g., Figure <ref>c) into two partially overlapping Gaussians with means μ_i, variances V_i, and corresponding normalized moments S_i=0 and K_i=3, where i=1, 2, respectively, and μ_1 ≥μ_2 by design. Having adopted p=1/2 for the weights of the mixture, we are called to solve only a cubic polynomial equation, an analytically tractable endeavor, which we describe below. Considering the first four equations in the nonlinear system of equations (2.10) given by <cit.> and using p=1/2, S_i=0, and K_i=3, we reduce the Gaussian solution set {μ_±, V_±} (where ± corresponds to i=1, 2, respectively) to obeying the following system of equations: 0 = δ_1 + δ_2 ;  2V = V_1 + δ_1^2 + V_2 + δ_2^2 ;  2S = δ_1(3V_1+δ_1^2) + δ_2(3V_2+δ_2^2) ;  2K = 3V_1^2 + 6V_1δ_1^2 + δ_1^4 + 3V_2^2 + 6V_2δ_2^2 + δ_2^4 ; where δ_i = μ_i - μ for i=1, 2. The above equations can be reduced to a system involving a cubic equation for a new variable x≥ 0 (defined in equation (<ref>) below), viz. 6x^3 + 3(K - 3)x - S^2 = 0 , μ_± = μ ± c , c = √(Vx) = D√(x) , and V_± = V(1 - x ± S/(3√(x))) , where D≡ V^1/2 is the standard deviation of the original input distribution. These equations also imply some important auxiliary relations between parameters, viz. μ_1 + μ_2 = 2μ ,  μ_1 - μ_2 = 2c ≥ 0 , and x = (c/D)^2 ≥ 0 . We see that x is dimensionless, since c and D both have dimensions of [length]. Equation (<ref>) can be solved analytically for the nontrivial case, where S≠ 0. Depending on the parameters S and K, it has either one real positive root or three real roots, two of which are negative. Thus, a solution to our problem (a positive real root) always exists and it is unique. Figure <ref> shows an example in which we set S^2 =1 in equation (<ref>) and we plotted the cubic polynomial y(x) = 6x^3 + 3(K - 3)x - 1 for various values of K. The caption describes the various cases involving the zeroes of y(x). The general equations that we used to obtain the critical points shown in Figure <ref> are obtained from the cubic function of equation (<ref>), viz. y(x) = 6x^3 + 3(K - 3)x - S^2 , as follows: (a) The critical point (1, 0) is obtained directly from y(1)=0. We then find that it occurs for K=1 + S^2/3 (K=4/3 in Figure <ref>). (b) The inflection point at x=0 is obtained from y^''(0) = 0. We also find that y^'(0) = 0 for K=3, as seen in Figure <ref>. (c) The real double root at negative x-values is obtained by solving simultaneously the system of equations { y(x)=0, y^'(x)=0 }. We then find that x = -(S^2/12)^1/3 and K=3 - (3S^4/2)^1/3 (x=-0.43679 and K=1.8553 in Figure <ref>). (d) The zero of the K=3 case is obtained easily from 6x^3 - S^2=0. We find that x = (S^2/6)^1/3 (x=0.55032 in Figure <ref>). (e) The zero of the K=2, S^2 = 1 case (x=0.83624), given in the caption of Figure <ref>, is found analytically by solving the particular cubic equation 6x^3 -3x -1 = 0. §.§ The Analytic Solution The “discriminant” of equation (<ref>) takes the form d = 6(K-3)^3 + 9S^4 , and d>0 for leptokyrtic distributions (“slender” ones with K>3) and mesokyrtic distributions (“middle” ones with K=3), such as those we obtained for the Heii line of IC10 X-1. For d<0, the cubic equation has two negative real roots and one positive real root; a short numerical check of these properties, and of the decomposition they lead to, is given below.
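Before turning to the closed-form solution, the following minimal Python sketch (with illustrative input moments, not the measured IC10 X-1 values) solves the cubic for the positive root x, forms the two Gaussian components via μ_± and V_±, and then recombines them to confirm that the equal-weight mixture reproduces the input variance, skewness, and kyrtosis.

```python
import numpy as np

# Illustrative input moments (assumed, not the measured IC10 X-1 values): mean mu,
# variance V, and normalized skewness S and kyrtosis K of the combined profile.
mu, V = 4686.0, 4.0            # Angstrom, Angstrom^2
S, K = 0.5, 3.2                # dimensionless; they satisfy S^2 < K - 1 and K >= 3

# Positive real root of 6x^3 + 3(K - 3)x - S^2 = 0.
roots = np.roots([6.0, 0.0, 3.0 * (K - 3.0), -S**2])
real = roots[np.abs(roots.imag) < 1e-9].real
x = float(real[real > 0][0])

# Equal-weight decomposition into two Gaussians.
c = np.sqrt(V * x)                                  # half-separation of the means
mu1, mu2 = mu + c, mu - c
V1 = V * (1.0 - x + S / (3.0 * np.sqrt(x)))
V2 = V * (1.0 - x - S / (3.0 * np.sqrt(x)))

# Consistency check: central moments of the p = 1/2 mixture, with delta_i = +/- c.
m2 = 0.5 * (V1 + V2) + c**2
m3 = 1.5 * c * (V1 - V2)
m4 = 1.5 * (V1**2 + V2**2) + 3.0 * c**2 * (V1 + V2) + c**4

print(f"x = {x:.4f}  ->  mu_1, mu_2 = {mu1:.3f}, {mu2:.3f}   V_1, V_2 = {V1:.3f}, {V2:.3f}")
print(f"recovered V = {m2:.4f}, S = {m3 / m2**1.5:.4f}, K = {m4 / m2**2:.4f}")
```

For these example values the recovered moments match the inputs to machine precision, which is simply the statement that the cubic encodes the full moment system of an equal-weight Gaussian pair.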
The critical case d=0 corresponds to the double-root case described in  <ref>, item (c) above. In such platykyrtic (“broad” top, light tails) cases, the signal cannot be composed of a mixture of two Gaussians: no matter how the two Gaussians are arranged, close together or far apart, they cannot make the top of the mixture broad and its tails thin at the same time. When d > 0 in equation (<ref>) (i.e., for K > 3 - (3S^4/2)^1/3; cf.  <ref>, item (c) above), another intermediate variable appears to dominate the solution, viz. Q = 3S^2 + √(d) , so that the solution of equation (<ref>) takes the relatively compact form x = (Q/36)^1/3 - ((K-3)^3/(6Q))^1/3 , where, in general, we expect that 0 < x ≤ (S^2/6)^1/3 (for K≥ 3) in this work. Yet another, equivalent form of this solution turns out to be more convenient for computations: x = (S^2/12)^1/3[ (1 + √(1 + g))^1/3 - (g/(1 + √(1 + g)))^1/3] , where g = (2/3)(K-3)^3/S^4 . As stated above, here S≠ 0, but the normalized kyrtosis could potentially take the value K=3, in which case we deduce that g = 0 and x = (S^2/6)^1/3. We also find that, as g→ 0, the combination of terms in the square brackets tends to 2^1/3[1 - (g/4)^1/3 + O(g)] and then x≈ (S^2/6)^1/3 [1 - (g/4)^1/3] . Since x<1 in our problem, the normalized skewness should then be restricted approximately in the interval S^2 < 6 for K≈ 3, but this range turns out to be too wide. A much more accurate constraint on S^2 can be obtained from the momentous condition that S^2 < K - 1 , proven by <cit.> for any distribution. For K = 3, this inequality predicts emphatically that S^2 < 2, thus we expect that S∈(-1.414, +1.414) in our problem as well. §.§ Significance of the Output Parameters With x determined from equation (<ref>) or equation (<ref>), we can proceed to find the means μ_± (equation (<ref>)) and the variances V_± (equation (<ref>)) of the two Gaussian signals in the mixture that produces the observed weak Heii line in IC10 X-1. The main results are listed in Table <ref>, where the means of the two Gaussian lines are shown relative to the rest-frame wavelength of 4686 Å. Before we interpret these results, we should revisit the fundamental parameters discussed in the subsections above, and assign physical meaning to what is measured and derived in this analysis. The pivotal parameter c (equations (<ref>) and (<ref>)-(<ref>)) comes first in this deeper examination. According to equation (<ref>), c = (μ_1 - μ_2)/2 ≡Δμ/2≥ 0, thus c is one-half of the separation Δμ of the two Gaussian means. We therefore think of c as an unnormalized “standard deviation” of the two means, and we define a dimensionless separation distance D_μ of the means by D_μ≡Δμ/(2D) = c/D = √(x) , where D = V^1/2. Obviously then, the positive root of the cubic equation (<ref>) provides the separation distance squared (x=D_μ^2). Furthermore, since x<1, then 0≤Δμ < 2D (although these limits are too wide), where we emphasize again that D is the standard deviation of the original mixed sample. Finally, we note that the inverse of D_μ, i.e., 1/√(x), resembles the well-known “coefficient of variation CV,” which is equal to D/μ for the original sample. A surprising realization concerning the equations of  <ref> is that, just like equation (<ref>), they all practically “beg” to be normalized to the variance V of the original skewed and kyrtic data set, and not to a combination of the derived Gaussian variances V_1 and V_2.
This approach has not been implemented previously in a statistical context, probably because people do not generally feel comfortable mixing input and output parameters in the same normalized expression. We undertook a literature search, and we found one work <cit.> in which the original sample's V was used as a normalization factor in the proposed bimodality index k, which in our notation would read k = (V_1+V_2)/(2V). Based on the above elaboration, we also define a dimensionless separation distance D_V of the variances of the two Gaussians by D_V≡ (V_1 - V_2)/(2V) = S/(3√(x)) = S/(3D_μ) , and a dimensionless bimodality index B_k of the variances <cit.> of the two Gaussians by B_k≡ (V_1 + V_2)/(2V) = 1 - x = 1 - D_μ^2 . The separation distances D_μ and D_V (equations (<ref>) and (<ref>)) are plotted versus ϕ∈[0, 1] in Figure <ref>, along with the skewness S of the original data set. We note the following properties of the above relations: (a) When x=0 (or D_μ=0), then S≡ 0 necessarily, and the last two fractions in equation (<ref>) become indeterminate; then, D_V, which may be nonzero, can only be determined from the first equality in equation (<ref>). In this case, μ_1 = μ_2 = μ, Δμ = c = 0, and B_k = 1. (b) Equation (<ref>) shows that D_V < 0 when V_1 < V_2 (i.e., when the larger mean is associated with the smaller variance). Furthermore, S∝ (V_1 - V_2)/V, implying that the sign of (V_1 - V_2) is determined by the sign of the skewness S of the initial distribution. (c) There is no need to define a bimodality index for the means μ_1 and μ_2, analogous to B_k in equation (<ref>). Such an index would be equal to μ/D, the inverse of the coefficient of variation CV≡ D/μ that describes the original distribution. Naturally then, CV also describes the same sample after its decomposition into two overlapping Gaussian distributions. (d) The original CV is the harmonic mean of CV_± = D/μ_±, where ± corresponds to i=1, 2, respectively; equivalently, 1/CV_+ + 1/CV_- = 2/CV for the inverse coefficients of variation of the two decomposed Gaussian distributions, with values given by 1/CV_± = 1/CV ± D_μ . (e) Suppose one can determine μ_1 and μ_2 through some other means. For example, if the original distribution is clearly bimodal, then it seems reasonable to assume that the two visible modes represent the two Gaussian means. Then, there is no need to go through the long procedure that we described above. The separation distance D_μ is determined from the first equality in equation (<ref>), and the variances are readily determined from equation (<ref>) with x=D_μ^2, viz. V_± = V(1 - D_μ^2 ± S/(3D_μ)) . §.§ Modality In a final set of diagnostic tests, we should also consider the modality of the mixture of the two derived Gaussian distributions (despite the fact that we can graph the combined distribution and see for ourselves whether two distinct modes emerge in the mixture). We consider 5 diagnostic test values of modality (for equal weights p=1/2) that we can obtain rather easily from the input parameters and the results listed in Table <ref> and in Table <ref> given below: (1) A likelihood ratio test for bimodality <cit.>, based on the value of d_1 = D_μ√(V/(D_1 D_2)), where D_i=√(V_i) (i=1, 2). The mixture of the two Gaussians is unimodal if d_1≤ 1, or if d_1>1 and ln[d_1-(d_1^2-1)^1/2] + d_1(d_1^2-1)^1/2≤ 0 . (2) Ashman's d_2 statistic <cit.> in astrophysics, where d_2 = 2D_μ (1 - D_μ^2)^-1/2.
A clear separation of the components is expected when d_2 ≥ 2, or equivalently, when D_μ > 1/√(2). (3) A bimodal separation index d_3 <cit.>, defined by d_3 = D_μ[ D/(D_1+D_2)]. This index moves above 1 only if the two Gaussian distributions essentially do not overlap. (4) The bimodality index B_k=d_4 of <cit.>, based on variances and discussed in  <ref> above. In our notation, it takes the form d_4 = 1-D_μ^ 2 (equation (<ref>)), so if the separation distance between the two means is large, the index will assume low values. (5) A bimodality coefficient d_5 <cit.>, based on the seminal work of <cit.>, and defined by d_5 = ( S^2 + 1)/ K, where d_5∈(0, 1) (cf. equation (<ref>)). A bimodal distribution is expected to have light tails (low kyrtosis) and/or a strongly asymmetric form (high skewness), in which case the value of d_5 will be high. On the other hand, if d_5≤ 5/9 (the value for a uniform distribution), then the distribution is certainly unimodal <cit.>. We calculated the above diagnostic quantities d_j (j=1-5) from the input parameters and the results listed in Table <ref> (and including additional values at each quarter phase from Table <ref> shown below). We collect these results in Table <ref>, where we also show the separation distances D_μ and D_V, the conditions for a unimodal distribution, and the predictions of the diagnostic quantities. All diagnostics indicate that the derived mixture is unimodal in each phase bin. Naturally, we have confirmed the above results by plotting the mixture of the two Gaussian distributions for each phase bin, which resembles the skewed Heii 4686 lines observed in the spectra of IC10 X-1. The diagnostic analysis then serves to verify the robustness of the five criteria used for theoretical modality predictions. §.§ Signal Decomposition to Two Gaussian Emitters We can now follow the two distinct Gaussian components at various phases, as they move around and switch positions relative to one another. An illustration is shown in Figure <ref>, where the two emitting components are shown in cyan and purple colors, respectively. The utilized spectra have been averaged in order to capture nearly equidistant phases separated by Δϕ = 0.25 (see also Table <ref> for the decomposed values corresponding to the four frames of the figure). Phases are roughly a quarter phase apart because stacking spectra does not produce exact quarter phases. This is noticeable in the quarter phases of Figure <ref>, where the emission is systematically redshifted, with the implication that the average phase values are ϕ≲ 0.25 and ϕ≳ 0.75, respectively. The cyan component does however show blueshifted tails, especially at ϕ=0.75. We have incorporated these approximations in Figure <ref> below, where we show schematic diagrams of the two emitting components around the binary orbit. The purple component is clearly more slender and taller than the cyan component, which is always spread out in λ-space (the area under each curve is equal to 1). These characteristics support the hypothesis that the two components originate in different regions of the WR wind outflow, and that they move independently because they switch positions roughly every quarter phase. We believe that the extended (cyan) component originates in the expanding wind <cit.>; whereas the slender (purple) component comes from a hotspot (HS) and the accretion stream impacting the accretion disk that has formed around the BH <cit.>. 
This is because when the shielded wind expands along our line of sight (LOS) (ϕ=0 and 0.5), we expect to see a large dispersion of velocities originating at various distances from the WR star. Specifically, we imagine the orbital configuration of the two emitters evolving as follows (see also Tables <ref> and <ref>, and the schematic diagrams drawn with precision in Figure <ref>): (a)Phase ϕ=0.5: Both components show maximum blueshifts and the wind velocities show a much larger dispersion (D_ wind/D_ HS=2; Table <ref>). The shielded wind is directed toward us, and the HS is located close to 9:00 on a 12-hour wall-clock centered on the BH (top view of Figure <ref>). Figure <ref> also shows that the BH is 10^∘ before coming to mid-eclipse because the average binary phase depicted in this panel is actually ϕ=0.44. (b)Phase ϕ=0: The shielded wind shows here maximum redshift (Table <ref>), as was expected. The slender stream/HS component shows a smaller redshift (Figure <ref>). Then, the HS must be located close to 1:00 on its clock (Table <ref>: cos^-1(0.980/2.21)≃ 64^∘, i.e., 2 hr behind the 3:00 mark, and no more than a few minutes off of the 1:00 mark). This configuration presents a ∼40% larger dispersion of velocities, and a ∼40% larger redshift to the shielded wind. (c)Phase ϕ=0.25: The shielded sector of the wind is nearly orthogonal to the observer, yet it manages to produce a small mean redshift (Table <ref>). The HS shows about the same redshift as at ϕ=0, thus it has not moved too far off from its previous location. Note however that its orbital period does not have to be 1:1 with the binary period; the HS could have done a full orbit returning back to about the same location. (See Appendix <ref> for an in-depth investigation of the orbital motion of the HS.) In this phase, the dispersion of wind velocities is about twice as much now as that of the HS; we note that the ratio D_ wind/D_ HS is ≃2 in the next two quarter phases as well (see Table <ref>, where the last two columns clarify the sources in each phase). (d)Phase ϕ=0.75: The shielded sector of the wind is again nearly orthogonal to our LOS, and it manages again to produce a small mean redshift (Table <ref>). On the other hand, the HS shows its largest redshift (∼74% of the maximum blueshift), which places it close to 4:30 on its clock (Table <ref>: cos^-1(1.63/2.21) ≈ 45^∘, i.e., 1.5 hr ahead of the 3:00 mark, and no more than 5 minutes off of 4:30). In this phase, characterized by the poorest quality spectra, we find only one striking difference relative to phase 0.25: the velocity dispersions of both the HS and the wind have more than doubled (D_0.75/D_0.25≈ 2.5, as obtained from the values listed in Table <ref>). We summarize the counterclockwise movements of the HS around its clock during the BH orbital quarter phases from ϕ=0 to ϕ=1, respectively. In sequence, the HS is located roughly at 1:00, 0:50, 9:00, 4:30, and returns back to 1:00. Its orbit is somewhat eccentric as would be expected for an accretion stream that impacts the outer accretion disk of the BH. These timings allow us to determine approximately the axes of the ellipse, and dynamical theory allows us to work out the kinematics of the HS. We undertake this task in the next subsection, where our investigation comes to fruition. §.§ The Eccentric Stream/Hotspot Orbit The HS appears to be moving quite slowly (1:03→0:53) within the first quarter of the orbital phase; and much faster within the third quarter (9:00→4:25). 
These extremes allow us to determine the axes of the ellipse: the major axis is along the 0:42-6:42 line; periastron is at 6:42 for the orbiting HS, and apastron is at 0:42 due to the slow HS motion around the top of the clock; the minor axis then is along the 3:42-9:42 line that connects the covertices of the ellipse. Therefore, the apastron is tilted by -21^∘ (i.e., clockwise) from 12:00. Since the HS clock does not rotate about the BH and the 6:00 mark always faces the observer, this angle is also the inclination of the apsidal line to our LOS. The geometric properties of the above ellipse and the kinematics of the orbiting HS are collected in Table <ref>. We use units such that the specific angular momentum h of the HS and the standard gravitational parameter GM_ BH <cit.> are both equal to 1, and times are expressed in hours. This normalization scheme also causes the semilatus rectum ℓ = b^2/a <cit.> to be 1, giving us the useful relation a=b^2 between the semiaxes a and b<a of the ellipse. The condition b<a , imposed by the kinematics (Doppler shifts) allows us to pinpoint the orbital period of the HS with surprising accuracy (P_ HS=6.96 hr; see Appendix <ref>). This value implies that the HS executes 5 full orbits about the BH for every full orbit of the BH (i.e., 11/4 orbits per quarter of the binary phase). We also see in Table <ref> that the eccentricity of the orbit is low (b/a≈ 0.97) and that the ratio of speeds V_ max:V_ cov:V_ min = 1.69:1.30:1 . Thus, the HS is orbiting faster(slower) by a factor of 1.3 as it traverses each of these locations on the upper(lower) half of the ellipse. As a consequence of Kepler's second law, this speed differential causes the HS to spend 2.91 hr between covertices when going around periastron (0.418 P_ HS), compared to 4.05 hr when going around apastron (0.582 P_ HS). § DISCUSSION §.§ Winds and Accretion Streams in HMXBs Relatively strong and broad emission lines, such as the Heii 4686 line, appear in WR stars due to their pronounced stellar winds, and these spectral features most likely originate in regions within the outflowing winds, hence containing no tangible information about the binary orbits of such massive stars <cit.>. So, it seems that the dynamical information is encoded in the powerful winds emanating from these stars ( <ref> and <ref>). The stellar wind structure is quite complex in WR stars because of its enormous spatial extent and the presence of a nearby compact accreting object (presumably a BH) that intensifies and complicates the dynamics and the emission processes in both optical and X-ray wavelengths. <cit.> and <cit.> carried out theoretical investigations of stellar-wind velocities in isolated WR stars. Several other authors <cit.> studied the wind velocities of WR stars in HMXBs, such as IC10 X-1 and NGC300 X-1 <cit.>. In the theoretical studies <cit.>, the wind velocity v_ w(r) at distance r from the WR star is generally expressed by the relation v_ w(r) = v_0 + (v_∞ - v_0)(1 - R/r)^ β , where v_0 is the outflow speed at the surface of the WR star, R is its photospheric radius, and β≈ 0.8-1 <cit.>. For the terminal (asymptotic) velocity v_∞ in IC10 X-1, <cit.> suggest a characteristic value of 1750 km s^-1, corresponding to a rest-frame redshift of z_ CC=5.83×10^-3. The largest rest-frame shift, |z|_ HS = 2.21×10^-3, that we measured is the blueshift of the HS at 9:00 (Table <ref>; Figure <ref>), which is ∼0.4 z_ CC corresponding to a typical speed of ∼700 km s^-1. 
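These conversions are simple to check numerically. The short sketch below (with an assumed surface outflow speed v_0 and exponent β, which the text only brackets) turns the quoted redshifts into speeds and evaluates the β-law wind profile at a few radii.

```python
# A quick numerical check of the redshift-to-speed conversion and of the beta-law
# wind profile quoted above; v0 and beta are assumed illustrative values.
C_KMS = 299_792.458                    # speed of light in km/s

z_cc = 5.83e-3                         # redshift equivalent of v_inf = 1750 km/s
z_hs = 2.21e-3                         # largest rest-frame shift measured for the HS
v_hs = z_hs * C_KMS
print(f"v_HS = {v_hs:.0f} km/s  (~{z_hs / z_cc:.2f} of the terminal wind speed)")

def wind_speed(r_over_R, v0=20.0, v_inf=1750.0, beta=1.0):
    """Beta-law wind velocity v_w(r) = v0 + (v_inf - v0) * (1 - R/r)**beta."""
    return v0 + (v_inf - v0) * (1.0 - 1.0 / r_over_R)**beta

for r in (1.5, 2.0, 5.0, 10.0):
    print(f"r = {r:4.1f} R_WR : v_w = {wind_speed(r):6.0f} km/s")
```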
One may expect higher speeds in the accretion stream, in case the HS is not precisely located at 9:00 at orbital phase ϕ=0.5 and a larger velocity vector is projected on to our LOS (Appendix <ref>); and also because of dispersion in the component velocities (Figure <ref>). In any case, we do not expect to observe speeds much higher then ∼1000 km s^-1 at optical wavelengths (Table <ref>). These assertions will be put to the test in Appendix <ref> (see also Table <ref>), where we describe the unique determination of the orbital period of the HS about the BH. §.§ WR+BH Binary System IC10 X-1 In the HMXB system IC10 X-1, the accretion process involves the capture of the stellar wind by the compact object and the creation of an accretion disk around the companion BH. The origin of the Heii 4686 emission line detected in this system can be the shadowed sector of the stellar wind <cit.>, or the accretion stream that impacts the outer accretion disk of the compact object <cit.>, or both. In this study, we have decomposed the skewed and kyrtic Heii emission from IC10 X-1 at various phases into two Gaussian components that presumably emanate from different wind regions around the binary <cit.>. Figure <ref> and Table <ref> show that the two emitters switch positions relative to one another at different phases, as the skewness of the combined signal switches from positive to negative and back, roughly twice during each binary orbit (Figure <ref>). The skewness of the Heii 4686 line is created by the sleek (purple) component, as it dances around the more extended (cyan) component identified with wind emission. As the shadowed sector of the wind goes about its orbital phases, the purple component, identified with emission from the accretion stream/HS <cit.>, executes its own orbital motion around the BH. A consistent model that shows these behaviors and precisely-scaled velocity vectors in each binary phase is drawn in Figure <ref>. In the four frames of the figure, we also assign to the HS accurate clock times (in brown color) drawn from a 12-hour wall-clock attached to the BH/accretion disk. These times help us describe the location of the HS in time, as it executes its own motion about the BH. So, for clarity, we use phase values for the binary orbit and BH-clock cycles for the HS. In the orbital quarter phases, where we expected nearly zero redshifts, the wind managed to show small nonzero redshifts, as compared to the "eclipse phase" in which both components are strongly blueshifted despite the average phase being ϕ=0.44 (the BH is ∼10^∘ off of superior conjunction with respect to the observer in the ϕ=0.5 frame of Figure <ref>). But, to our surprise, the HS displays sizeable redshifts at the same times (in fact, maximum redshift at ϕ = 0.75). It is obvious that this component is executing an independent motion about the BH and that its speed is varying along its orbit. In Table <ref>, we list the blueshifts (negative) and redshifts (positive) of the Heii 4686 line that we have deduced around a full orbit of the binary. At ϕ=0, we caught the shielded wind moving away from the observer (as was expected), at the same time that the HS shows a relatively small redshift. At ϕ≈0.5, with the BH at 10^∘ (i.e., 1 hr) before mid-eclipse, both components show maximum blueshifts. §.§ Hotspot Orbital Period and Distance from the BH Imagine another 12-hour wall-clock overlaid on to the binary orbit and centered at the CM in Figure <ref>. 
The BH is at 6:00 (inferior conjunction) at ϕ=0 and moves counterclockwise by 3 clock hours during each quarter of its phase. The overall orbit of the HS appears to be at least as fast as the binary orbit, implying a stream/HS period of P_ HS=34.8 hr <cit.> in the model depicted in Figure <ref>. But this does not have to be the actual period of the HS, and most likely it is not (see Appendix <ref>): During a phase change Δϕ=0.25, the HS can in principle complete any number of full cycles plus a fraction of a cycle that takes it to its next location. This extra fractional cycle is the only condition imposed on the HS by the results of the data analysis. In fact, the model cannot distinguish HS prograde versus retrograde rotation either: were the HS located at 3:00 on its own clock at ϕ=0.5 and moving clockwise, the observed Doppler shifts would have been the same as those quoted in Figure <ref>. Because we do not know with certainty the masses of the two stars in the binary <cit.>; in order to make some progress; we had to carry out a separate analysis for the viable values of the orbital period of the accretion stream/HS and its distance from the BH. We summarize this dynamical analysis in Appendix <ref>, where we calculate these quantities assuming various stellar masses and relying on the insights gained from the model depicted in Figure <ref>. An important contribution to this investigation comes from an analysis of the time-dependent velocity dispersion of the stream/HS (phase-dependent standard deviations are listed in λ-space in Table <ref>), presumably caused by shear due to the tidal forces exerted by the BH itself. § SUMMARY OF RESULTS Our investigation of the Heii 4686 emission line from the WR+BH binary system IC10 X-1 produced the following results: (1)The Heii 4686 line, although weak in the Gemini/GMOS spectra, is definitely skewed (skewness S≠ 0) and kyrtic/curved (kyrtosis K>3). (2)These asymmetric optical properties likely arise from a mixture of two approximate Gaussian emitters that do not emanate from the bulk of the WR star. We are convinced that one emitting component lies in the extended wind of the WR star, in particular, in the shadowed sector that is shielded from the X-rays emanating from the BH companion <cit.>; whereas the other (always less dispersed) component originates from a HS in the BH's outer accretion disk, which is impacted by the backflowing accretion stream that develops from a stagnation point behind the BH <cit.>. (3)Our decomposition of the overall signal into two Gaussian components reveals how these two emitting regions are evolving in time, as the binary is orbiting around its CM (Figures <ref> and <ref>; Tables <ref> and <ref>). Figure <ref> depicts quite accurately that both components show maximum blueshift near mid-X-ray eclipse ϕ≈0.5—when the HS is at 9:00 on a 12-hour wall-clock fixed on to the BH/accretion disk for the sake of describing the motion of the stream/HS; and that only the wind shows maximum redshift at the BH's inferior conjunction (ϕ=0) relative to the observer. (4)The HS appears to be in a low-eccentricity orbit about the BH. At ϕ=0, it is located near 1:00 on its clock (assuming counterclockwise motion), and, for this reason, it shows a relatively small redshift. On the other hand, it displays maximum redshift at ϕ=0.75, after it has come around the clock to 4:30, on its way back to 1:00. Our efforts to investigate the kinematics and the dynamics of the stream/HS are described in  <ref> and Appendix <ref>. 
§ ACKNOWLEDGEMENTS We appreciate the comments and suggestions made by the referee that helped us produce a precise model of the binary system (Figure <ref> and Table <ref>). We thank UMass Lowell and the Lowell Center for Space Sciences and Technology for supporting this research. This work was supported in part by NSF-AAG grant 2109004. The research was also supported in part by the international GEMINI Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the GEMINI partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America. Our investigation was enabled by observations made from the Gemini-North telescope, located within the Maunakea Science Reserve and adjacent to the summit of Maunakea. This investigation has made use of Astropy (www.astropy.org), a community-developed core Python package and an ecosystem of tools and resources designed specifically for astronomy <cit.>. § DATA AVAILABILITY The raw data were downloaded from the GEMINI telescope data archive. The extracted spectra are available in this repository <cit.> in FITS format. Additional processed data and products (extracted spectra in ASCII or pdf formats, tables, and figures) can be obtained by contacting the corresponding author. § STREAM/HS ORBITS AND DISPERSIONS FOR VARIOUS ASSUMED BH MASSES AND HS PERIODS We investigate dynamical scenarios for an outer accretion stream/HS orbiting the BH faster than the BH is orbiting the WR star <cit.>, such as the unique solution with P_HS=6.96 hr, whose geometric properties and kinematics are listed in Table <ref> above. This particular scenario and P_HS-value are singled out from the two-component model depicted with the utmost precision in Figure <ref>, along with the dynamical considerations that follow. §.§ The Unique Orbit of the Stream/HS about the BH Table <ref> lists some possible periods and physical characteristics of the orbiting HS according to the diagram in Figure <ref>, for a BH mass of 17M_⊙ and 32M_⊙, respectively. The possible number of HS cycles during each quarter binary phase (Δϕ=0.25) is shown in the second column of the table. Multiplication by 4 produces column 3, the number of complete HS cycles (1, 5, 9, 13) during one complete binary orbit. In the following calculations, we solve first for a roughly circular HS orbit around a BH using the familiar Keplerian and kinematic equations V = (G M Ω)^1/3 and R = V/Ω , where V and R are the HS orbital speed and the mean distance from the BH, respectively, G is the Newtonian gravitational constant, and Ω≡2π/P_HS is the angular velocity of the HS (column 3 in Table <ref>). Speed V in Table <ref> characterizes the accretion stream/HS, which is produced from a stagnation point behind the BH <cit.> and the accelerated backflow of the gas returning toward the accretion disk <cit.>. HS distance R is given in units of the solar radius R_⊙ = 6.957×10^5 km; it should be compared to the separation of the two stars, a=20.0 R_⊙, and the volumetric size of the BH Roche lobe, (R_L)_0.5BH=6.41 R_⊙, for a mass ratio of M_BH/M_WR=0.5 <cit.>; correspondingly, for the case with M_BH/M_WR=1 <cit.> in Table <ref>, we find that a=21.6 R_⊙ and (R_L)_0.5BH=8.17 R_⊙.
Here, we also find that the WR star (for R_WR≈ 8 R_⊙) fills its own Roche lobe (R_WR/(R_L)_0.5WR = 0.98), as compared to the former case, in which R_WR/(R_L)_0.5WR = 0.91. From the top row of Table <ref>, we see that an HS:BH periodicity of 1:1 (as in the illusion seen in Figure <ref>) is not acceptable, since the HS distance R≳ 2(R_L)_0.5BH in both cases, and the BH accretion disk/stream would then cross well inside the Roche lobe of the WR star (for which (R_L)_0.5WR=8.80 R_⊙ and (R_L)_0.5WR=8.17 R_⊙, respectively). Thus, the HS has to be faster than the binary, and model 1 in Table <ref> is discarded. Models 3 and 4 are also discarded, but for a different reason: in the last column of Table <ref>, we have also listed the ellipticity b/a of the orbit of the stream/HS about the BH (see  <ref>), as this was deduced from the nonequidistant clock timings of the HS in the binary orbital phases of Figure <ref>. Obviously, models 3, 4, and subsequent models with faster HS orbits must be discarded because they get the axis of the ellipse wrong. Therefore, we are left with only one viable model of the HS orbit, model 2 in Table <ref>. §.§ Properties of the Unique Model 2 Listed in Table <ref> The physical and geometric properties of model 2, the only viable model of the stream/HS orbiting about the BH, are summarized in Table <ref>. The most important dynamical characteristics of the stream and the BH have been pointed out by asterisks and have been discussed in the footnotes to the table. The geometric properties in the rightmost columns of the table indicate that the stream/HS orbit is only mildly elliptical. This allows us to skip iterating on the circular model described in the main text (Figure <ref>) to apply ellipticity corrections to the orbit of the HS. §.§ Wrap-up of Procedures and the Two Unbroken Symmetries of the Model In the work that we described above, we undertook the following cumbersome steps concerning observations, data reduction, and data analysis and interpretation of the Heii 4686 emission line emanating from the high-mass X-ray binary IC10 X-1: (1) We dealt with low-quality spectra of the source, the kind that most people would not consider useful at all. (2) We devised a new technique to remove the continuum without losing the embedded weak signal. (3) We devised a new analytical method that exploits skewness in the line profiles and separates the signal into two different Gaussian components. (4) We observed these components moving independently and dancing around one another (Figure <ref>), as the BH+WR star binary kept orbiting, as usual, about its own center of mass. (5) We tried to interpret the motions of the two distributed components according to their mean Doppler shifts and their variances. We identified them with the shielded wind (large variances) blowing away from the WR star in the direction opposite to the BH, and with an accretion stream/HS (smaller variances) orbiting about the BH itself, just outside its accretion disk. (6) We constructed a precise model of the motions of all emitting components during various orbital phases (Figure <ref>), and the model told us that the orbit of the HS is mildly elliptical (ellipticity b/a≃0.97). (7) We determined the geometry and kinematics of the mildly elliptical orbit (Table <ref>) and the orbital period of the HS about the BH (P_HS≃ 7.0 hr). (8) Finally, we analyzed various dynamical models of the stream/HS orbiting the BH, but we could not constrain the mass of the BH or the size of its accretion disk (Section <ref>).
We have learned to execute these steps with minimum hardship, and to obtain useful results that make physical sense. Our next target is IC10 X-1's "twin" X-ray source NGC300 X-1, for which high-quality spectra of the Heii 1640 emission line are readily available <cit.>, and the measured Doppler shifts are quite different from those in IC10 X-1. Two obstacles still remain that prevent a categorical determination of the stream/HS orbit in the IC10 X-1 binary system: (a) We cannot resolve a prograde versus a retrograde motion of the stream/HS around the BH ( <ref>). In Figure <ref>, the HS is assumed to rotate in the counterclockwise direction about the BH, just as the BH does about the center of mass of the binary. This is prograde motion. For the retrograde motion of the HS, we relocate it from 9:00 to 3:00 on its clock at ϕ=0.5 and imagine that it rotates clockwise. (b) We have no way of knowing whether the HS at ϕ=0.5 is indeed close to 9:00, as depicted in Figure <ref>. We can place the HS anywhere on the left semicircle of its orbit, and it will still show a blueshift upon projection onto the LOS. Because the HS shows its maximum Doppler shift at this phase, we assumed that its velocity vector is parallel to our LOS, thus we do not just see a large projection of an even larger invisible velocity vector. The former symmetry creates ambiguity as to the location of the HS in each orbital phase. But there exist only two possibilities. The latter symmetry creates many more ambiguities. For example, we can easily match the large velocities (827 and 1021 km s^-1) of the two (very different) dynamical models listed in Table <ref> to the observed projected HS speed of 662 km s^-1 at ϕ=0.5 by relocating the HS 1hr+14min and 1hr+39min, respectively, away from 9:00 in either direction. The only good news in this conundrum is that the major axis of the elliptical orbit then rotates away from its -21^∘ orientation to the LOS by only ±13^∘ and ±15^∘ for M_BH=17 M_⊙ and 32 M_⊙, respectively.
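To close this appendix, the circular-orbit relations and the projection argument above are easy to reproduce numerically. The sketch below (with q, a, and P_HS taken from the text, and the Eggleton approximation assumed for the Roche lobes) recovers the model speeds of 827 and 1021 km s^-1 and the clock-hour displacements quoted above.

```python
import numpy as np

G = 6.674e-11                              # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8           # kg, m

def circular_orbit(m_bh_msun, p_hs_hr):
    """Circular Kepler orbit about the BH: V = (G M Omega)^(1/3), R = V / Omega."""
    omega = 2.0 * np.pi / (p_hs_hr * 3600.0)
    v = (G * m_bh_msun * M_SUN * omega) ** (1.0 / 3.0)
    return v / 1e3, (v / omega) / R_SUN    # speed in km/s, radius in R_sun

def roche_lobe(q, a_rsun):
    """Volume-equivalent Roche-lobe radius (Eggleton approximation) for mass ratio q."""
    return a_rsun * 0.49 * q ** (2 / 3) / (0.6 * q ** (2 / 3) + np.log(1.0 + q ** (1 / 3)))

# Model 2 (P_HS = 6.96 hr) for the two assumed BH masses and separations from the text.
for m_bh, q, a in ((17.0, 0.5, 20.0), (32.0, 1.0, 21.6)):
    v, r = circular_orbit(m_bh, 6.96)
    print(f"M_BH = {m_bh:4.0f} M_sun : V = {v:6.0f} km/s, R = {r:5.2f} R_sun, "
          f"(R_L)_BH = {roche_lobe(q, a):5.2f} R_sun")

# Clock displacement from 9:00 needed for a speed V to project to the observed 662 km/s.
for v in (827.0, 1021.0):
    hours = np.degrees(np.arccos(662.0 / v)) / 30.0       # 30 degrees per clock hour
    print(f"V = {v:5.0f} km/s matches 662 km/s if the HS sits {hours:.2f} clock hours off 9:00")
```

With these numbers the stream/HS radius falls comfortably inside the corresponding BH Roche lobe, as required for the viable model.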
http://arxiv.org/abs/2307.03191v1
20230706175955
Where shadows lie: reconstruction of anisotropies in the neutrino sky
[ "Willem Elbers", "Carlos S. Frenk", "Adrian Jenkins", "Baojiu Li", "Silvia Pascoli", "Jens Jasche", "Guilhem Lavaux", "Volker Springel" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph" ]
§ INTRODUCTION Precise measurements of a near-perfect black-body energy spectrum and of a power-law spectrum of temperature fluctuations in the Cosmic Microwave Background (CMB) reveal detailed information about the state of the Universe at the time of decoupling, around t=10^5 yr <cit.>. There is strong but indirect evidence for another Big Bang fossil in the form of N_eff=2.99_-0.33^+0.34 species of fermionic particles that were relativistic when the radiation decoupled <cit.>. This is consistent with the prediction of N_eff=3.045 for the Cosmic Neutrino Background (CNB), consisting of three species that decoupled far earlier, at only t=1 s <cit.>. That these particles are indeed neutrinos could be confirmed if they were found to be non-relativistic today, given the standard prediction for the present-day neutrino temperature, T_ν=1.68×10^-4 eV, and the minimum mass, m_ν≳0.05 eV, required by neutrino oscillations for the most massive species <cit.>. Although detecting the indirect cosmological effects of massive neutrinos is challenging, this target could soon be in reach, as suggested by improved constraints on the cosmic neutrino mass fraction <cit.>. Direct detection of relic neutrinos will be more challenging still and is likely beyond our immediate capabilities. The Karlsruhe Tritium Neutrino Experiment (KATRIN) recently placed an upper bound of 9.7×10^10 on the local neutrino overdensity relative to the cosmic mean <cit.>, far greater than the density predicted in this paper and elsewhere. An experiment specifically designed for CNB detection has been proposed by the PTOLEMY collaboration <cit.>. Like KATRIN, the PTOLEMY proposal aims to capture neutrinos through the inverse β-decay of tritium <cit.>, but with targets bound to a graphene substrate to enable a larger target mass, which has its own challenges <cit.>. Other detection proposals rely on the net momentum transfer from the neutrino wind to macroscopic test masses <cit.>, absorption features in the cosmic ray spectrum <cit.>, blocking of neutrino emission from de-exciting atoms due to the Pauli exclusion principle <cit.>, or the capture of neutrinos on high-energy ion beams <cit.>. We refer to <cit.> for a detailed review of the subject. Like the CMB, the neutrino background carries both primordial or primary perturbations and secondary gravitational perturbations imprinted by the large-scale structure at late times <cit.>. Since neutrinos are massive particles, the secondary perturbations are more significant and depend on the neutrino mass and momentum, giving the background additional structure compared to the CMB. In some cases, gravitational effects may lead to slight modifications of the expected signal, and in others they open up entirely new ways of testing neutrino physics. For tritium capture experiments like PTOLEMY, the expected event rate is proportional to the local number density of neutrinos <cit.>, given by the monopole moment of the phase-space distribution. Some proposals depend on the velocity of neutrinos in the lab frame <cit.>, requiring at least the dipole moment. Other proposals require additional phase-space modelling. For instance, the orientation of the dipole is important for methods that rely on periodic or angular modulation of the capture rate <cit.>.
Pauli blocking could in principle probe the momentum distribution <cit.>. Additionally, gravitational perturbations may change the flavour <cit.> and helicity <cit.> makeup of the neutrino background, affecting the ability of experiments like PTOLEMY to distinguish between Dirac and Majorana neutrinos. To determine the prospects of current and future CNB detection proposals, we therefore need to model the phase-space distribution of relic neutrinos, including its higher-order directional perturbations. Previous studies have looked at the gravitational enhancement of the monopole moment due to the Milky Way <cit.> and nearby Andromeda and Virgo <cit.>. A very recent study also considered the gravitational influence of dark matter structures in a random (25)^3 region on the neutrino phase-space distribution <cit.>. Here, we expand on these works in several ways. First and foremost, we model the full six-dimensional phase-space distribution of relic neutrinos, taking into account perturbations imprinted on the neutrinos before they entered our galactic neighbourhood. Second, we use self-consistent cosmological simulations to accurately model the time evolution of the large-scale structure and the neutrino background. Third, we use an accurate non-linear treatment of massive neutrinos <cit.>, which includes the gravitational effects of the neutrinos themselves. Fourth, we model the large-scale distribution of matter within 200[In this expression, h is defined in terms of Hubble's constant as h ≡ H_0 / (100).] over the full sky, using observations from the galaxy redshift catalogue <cit.>. Fifth, we use a more recent estimate of the Milky Way mass from <cit.>, which is significantly lower than the value used in previous studies, depressing the effect of the Milky Way. Using our constrained phase-space simulations, we compute the expected density, velocity, and direction of relic neutrinos, as well as expected event rates for PTOLEMY. We also study the distribution of angular anisotropies, finding that local neutrino density perturbations are anti-correlated with the projected matter distribution, due to the capture and deflection of neutrinos by massive objects along the line of sight. To facilitate future analyses of the neutrino phase-space distribution, we publicly release our simulation data alongside this paper (see Appendix <ref>). The paper is organized as follows. We describe our simulation and calibration methods in Section <ref>. Our main results are presented in Section <ref>. We finally conclude in Section <ref>. § METHODS We now describe our simulation and analysis methods, starting with the details of the constrained simulations in Section <ref>, our calibration procedure for applying constraints to different neutrino cosmologies in Section <ref>, and our treatment of non-linear neutrino perturbations in Section <ref>. §.§ Constrained simulations Our analysis is based on constrained ΛCDM simulations of the local Universe. Whereas most cosmological simulations start from random initial conditions and only reproduce observations in a statistical sense, constrained simulations employ specialized initial conditions that give rise to an in silico facsimile of the observed large-scale structure. Within the precision of the constraints, objects appear in the right relative positions and with the right dimensions, enabling a one-to-one comparison with observations. 
The past few years have seen constrained simulations being used for a wide range of applications and employing a variety of methods to set up the initial conditions <cit.>. In this paper, we use a Bayesian forward modelling approach known as `Bayesian Origin Reconstruction from Galaxies' (BORG) <cit.>. This approach uses a Hamiltonian Monte Carlo algorithm to draw samples from the posterior distribution of initial conditions, given a likelihood function that connects initial conditions with observations and a Gaussian prior. The forward model consists of a Comoving Lagrangian Acceleration (COLA) code <cit.> that approximates the process of structure formation in the ΛCDM paradigm and a non-linear bias model that connects the final dark matter density field to observed galaxy positions. The Hamiltonian Monte Carlo algorithm is used to efficiently sample a high-dimensional parameter space, consisting of a grid of 256^3 initial phases, multiple bias parameters, and the observer velocity in the CMB frame. The constraints used in this paper are based on galaxies from the catalogue <cit.>. This is a catalogue of galaxy positions and redshifts, compiled from the 2MASS, 6dF, and SDSS redshift surveys, that covers the full sky out to a distance of 200. We refer the reader to <cit.> for further details on the BORG analysis of this catalogue. This analysis provides not only an accurate reconstruction of the three-dimensional density field in the local Universe, but also reproduces the masses of nearby clusters, with the notable exception of the Perseus-Pisces cluster for which the mass is biased low <cit.>. This is most likely due to a systematic error in the analysis, but could perhaps also indicate an observational issue <cit.>. Interestingly, the sibelius-dark simulation <cit.>, which is based on a similar but older BORG reconstruction, found its most massive dark matter halo at the location of Perseus. However, sibelius-dark was less accurate in other respects, such as the motion of the Local Group, which is important for our purposes here. Our work is based on nine draws from an earlier version of the chain described in <cit.>, which used ten COLA steps instead of twenty, but was identical in every other respect. We therefore expect the results to be broadly consistent. After discarding an initial burn-in portion, we selected every 432nd draw from the chain to minimize the serial correlation between consecutive draws. This sample of initial conditions allows us to estimate both the expected signal and the uncertainty in our predictions. To demonstrate the effectiveness of the constraints, we show slices of the dark matter and neutrino densities in a portion of the sky in Fig. <ref>, overlaid with galaxies (white dots). All prominent structures present in the catalogue are reproduced by the simulations, revealing the underlying dark matter filaments and surrounding neutrino clouds. Our simulations assume periodic boundary conditions in a (1)^3 cube, with the observer located at the centre. The constraints mostly cover a central sphere of radius 200 and gradually taper off beyond that. This means that sufficiently far away from the centre, the initial conditions revert to purely random fluctuations. Given that the phases are provided in the form of 256^3 grids, the constraints only cover 4 scales and larger. Fluctuations on smaller scales are unconstrained and purely random. 
Dark matter initial conditions are generated with 3LPT at z=31, using a modified version of monofonIC that adds corrections from massive neutrinos <cit.>, while the neutrinos themselves are generated with fastdf, using linear geodesic integration <cit.>. The transfer functions are computed with class <cit.>. The simulations were carried out with a version of Gadget-4 <cit.> that was modified to be bitwise reversible (see Appendix <ref>) and to add support for massive neutrinos and radiation. We use a 3^rd-order Tree-PM algorithm for the gravity calculation. Neutrinos are followed with the δ f method to minimize shot noise, boosting the effective particle number without neglecting their non-linear evolution <cit.>. We use N_cb=384^3 dark matter and baryon particles[We will treat cold dark matter and baryons as a single cold fluid and refer to it as dark matter on occasion.] and N_ν = 384^3 massive neutrino particles. In order to increase the sampling density of neutrinos locally, upon completion of a simulation, we isotropically inject an additional N=224^3∼10^7 `spectator' neutrinos at the observer location and run the simulations backwards, allowing us to trace the neutrinos back in time through the evolving large-scale structure (see Section <ref>). To ensure that the accelerations are identical in the forwards and backwards directions, spectator neutrinos contribute no forces. A final consideration is that Milky Way-sized perturbations have a characteristic length that is much smaller than 4. Hence, our constraints are not sufficient to guarantee the formation of a Milky Way at the centre. Since we expect the Milky Way (MW) to have a considerable effect on the neutrino background, we run two backwards versions of each simulation. Initially, neutrinos are only traced back through the large-scale structure without accounting for MW effects. In the second version, we additionally apply forces from the MW dark matter halo. Following <cit.>, we model the MW halo as an NFW profile <cit.> with a mass of M_200=0.82 × 10^12M_⊙ and a concentration of c_200=13.31.[Here, M_200 is the mass contained in a spherical region of radius R_200 with a density equal to 200 times the critical density and c_200=R_200/R_s, with R_s the scale radius of the NFW profile.] For computational simplicity, we use the uncontracted version of the model, since both versions fit the data nearly equally well. We place the centre of the NFW potential at a distance of 8 from the centre of the simulation in the direction of Sag-A^*. We also include the motion of the galactic centre in the CMB rest frame of the simulation, by letting the centre of the NFW potential move at a constant speed of 567/ in the direction of galactic coordinates (l,b)=(267^∘,29^∘) <cit.>. In Section <ref>, we additionally correct for the motion of the Sun relative to the CMB, v_⊙=369.8/ towards (l,b)=(264^∘,48.3^∘) <cit.>, which is otherwise unresolved by the simulations. Crucially, we note that we use a more recent and considerably smaller estimate of the MW mass than that used in previous related works <cit.>. We therefore expect to find a smaller effect from the MW. Since we are mainly interested in the imprint of the large-scale structure, we do not include the various gaseous and stellar components of the MW, which are altogether less important than the dark matter halo itself. §.§ Model selection To derive constrained initial conditions with BORG, we have to assume a particular cosmological model. 
The constraints used in this paper were derived assuming a flat ΛCDM model with parameters (Ω_cdm, Ω_b, h, A_s, n_s, ∑ m_ν) = (0.2621, 0.04897, 0.6766, 2.105×10^-9, 0.9665, 0). Despite the fact that this model does not include massive neutrinos, we wish to run constrained simulations for different neutrino masses, without running an expensive MCMC analysis for each case. Doing this requires modifying the cosmological model slightly without altering the clustering on small scales, since otherwise the same phase information would give rise to structures that differ somewhat from the observations. We therefore take the following approach. When increasing ∑ m_ν, we decrease Ω_cdm such that Ω_m = Ω_cdm + Ω_b + Ω_ν is fixed. In addition, we modify the primordial scalar amplitude A_s, such that the non-linear power spectrum at z=0 is fixed at the non-linear scale k_nl=1. Note that P_cb, the power spectrum of cold dark matter and baryons, is the relevant power spectrum, given that halos are primarily biased with respect to the cold matter, as opposed to the total matter density <cit.>. To achieve this in practice, we perform a small number of calibration runs and iteratively select values of A_s that satisfy this condition. As noted before, the data mostly constrain scales larger than 4 within 200 of the observer. As shown in Fig. <ref>, this leaves enough flexibility on large scales to accommodate neutrino masses up to ∑ m_ν∼0.6.[We note that this breaks the agreement with CMB observations, which primarily constrain large scales. This is simply another way of stating that the combination of CMB and LSS data can rule out large neutrino masses in νΛCDM, although we make no attempt to do this here.] To see this, note that the left-hand panel shows total matter power spectra, P_m(k), for nine realizations assuming ΛCDM without massive neutrinos. Although the power spectrum is well-constrained on small scales, there is considerable variance on large scales (k≲0.03). The right-hand panel shows the power spectrum of dark matter and baryons, P_cb(k), for the calibrated models with different neutrino masses, relative to the massless case. For the largest mass considered, ∑ m_ν=0.6, the ratio is still within 1σ of the average. We also checked that the cross-correlation coefficients of the final density fields are within 1% for k≤ k_nl and ∑ m_ν≤0.3 and within a few per cent for ∑ m_ν≤0.6, indicating that the phase information is the same on large scales. Finally, we performed a visual inspection to confirm that the we recover the same large-scale structure for all neutrino masses. Hence, the outcome of this procedure is a plausible cosmological model with massive neutrinos that reproduces the observations. Although the resulting power spectra are compatible with the constraints at the 1σ-level, one may wonder whether the 20%-30% differences seen for ∑ m_ν=0.6 on the largest scales could still affect the results. We expect the impact of this offset to be small, because the distance travelled by neutrinos is inversely proportional to the mass, such that heavier neutrinos are less sensitive to large-scale density perturbations. Therefore, matching only the small-scale power spectrum for ∑ m_ν=0.6 is likely justified. Using the above procedure, we calibrate six models with different neutrino masses: four models with three degenerate neutrino species, ∑ m_ν∈{0.15,0.3,0.45,0.6}[Hence, the individual neutrinos have masses m_ν∈{0.05,0.1,0.15,0.2}.], and two models with a single neutrino species, ∑ m_ν∈{0.01,0.06}. 
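To make the calibration loop concrete, the sketch below shows the bookkeeping only: Ω_cdm is reduced to keep Ω_m fixed, using the standard relation Ω_ν h² = Σm_ν/93.14 eV, and the scalar amplitude is iterated until a stand-in P_cb(k_nl, z=0) matches the massless-neutrino reference. The response function here is a crude placeholder; the production calibration evaluates the actual non-linear power spectrum measured from the simulations.

```python
# Schematic of the calibration bookkeeping (not the production pipeline): the
# power-spectrum response `pcb_at_knl` is a placeholder for the measured
# non-linear P_cb(k_nl = 1 h/Mpc, z = 0).
h, omega_b, omega_m = 0.6766, 0.04897, 0.3110
A_S_REF = 2.105e-9                              # massless-neutrino reference amplitude

def omega_nu(sum_mnu_ev):
    return sum_mnu_ev / (93.14 * h ** 2)        # Omega_nu h^2 = sum(m_nu) / 93.14 eV

def pcb_at_knl(a_s, sum_mnu_ev):
    # Placeholder: linear in A_s, with a crude small-scale suppression from neutrinos.
    return a_s * (1.0 - 6.0 * omega_nu(sum_mnu_ev) / omega_m)

target = pcb_at_knl(A_S_REF, 0.0)
for sum_mnu in (0.06, 0.15, 0.30, 0.60):        # eV
    om_cdm = omega_m - omega_b - omega_nu(sum_mnu)     # keep Omega_m fixed
    a_s = A_S_REF
    for _ in range(10):                          # fixed-point iteration on the amplitude
        a_s *= target / pcb_at_knl(a_s, sum_mnu)
    print(f"sum m_nu = {sum_mnu:4.2f} eV : Omega_cdm = {om_cdm:.4f}, calibrated A_s = {a_s:.3e}")
```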
The relevant model parameters are given in Table <ref>. Although not strictly allowed by oscillation data, the first four models assume a degenerate neutrino mass spectrum, neglecting the mass-squared differences |Δ m^2_31|= 2.5×10^-3 eV^2 and Δ m^2_21=7.4×10^-5 eV^2 <cit.>. Of course, the last two models are also not allowed. The penultimate case is included to examine the behaviour of very light neutrinos. The last model is included as it approximates the cosmological effects of the minimal neutrino mass case under the normal mass ordering. In each case, the intent is only to recover the correct cosmological evolution for a given neutrino mass, m_ν, and for this purpose, the mass splittings have a negligible effect <cit.>. §.§ Neutrino treatment Let us now discuss our treatment of neutrino perturbations. The evolution of the phase-space distribution, f(𝐱,𝐪,τ), is governed by the collisionless Boltzmann equation: ∂ f/∂τ + d𝐱/dτ·∇ f + d𝐪/dτ·∇_𝐪f = 0, where τ is conformal time and 𝐪 the neutrino momentum. We solve this equation by generating particles from a sampling distribution g(𝐱,𝐪) and tracing their evolution through the constrained volume using the relativistic equations of motion <cit.> dx^i/dτ = q^i/√(q^2+m^2a^2), dq_i/dτ = -ma ∇_iΦ, where a is the scale factor and Φ the gravitational potential. The sampling distribution g need not be the same as the physical distribution f and can be chosen arbitrarily, subject to being normalized and the set {g=0, f≠ 0} having measure zero. We model neutrino perturbations with the δ f method <cit.>, a variance reduction technique in which the phase-space distribution is decomposed as f(𝐱,𝐪,τ) = f̅(q) + δ f(𝐱,𝐪,τ). In this approach, particle data are only used to estimate the perturbation δ f to an analytical background model f̅, which allows sampling noise to be reduced by orders of magnitude. Let A be an arbitrary phase-space statistic, such as the number or momentum density. Then, its δ f estimate is A(𝐱,τ) = ∫d^3q[f̅(𝐱,𝐪,τ) + δ f(𝐱,𝐪,τ)]A(𝐱,𝐪,τ) ≅A̅(τ) + ∑_k=1^N δ f(𝐱_k,𝐪_k,τ)/g(𝐱_k,𝐪_k)A(𝐱_k,𝐪_k,τ) δ^(3)(𝐱-𝐱_k), where A̅ is the analytical background solution and we sum over particle data {𝐱_k,𝐪_k}. The Dirac function is often replaced with a spatial smoothing kernel W(𝐱-𝐱_k). The fraction on the right-hand side corresponds to a statistical weight w = δ f/g, which is simple to compute in practice, given that the background density f̅ is an analytical function and that f and g are conserved along particle trajectories. Throughout, we use a standard Fermi-Dirac distribution, f̅(q)=(1+exp(q/k_bT_ν))^-1, for the background model and we set g=f when generating the initial conditions. This approach is sufficient for describing the neutrino distribution on large scales, as illustrated in Fig. <ref> for 0.06 eV neutrinos. However, given the (1)^3 ambient volume of our simulations, there is a more efficient way to estimate the properties of neutrinos incident on Earth. For this, we inject `spectator' neutrinos at the location of Earth and run our simulations backwards. For these neutrinos, we adopt an isotropic Fermi-Dirac sampling distribution g. We then apply our δ f logic in reverse: given the known sampling density g and the background density f̅(q), with the momentum q taken from the final (z=31) snapshot of the backwards simulation, we obtain the statistical weight w=(f̅-g)/g. We again estimate phase-space statistics using Eq. (<ref>). Note that in this case, the assumed sampling distribution g is not equal to the physical distribution f (a schematic of this reverse weighting step is given below).
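The sketch below illustrates only the weighting step, under stated assumptions: spectator momenta are drawn from the homogeneous Fermi-Dirac distribution, and the backwards geodesic integration is replaced by an arbitrary placeholder (a flat 5 per cent momentum deficit at z=31). In the real pipeline that step is the reversible N-body integration described above.

```python
import numpy as np

rng = np.random.default_rng(0)
T_NU = 1.68e-4                         # present-day neutrino temperature in eV

def f_bar(q):
    """Homogeneous Fermi-Dirac occupation; q is the comoving momentum in eV."""
    return 1.0 / (1.0 + np.exp(q / T_NU))

def sample_fd(n):
    """Draw |q| from the q^2 f_bar(q) distribution by rejection sampling."""
    out = np.empty(0)
    while out.size < n:
        q = rng.uniform(0.0, 20.0 * T_NU, size=4 * n)
        accept = rng.uniform(0.0, 0.5 * T_NU**2, size=q.size) < q**2 * f_bar(q)
        out = np.concatenate([out, q[accept]])
    return out[:n]

N = 200_000
q_today = sample_fd(N)                 # isotropic spectator momenta injected at the observer

# Stand-in for the backwards geodesic integration to z = 31: assume each spectator
# had 5 per cent less comoving momentum before falling into the local structure.
q_early = 0.95 * q_today

# Reverse delta-f weights w = (f_bar - g)/g and the implied local overdensity.
w = f_bar(q_early) / f_bar(q_today) - 1.0
print(f"schematic delta_nu = {w.mean():.4f} +/- {w.std() / np.sqrt(N):.4f}")
```

Because the weights depend only on ratios of f̅, the same machinery yields higher moments (bulk velocity, deflection angles) by attaching the appropriate statistic A(𝐱_k,𝐪_k) to each spectator.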
In particular, we do not expect the distribution of local relic neutrinos to be exactly isotropic. However, the assumption of an isotropic and homogeneous Fermi-Dirac distribution at z=31 still allows us to use Eq. (<ref>) to obtain physical phase-space estimates. Finally, we note that running N-body simulations backwards is non-trivial and we refer the reader to Appendix <ref> for details on how this is accomplished. § RESULTS Having described our simulation methods, we are now in a position to discuss the results. In Section <ref>, we present the expected number density, bulk velocity, and deflection angles of relic neutrinos in the Milky Way. We also compute expected event rates for PTOLEMY. In Section <ref>, we turn to the angular distribution of neutrino anisotropies. In Section <ref>, we adopt a cosmographical perspective and look at maps of the large-scale distribution of neutrinos in the local Universe. §.§ Local abundance and bulk motion A crucial input for relic neutrino detection efforts is the expected gravitational enhancement of the local neutrino density. Using our constrained simulations, we are able for the first time to compute the total effect of the observed large-scale structure. The result is shown in the left-hand panel of Fig. <ref>. The black line (labelled LSS) shows the effect from the large-scale structure, excluding the Milky Way, on the neutrino overdensity, δ_ν=n_ν/n̅_ν-1, as a function of neutrino mass m_ν. The error bars indicate the dispersion among the nine constrained realizations. We see that the enhancement is negligible for m_ν≤0.05 eV. In fact, for the smallest mass of 0.01 eV, we find a small deficit of δ_ν=-0.0038±0.0006. From there, the density contrast increases approximately linearly with mass up to δ_ν=0.25±0.08 for 0.2 eV. The red line shows the combined effect of the large-scale structure and the Milky Way dark matter halo (LSS + MW). The importance of the MW increases with mass, relative to the LSS. For m_ν=0.1 eV, they are approximately equally important. For m_ν=0.2 eV, the MW is responsible for three-quarters of the effect. This is a result of the decrease in free-streaming length with mass: at average speed, an unperturbed 0.01 eV neutrino has travelled 3.1 Gpc since z=31, while the corresponding figure is only 200 Mpc for 0.2 eV. As a result, lighter neutrinos are sensitive to more distant structures. We will confirm this explicitly in Section <ref>. Taking the difference between the results with and without the MW, we find that the galactic effect is well described by δ_ν^MW=27.6(m_ν/eV)^2.29. The near-quadratic scaling agrees with <cit.>, who found δ_ν^MW=76.5(m_ν/eV)^2.21, but our amplitude is three times smaller. Similarly, we find significantly smaller overdensities compared to <cit.>. This may be partially due to the absence of gaseous and stellar Milky Way components in our simulations. However, the primary reason is most likely the more recent but lower estimate of the dark matter mass used in this work (M_200=0.82 × 10^12M_⊙ here compared to M_200=3.34× 10^12M_⊙ in <cit.> and M_200=1.79×10^12M_⊙ in <cit.>).[In this comparison, we converted their virial masses to masses within a spherical region containing 200 times the critical density. We also note that <cit.> used a generalized NFW profile with an additional parameter, precluding an exact one-to-one comparison.] To confirm this, we verified for one simulation that doubling the MW mass approximately restores agreement with <cit.>.
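The fitted scaling relation can be evaluated directly; the short snippet below (our own illustration) compares the fit quoted above with the earlier literature fit, making the roughly threefold difference in amplitude explicit.

```python
def delta_nu_mw(m_nu_eV, amplitude=27.6, index=2.29):
    """Milky-Way-only neutrino overdensity from the power-law fit quoted in the text;
    amplitude=76.5, index=2.21 reproduces the earlier fit from the literature."""
    return amplitude * m_nu_eV ** index

for m in (0.05, 0.10, 0.15, 0.20):
    print(f"m_nu = {m:.2f} eV: this fit {delta_nu_mw(m):.3f}, "
          f"earlier fit {delta_nu_mw(m, 76.5, 2.21):.3f}")
```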
As an exception to the rule, the recent study <cit.> found overdensities smaller than ours by up to an order of magnitude (e.g. 0.7% compared to our 7% for 0.05 eV). However, the authors note that their estimates are conservative. Some detection proposals depend on the neutrino velocity in the lab frame <cit.>. From our simulations, we estimate the bulk neutrino velocity v_ν. Given that the simulation is carried out in the rest frame of the CMB, a value of v_ν = 0 indicates that the neutrino dipole aligns with that of the CMB. We show the expected magnitude of the velocity perturbation in the right-hand panel of Fig. <ref>. As for δ_ν, the gravitational effect of the large-scale structure and Milky Way is negligible for m_ν=0.01 eV. The velocity perturbation increases to 211 km/s at m_ν=0.05 eV and trends towards 415 km/s for m_ν=0.2 eV. These neutrinos are approximately at rest with respect to the bulk flow of matter in the inner 10 Mpc of the simulation (see Table <ref>). When we include the effect of the Milky Way, the velocity appears to converge for the largest neutrino masses. Combined with the increased density perturbation, this indicates that the simulated MW and the surrounding structure are capable of trapping 0.2 eV neutrinos in significant numbers. In addition to the magnitude of the velocity perturbation, we can also predict its orientation. Table <ref> shows the predicted direction of the neutrino dipole, for the runs without MW, in galactic coordinates and compares it with the measured values for the CMB dipole from Planck <cit.> and the direction of the simulated matter flow within 10 Mpc of the observer. For 0.01 eV, the predicted 1σ range of the neutrino dipole contains the measured CMB dipole. As m_ν increases to 0.2 eV, the values appear to converge towards the direction of the bulk flow of dark matter.[Note that the uncertainties are larger for the bulk dark matter velocity, because it is computed from the forward simulations, which have a much lower sampling density near the observer.] The results are broadly similar for the runs with MW. In the case of 0.01 eV, we find (l,b)=(258.0^∘±0.5^∘,47.7^∘±0.1^∘), which is still very close to the CMB dipole. For 0.2 eV, the direction changes somewhat more to (l,b)=(203.2^∘±2.9^∘,7.2^∘±6.0^∘). It is interesting to note that the ecliptic north pole is at l=97^∘, b=30^∘. This means that the neutrino dipole is close to the plane of Earth's orbit around the Sun, making an angle of ϕ≈ 10^∘. The Earth's orbital velocity is v_⊕≈30 km/s, producing a (2v_⊕ / v_ν)cosϕ∼ 20% perturbation for a typical neutrino velocity of v_ν=300 km/s. Hence, for experiments that depend on the neutrino velocity, an annual modulation may be detectable <cit.>. Finally, we note that the sibelius-dark simulation, which used similar techniques to set up the initial conditions, did not accurately reproduce the observed direction of the local matter flow <cit.>. We therefore caution that the theoretical uncertainty in the direction may be greater than the dispersion among the nine realizations given in Table <ref>. A quantity related to the velocity perturbation is the deflection angle between the initial and final velocities, cosθ = (𝐯_ν·𝐯^ini_ν) / (v_ν v^ini_ν). For non-relativistic neutrinos, the gravitational effect on the spin is negligible, such that a deflection of the momentum vector by an angle θ implies a change in the helicity from ±1 to ±cosθ, with a probability P=1/2-cosθ/2 of observing a reversed spin <cit.>.
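Both the helicity-flip probability and the annual modulation follow from simple kinematics; a minimal sketch (our own illustration, using representative numbers rather than values from the tables) is given below.

```python
import numpy as np

def spin_flip_probability(cos_theta):
    """P = 1/2 - cos(theta)/2: probability of observing a reversed spin after a
    gravitational deflection by an angle theta, as quoted in the text."""
    return 0.5 - 0.5 * np.asarray(cos_theta, dtype=float)

def annual_modulation(v_nu_kms=300.0, v_earth_kms=30.0, phi_deg=10.0):
    """Fractional modulation of the lab-frame neutrino speed over a year,
    (2 v_earth / v_nu) cos(phi), for a dipole inclined by phi to the ecliptic plane."""
    return 2.0 * v_earth_kms / v_nu_kms * np.cos(np.radians(phi_deg))

print(spin_flip_probability([0.995, 0.6]))   # small for weak deflection, ~20% for cos(theta)=0.6
print(annual_modulation())                   # ~0.20 for the representative values above
```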
It has recently been argued that the gravitational effect of the Virgo Supercluster might result in large deflection angles, significantly altering the helicity makeup of the neutrino background <cit.>. These authors compute deflection angles for neutrinos in halos of a similar mass to Virgo, M=1.48× 10^15M_⊙, finding an average of ⟨cosθ⟩=0.54 - 0.60 for m_ν=0.05 eV. Using our constrained simulations, which include Virgo, we can estimate directly the effect that the large-scale structure has on neutrinos that arrive on Earth. We give the average for different neutrino masses and for the cases with and without Milky Way in Table <ref>. For 0.05 eV, we find ⟨cosθ⟩=0.99482±0.00084, when including the Milky Way. Given that the deflection is even smaller for lighter neutrinos, we expect the effect of gravitational deflection to be negligible for the minimal neutrino mass case, ∑ m_ν=0.06 eV. Gravitational clustering also has the potential to alter the flavour composition of the local neutrino background <cit.>. The mass eigenstates ν_i considered so far are superpositions of flavour eigenstates ν_α, with α=e,μ,τ, for electron, muon, and tau neutrinos. The two bases are related by the unitary Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix U_α i <cit.>. The flavour composition could be altered, since the degree of clustering depends on mass. For instance, assuming the mass ordering is normal, the contribution of ν_e to the heaviest mass state ν_3 is only | U_e3|^2 = 2.3%. Therefore, if ν_3 is much more strongly clustered than ν_1 and ν_2, most relic neutrinos on Earth would be ν_μ or ν_τ. For this effect to be large, the masses must be hierarchical (m_1≪ m_3 or m_3≪ m_1), which requires m_ν≲0.1 eV. Fig. <ref> shows that the differences in the density contrast δ_ν are then small, which implies that the fraction of ν_e is not significantly altered from its primordial value of 1/3. We nevertheless incorporate this effect in the calculation below. We now have the necessary ingredients to compute the expected event rate for an experiment like PTOLEMY. The CNB capture rate is <cit.> Γ_CNB = ∑_i=1^N_ν N | U_ei|^2 σ̅[n^+_iA_i^+ + n_i^- A_i^-], where N is the number of targets, U_ei are the PMNS mixing elements, σ̅ is the average cross section, n_i^± are the number densities for the two spin states, A^±_i = 1∓ v_i/c is a spin-dependent factor, and v_i is the velocity of the ith mass eigenstate. As discussed, gravitational deflection by an angle θ reverses the spin with probability P=1/2-cosθ/2. The number densities for both spin states are then given by n^±_i = n_i[1/2±1/2⟨cosθ⟩_i]. In the absence of clustering and deflection, ⟨cosθ⟩_i=1, such that n^+_i=n_i=n̅ and n^-_i=0 for Dirac neutrinos. For Majorana neutrinos, the densities are both equal to the mean: n^+_i=n^-_i=n̅. Consequently, for non-relativistic neutrinos with A^±_i=1, the expected signal is twice as large in the Majorana case. If we allow for gravitational effects, we instead obtain Γ^D_CNB = Nσ̅∑_i=1^N_ν| U_ei|^2 [1 + ⟨cosθ⟩_i v_i/c] n_i, Γ^M_CNB = Nσ̅∑_i=1^N_ν| U_ei|^2 2n_i, for the Dirac and Majorana cases, respectively. Plugging in the number N=100 g/m_^3H of tritium atoms for PTOLEMY (corresponding to 100 g of tritium) <cit.> and the average cross section σ̅=3.834×10^-45 cm^2 from <cit.>, as well as the PMNS mixing elements | U_ei|^2=(0.678,0.299,0.023) <cit.>, and a mean number density of n̅=56 cm^-3 per degree of freedom, we obtain the event rates in Table <ref>.
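For orientation, the Dirac and Majorana rate expressions above can be evaluated numerically. The sketch below is our own bookkeeping (the explicit factor of c used to normalize σ̄ and the conversion of 100 g of tritium into a number of atoms are our assumptions, not taken from the paper); it reproduces the familiar ballpark of a few capture events per year.

```python
import numpy as np

SIGMA_BAR_CM2 = 3.834e-45                    # average capture cross section (cm^2)
N_TRITIUM = 100.0 / (3.016 * 1.66054e-24)    # atoms in 100 g of tritium (our conversion)
N_BAR_CM3 = 56.0                             # mean CNB density per degree of freedom (cm^-3)
U_E_SQ = np.array([0.678, 0.299, 0.023])     # |U_ei|^2
C_CM_S = 2.998e10                            # speed of light in cm/s
SECONDS_PER_YEAR = 3.156e7

def capture_rate_per_year(n_i_cm3, cos_theta_i=(1, 1, 1), v_over_c_i=(0, 0, 0), majorana=False):
    """CNB capture rate (events/yr) following the Dirac/Majorana expressions in the text.
    The factor of c is our own normalization convention for sigma_bar."""
    n_i = np.asarray(n_i_cm3, dtype=float)
    if majorana:
        per_state = 2.0 * n_i
    else:
        per_state = (1.0 + np.asarray(cos_theta_i) * np.asarray(v_over_c_i)) * n_i
    rate_per_s = N_TRITIUM * SIGMA_BAR_CM2 * C_CM_S * np.sum(U_E_SQ * per_state)
    return rate_per_s * SECONDS_PER_YEAR

# Unclustered, non-relativistic limit: roughly 4 (Dirac) vs 8 (Majorana) events per year.
n = [N_BAR_CM3] * 3
print(capture_rate_per_year(n), capture_rate_per_year(n, majorana=True))
```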
To translate our results to the minimal neutrino mass case, we assume that the lightest neutrino is relativistic and unaffected by gravitational clustering and combine the 0.01 eV and 0.05 eV results for ∑ m_ν=0.06 eV under the normal mass ordering (NO) or use the 0.05 eV results twice for ∑ m_ν=0.1 eV under the inverted ordering (IO). We generally predict a factor ∼2 difference between the Dirac and Majorana cases. The only exception is ∑ m_ν=0.06 eV, for which the difference is only 19%. In this case, the Dirac rate is enhanced because the electron-neutrino content is dominated by the lightest eigenstate, which is relativistic (v_i=c), so its capture factor approaches the Majorana value. Comparing the most and least massive cases, we see that gravitational clustering only has a marginal effect, boosting the capture rate by less than a factor of two. §.§ Angular anisotropies Having presented our results for the monopole and dipole moments, we now turn to higher-order moments of the neutrino distribution. Fig. <ref> presents maps of the predicted angular anisotropies in the number density for four different masses, after subtracting the monopole and dipole perturbations. In each case, we average over nine realizations from the reconstruction. The top-left panel shows the map for m_ν=0.01 eV and the right-hand panels show maps for m_ν∈{0.05,0.1,0.2} eV. First of all, we observe that the magnitude of the perturbations strongly depends on mass: they are 𝒪(10^-2) for m_ν=0.01 eV and 𝒪(1) for m_ν=0.2 eV. We also see that, relative to the small-scale perturbations, the large-scale perturbations are more suppressed in the maps for the largest neutrino masses than in those for the smaller masses. The middle-left panel shows the projected density of dark matter and baryons, 1 + δ^Π_cb(n̂,R_max) = ∫_0^R_maxρ_cb(𝐫) dr/∫_0^R_maxρ̅_cb(𝐫) dr, up to a distance of R_max=200 Mpc from the observer. Comparing this with the neutrino maps, we find that distant matter fluctuations are anti-correlated with local neutrino fluctuations. This can be seen more clearly in the bottom-left panel, in which the projected matter perturbations are overlaid on the neutrino perturbations for m_ν=0.01 eV. The anti-correlation is much more evident for smaller neutrino masses. In Fig. <ref>, we show the angular power spectrum of the neutrino density, C^ν_ℓ, for five different masses, averaging over nine realizations from the chain. To uncover the perturbations imprinted by the large-scale structure, we fit smooth spectra of the form C^fit_ℓ = exp[c_1 + c_2logℓ + c_3(logℓ)^2], to the simulation predictions, restricting to the multipoles with 1≤ℓ≤15, since higher-order multipoles are noisy and poorly constrained. The thick curves in Fig. <ref> correspond to these fits, with the solid and dashed lines indicating the LSS-only and combined LSS + MW results, respectively. As expected from the previous section, the effect of the MW is most pronounced for the largest neutrino masses and the lowest-order multipoles. The difference between the dashed and solid curves is negligible for m_ν≤0.05 eV, but clearly visible for m_ν=0.2 eV. After converting our results to dimensional temperature power spectra with δ T_ν∼δ n_ν/3 and T̅^2_ν=(1.95 K)^2, we obtain qualitative agreement with the linear theory calculations of <cit.> for masses m_ν≤0.1 eV, the largest mass amenable to their analysis. These authors model the gravitational deflection of neutrinos with a lensing potential, similar to what is done for the CMB <cit.>. A key difference between our results and theirs is the presence of oscillatory perturbations around the smooth spectra in Fig.
<ref>, which are much larger than their predicted lensing effect. This can be seen more clearly in the inset graph, which zooms in on the lowest-order multipoles (ℓ≤10) and shows the simulation predictions relative to the smooth fits. The perturbations depend sensitively on mass, being most prominent for 0.01 eV and nearly absent for 0.2 eV. The origin of these perturbations becomes clear when we plot the angular power spectrum, C^cb_ℓ, of the projected CDM and baryon density up to 200 Mpc, in Fig. <ref>. Fitting a smooth power spectrum (<ref>) in the same way as for the neutrinos reveals the same oscillatory perturbations. This suggests that cosmic variance in the matter density field is imprinted on the local neutrino background if the neutrino mass is sufficiently small. To confirm this explicitly, we compute the cross-correlation coefficient, r_cbν(ℓ) = C^cbν_ℓ/(C^ν_ℓ C^cb_ℓ)^1/2, between the local neutrino density and the projected dark matter and baryon density, as a function of the maximum projected distance R_max. The results, averaged over the lowest-order multipoles, 1≤ℓ≤10, and smoothed with a Savitzky-Golay filter, are shown in Fig. <ref>. We additionally split the results into ten equal-sized neutrino momentum bins, with redder curves indicating faster neutrinos. For both neutrino masses shown, m_ν=0.01 eV (left) and 0.05 eV (right), there is a strong anti-correlation that peaks around r_cbν=-0.8. In both cases, faster neutrinos are sensitive to more distant matter fluctuations. To emphasize this point, we indicate the locus of the barycentre of each curve by a black dashed line. Note that r_cbν trends upwards as R_max decreases, eventually becoming positive for the fastest neutrinos. This might be explained by the gravitational attraction of neutrinos to positive density perturbations close to the observer. In this case, a positive correlation should be expected. In line with expectation, the distance at which the correlation becomes positive increases with neutrino momentum. Interestingly, the anti-correlation becomes weaker with neutrino momentum for 0.01 eV and stronger with neutrino momentum for 0.05 eV. A simple explanation for this could be that the anti-correlation begins trending upwards earlier for faster neutrinos, causing a reversal in the trend, as can be seen for R_max<100 Mpc in the case of m_ν=0.05 eV. For m_ν=0.01 eV, this reversal may only happen at distances that are not constrained by the data underlying our simulations. Just before this paper was submitted, a related study appeared in which neutrino anisotropy maps are analysed for different random configurations of dark matter halos in a (25 Mpc)^3 volume <cit.>. For some configurations, they report positive or negative correlations between the neutrino and projected dark matter densities. Overall, the ensemble average of cross-power spectra is consistent with zero. Taking into account the smaller volume of the simulations, this can probably be understood in terms of the aforementioned transition from positive to negative correlations close to the observer. §.§ Cosmography In this section, we make a first attempt at neutrino cosmography. Given the limited resolution of our simulations, we focus on one illustrative example and run a higher-resolution constrained simulation with N_ν=N_cb=1024^3 particles for ∑ m_ν=0.06 eV. In Fig. <ref>, we present maps of the neutrino density (left) and dark matter and baryon density (right), in a slice of 500×500×60 Mpc^3 that includes the Local Group and several well-known clusters.
A few striking observations can be made. First of all, the large-scale neutrino and dark matter densities are positively correlated. This explains the anti-correlation seen in the previous section. Relic neutrinos that are captured by massive objects form localized clouds. Hence, while they are visible from the hypothetical viewpoint[The viewpoint of a distant observer looking at the Milky Way in its cosmic environment. One might call this the Archimedean viewpoint, based on Archimedes' claim that he could lift the Earth given only a fulcrum and a place to stand.] depicted in Fig. <ref>, they would not be seen from Earth along lines of sight that intersect those structures. After plotting the locations of several famous galaxy clusters, we find massive dark matter structures associated with each of them. Surrounding most of these structures, we also identify neutrino clouds that stretch over 10 Mpc scales and reach central overdensities of 30%. Two interesting exceptions are the Perseus and Pisces clusters, which lie close to the Taurus void <cit.> and appear to inhabit a large region that is deficient in neutrinos (a `glade' in the neutrino cloudscape). Although we see some collapsed dark matter structures at their locations, these are more dispersed than those of other clusters. This could be due to a failure of the constrained simulations to model the Perseus-Pisces wall accurately <cit.>. The Milky Way is marked by a white triangle, located along a filament that stretches towards the Virgo cluster. For this neutrino mass, m_ν=0.06 eV, we appear to inhabit a region with a large-scale neutrino overdensity that is not due to the Milky Way. It was this large-scale modulation of the neutrino density that originally motivated our investigation. Its effect was shown in Fig. <ref> as a function of mass. For 0.01 eV, we predicted a small neutrino deficit. We now see that this could be due to our proximity to the Taurus/Perseus-Pisces glade. Hence, the local neutrino density depends on the interplay between the overdensities associated with Virgo and the Local Group and nearby underdensities. The direction of the neutrino dipole is indicated by a white arrow. It points away from the overdense region around the Coma cluster, which is consistent with our motion towards the Shapley Supercluster and the Great Attractor <cit.>. Correspondingly, it points towards an underdense region known as the Dipole Repeller, which causes an apparent repulsion <cit.>. In short, the behaviour of the CNB dipole is similar to that of the CMB when the neutrino mass is small, consistent with our findings in Section <ref>. § CONCLUSION Direct detection of the Cosmic Neutrino Background (CNB) remains one of the great challenges in cosmology. In this paper, we have analysed the gravitational effects of the large-scale structure and the Milky Way on the local neutrino background. Through the use of the `BORG' framework for Bayesian forward modelling of large-scale structure observations <cit.>, we have carried out constrained simulations of the local Universe for different neutrino cosmologies with masses between ∑ m_ν=0.01 eV and ∑ m_ν=0.6 eV. The constraints are based on the catalogue <cit.>, which maps the local Universe out to a distance of 200 Mpc. We account for the Milky Way dark matter halo, using an updated estimate of the mass from <cit.>.
By tracing neutrinos back through the galaxy and large-scale structure with a bitwise reversible version of the N-body code Gadget-4 <cit.>, keeping track of phase-space density perturbations, we compute statistics of the expected neutrino flux. Our results suggest that the gravitational clustering of neutrinos due to the large-scale structure is not negligible compared to the effect of the Milky Way, with both contributing about half of the total effect for 0.1 eV neutrinos. Despite the inclusion of the large-scale structure, we find smaller overdensities compared to earlier studies <cit.>. We attribute this to a decrease in recent estimates of the Milky Way halo mass. We therefore predict only marginal increases in the event rates for tritium capture experiments like PTOLEMY <cit.>. We also predict a smaller impact of gravitational deflection on the helicity distribution of the neutrino background compared to <cit.>, due to our distance from the centre of the Virgo cluster. As a result, the difference between the event rates for Dirac and Majorana neutrinos is slightly smaller, though still close to 100% in most cases. Similarly, we also predict a smaller impact on the flavour composition compared to <cit.>, with an electron-neutrino fraction that is close to 1/3 even for hierarchical masses. We also make predictions for the neutrino dipole. In the limit of very small neutrino masses, m_ν≤0.01 eV, we recover the CMB result with a dipole that corresponds to Solar motion towards (l,b)=(264^∘, 48^∘) at a relative velocity of around 300 km/s. The velocities are significantly perturbed for larger masses and the dipole direction shifts, but remains nearly parallel to the ecliptic plane. This implies a near-maximal annual modulation in the neutrino velocity throughout Earth's orbit around the Sun. Although perhaps a distant prospect, a future directional CNB detector might image the angular distribution of relic neutrinos. We have produced maps and power spectra of the non-linear neutrino perturbations imprinted by the large-scale structure. Our findings are in qualitative agreement with the linear theory results of <cit.> for masses m_ν≤0.1 eV, but with a much larger gravitational effect that produces an oscillatory feature in the power spectrum. This feature is related to cosmic variance in the dark matter density field. Indeed, we find that local neutrino density perturbations, in principle detectable from Earth, are anti-correlated with the projected dark matter density up to at least 250 Mpc, the largest distance constrained by the catalogue, although for very nearby structures and fast neutrinos, we instead predict a positive correlation. The distance at which neutrinos are most sensitive to the intervening cosmic structure increases with momentum and decreases with mass, potentially enabling a kind of neutrino tomography of the large-scale structure, which would be impervious to extinction by gas and dust. Finally, we presented maps of the forecasted neutrino distribution in the local Universe, identifying neutrino clouds associated with several well-known clusters, such as Coma and Hercules. We release our simulation data to the public, which we hope will be useful for future analyses of the neutrino background. We are grateful to the authors of class, gadget-4, and monofonIC for making their codes available to the public. WE is supported by the Durham Prize Scholarship in Astroparticle Physics.
We acknowledge support from the European Research Council through ERC Advanced Investigator grant, DMIDAS [GA 786910] to CSF. BL is supported by the European Research Council (ERC) through ERC starting Grant No. 716532. WE, CSF, AJ, and BL acknowledge STFC Consolidated Grants ST/P000541/1, ST/T000244/1, ST/X001075/1. SP acknowledges partial support from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. H2020-MSCAITN-2019/860881-HIDDeN (ITN HIDDeN) and from the European Union’s Horizon Europe research and innovation programme under the Marie Sklodowska-Curie Staff Exchange grant agreement No. 101086085 – ASYMMETRY. JJ acknowledges support by the Swedish Research Council (VR) under the project 2020-05143 – “Deciphering the Dynamics of Cosmic Structure”. GL acknowledges support by the ANR BIG4 project, grant ANR-16-CE23-0002 of the French Agence Nationale de la Recherche, and the grant GCEuclid from “Centre National d'Etudes Spatiales” (CNES). This work was supported by the Simons Collaboration on “Learning the Universe”. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1 and ST/R002371/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. § DATA AVAILABILITY The simulation data are available at <https://www.willemelbers.com/files/nubg_data/>. There are 2×9×6 simulation files, corresponding to the versions with and without Milky Way, for nine posterior realizations of the initial conditions, and six different neutrino masses. For each spectator particle, we provide the phase-space density g, the sampled final peculiar velocity vector 𝐯_f at z=0, its backtraced initial peculiar velocity 𝐯_i at z=31, its initial and final comoving coordinates, and its statistical phase-space weight w=(f̅-g)/g. Phase space statistics can be computed using Eq. (<ref>). For instance, the perturbation to the local number density is simply given by the average weight: n/n̅=1+⟨ w⟩. We provide analysis scripts at <https://github.com/wullm/nubg_scripts>. § REVERSIBLE SIMULATIONS Running a cosmological N-body simulation backwards to recover the initial conditions is non-trivial (see <cit.> for related ideas). In principle, leapfrog integration is time-reversible <cit.>. However, in practice, small rounding errors inevitably accumulate in the backwards direction. This is problematic if one aims to recover a low entropy initial configuration (such as two merging galaxies that are initially well separated) from a final high entropy configuration (the merged galaxy). The root of the problem is the non-associativity of standard floating point arithmetic, causing different rounding errors in backwards integrations. Furthermore, floating point errors are not necessarily reproducible in parallel programs, because of the unpredictable execution order of threads. We here briefly discuss the modifications necessary to make a cosmological code reversible, in anticipation that this may be useful for other applications. To test the bitwise reversibility of Gadget-4, we periodically compute a hash of all particle data. The state of the simulation should be identical in the forwards and backwards directions at the beginning and end, respectively, of each corresponding step. 
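A minimal illustration of such a consistency check (our own sketch, not the actual Gadget-4 implementation) is to hash the raw bytes of the integer particle arrays at matching steps and compare the digests between the forward and backward runs:

```python
import hashlib
import numpy as np

def particle_state_hash(positions, velocities):
    """Hash the raw bytes of the (integer) particle positions and velocities.
    Bitwise-identical states in the forward and backward runs give equal digests."""
    h = hashlib.sha256()
    h.update(np.ascontiguousarray(positions).tobytes())
    h.update(np.ascontiguousarray(velocities).tobytes())
    return h.hexdigest()

# Toy check with integer phase-space coordinates, as advocated in this appendix.
pos = np.arange(12, dtype=np.int64).reshape(4, 3)
vel = -np.arange(12, dtype=np.int64).reshape(4, 3)
assert particle_state_hash(pos, vel) == particle_state_hash(pos.copy(), vel.copy())
```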
Unsurprisingly, the code is not reversible by default. A first step towards achieving this is to store particle positions and velocities as integers. Implementing integer velocities is a natural step, because Gadget-4 already uses integer positions by default to achieve constant precision throughout the simulation domain <cit.>. However, this is by no means enough to guarantee reversibility, if only because the gravitational Tree-PM algorithm still relies on floating point operations. To guarantee reversibility, we must therefore also ensure that different threads execute their tree calculations in the same order in both directions. Furthermore, there can be no time-asymmetric decision making. For instance, we use a basic geometric tree opening criterion <cit.>, because the more adaptive opening criterion available in Gadget-4 depends on the particle accelerations from the previous step, which are different in the backwards direction. Similarly, the time step is usually chosen based on the maximum distance that particles can move or on the acceleration of particles in the previous time step, which again introduces an asymmetry. To address this problem without adopting a constant time step, we store a list of step sizes used in the forwards direction and feed this file back in the backwards direction. Special consideration is also needed for the neutrinos to ensure that the δ f weighting is time-reversible. Special relativistic velocities (<ref>) can be absorbed in the leapfrog integration scheme <cit.>. The domain decomposition is another point of concern. By default, Gadget-4 uses floating point arithmetic for load balancing, which can lead to differences between the forwards and backwards runs. These operations are therefore modified to use integers. As a final example, recall that we inject additional `spectator' neutrinos at the start of the backwards runs. We take steps to ensure that their presence neither affects the domain decomposition of the original particles nor alters the gravity calculation. With these modifications, we exactly recover the initial conditions when running our constrained neutrino simulations backwards.
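To see why integer arithmetic makes the kick-drift-kick update exactly invertible, consider the following toy example (our own illustration, far simpler than the Tree-PM force calculation discussed above; the dt/2 factor of the half kick is absorbed into the toy acceleration):

```python
import numpy as np

def kdk_forward(x, v, acc, dt):
    """One kick-drift-kick step with integer positions/velocities; acc(x) must be a
    deterministic integer function. Integer addition is exactly invertible, unlike
    floating point rounding."""
    v = v + acc(x)          # first half kick
    x = x + v * dt          # drift
    v = v + acc(x)          # second half kick
    return x, v

def kdk_backward(x, v, acc, dt):
    """Exact inverse of kdk_forward, step by step in reverse order."""
    v = v - acc(x)
    x = x - v * dt
    v = v - acc(x)
    return x, v

acc = lambda x: -(x // 4)            # toy deterministic integer force
x0 = np.array([1000, -250, 7], dtype=np.int64)
v0 = np.array([3, 14, -15], dtype=np.int64)
x, v = x0.copy(), v0.copy()
for _ in range(1000):
    x, v = kdk_forward(x, v, acc, dt=2)
for _ in range(1000):
    x, v = kdk_backward(x, v, acc, dt=2)
assert np.array_equal(x, x0) and np.array_equal(v, v0)   # bitwise recovery of the ICs
```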
http://arxiv.org/abs/2307.00374v1
20230701160852
Revisiting Sample Size Determination in Natural Language Understanding
[ "Ernie Chang", "Muhammad Hassan Rashid", "Pin-Jie Lin", "Changsheng Zhao", "Vera Demberg", "Yangyang Shi", "Vikas Chandra" ]
cs.CL
[ "cs.CL" ]
Revisiting Sample Size Determination in Natural Language Understanding August 1, 2023 ================================================================================= ∗ These authors contributed equally to this work. Knowing exactly how many data points need to be labeled to achieve a certain model performance is a hugely beneficial step towards reducing the overall budgets for annotation. It pertains to both active learning and traditional data annotation, and is particularly beneficial for low-resource scenarios. Nevertheless, it remains a largely under-explored area of research in NLP. We therefore explored various techniques for estimating the training sample size necessary to achieve a targeted performance value. We derived a simple yet effective approach to predict the maximum achievable model performance based on a small number of training samples – which serves as an early indicator during data annotation for data quality and sample size determination. We performed ablation studies on four language understanding tasks, and showed that the proposed approach allows us to forecast model performance within a small margin of mean absolute error (∼ 0.9%) with only 10% of the data[Our code is available at: <https://github.com/pjlintw/sample-size>.]. § INTRODUCTION Labeled data play an important role in creating performant machine learning models, which makes data annotation a fundamental process for any natural language application pipeline <cit.>. Recent work has sought to reduce the annotation costs through the use of active learning <cit.> and data sampling <cit.>. Indeed, these approaches are shown to be effective in identifying or constructing data subsets needed to achieve a competitive model performance. For instance, the active learning paradigm adds new data iteratively to the existing set before model retraining <cit.>, improving upon the traditional human annotation pipeline that obtains the entire labeled set all at once. Nevertheless, the data labeling process typically annotates as much data as the annotation budget permits, or relies on clearly defined stopping criteria to terminate the labeling process. Unfortunately, this is usually challenging as annotators do not know the effect of added labels on model performance, nor how much more data is needed to arrive at the desired model generalizability <cit.>. The stopping condition is in fact tied to the quality of data samples w.r.t. model parameters <cit.>, which influences the effective sample size[It is the size of datasets which could have been achieved by an effective unweighted random sample <cit.>.], and it is then beneficial to obtain an approximation of the expected performance <cit.>. Therefore, knowing the approximate amount of training data needed for a particular performance would serve as useful knowledge, not only for deciding when to stop adding labeled data, but also as an early indication of the data quality. For instance, by having early label quality signals, we can decide between two different types of annotation, or even between two pools of annotators with different expertise. To this end, we explored the relationship between data sample size and model performance in the context of language understanding via learning curve modeling, which defines model performance as a function of dataset sizes.
By modeling this relationship in low-resource settings, we obtain useful early signals with approximated accuracies for any given labeled set, which provides an indication of the required sample size and data quality <cit.>. Previous studies have shown that nonlinear weighted curve fitting methods such as inverse power laws or exponential functions can provide decent approximations of the empirical predictive performances <cit.>. We thus put forward an ensemble of these functions, which we show to display consistently highly correlated behavior across four language understanding benchmarks with as little as 10% of the entire training set. This work makes the following contributions: * We revisit the task of sample size determination in four natural language understanding benchmarks and empirically explore the correlation strengths of several successful techniques. * Based on our findings, we propose an Ensemble function and demonstrate across several benchmarks and low-resource settings that it consistently provides a high correlation with the empirical learning curve plots. § BACKGROUND Our method is a sample size determination technique that helps to design annotation projects by estimating the necessary sample size. Previous methods have focused on identifying the sample size required to reach a specific target performance, such as a high correlation coefficient <cit.>, which often involves predicting the sample size necessary for a classifier to attain a specific accuracy level <cit.>. There are two main approaches for predicting the sample size needed to achieve a particular classifier performance: (1) <cit.> present a model-based method for predicting the number of samples required for classifying microarray data. (2) A more general approach involves fitting a classifier's learning curve to inverse power law models <cit.>. Examples of this approach include algorithms proposed by <cit.>. § THE APPROACH Learning Curve Modeling. A learning curve is a graphical representation of how a classifier's performance changes as the size of the training set increases. The curve typically has three sections: an initial section where performance improves rapidly with increasing training set size, a middle section where the rate of improvement slows down, and a final section where the classifier reaches its maximum performance and further increases in training set size do not lead to significant improvements. This relationship can be quantified using a set of data points, each of which represents the expected performance of the classifier E_acc on a particular training set size D_k. These data points can be plotted to create the learning curve, which can help to understand the behavior of the classifier and inform decision-making about how much training data is needed to achieve a desired performance level. Task Description. Given a downstream classification task with N_total data points, a learning curve model F predicts the expected performance E_acc of a classifier trained on an observed range of training set sizes (D_k; k ≥ N). The empirical learning curve is then fitted with parametric models to extrapolate the learning algorithm's performance. In our setup, we set k << N_total to simulate practical settings, where only a few data points (E_acc, D_k) can be obtained. Types of Extrapolations. Here, we study different forms of learning curve models with few learnable parameters that have proven simple yet effective.
The simplest type of learning curve model, the exponential function (Exp), introduces only two parameters a and b to fit the exponential behavior of the learning curve <cit.>. The second form, the Inverse Power Law function (Inverse), fits the inverse power law <cit.> and has three parameters. The third form uses a function from the power law family – the Power4 function (Pow4) <cit.> – with four parameters. Lastly, we propose to combine all functions into one (Ensemble) so that it has all their characteristics, in order to make it more robust across benchmarks. Table <ref> shows the formulae of our investigated extrapolating functions. § EXPERIMENTAL SETTINGS We study four NLU tasks: (1) IMDb <cit.> is a binary classification dataset (25K/–/25K)[Expressed in the order (train/dev/test).] where the model predicts the sentiment (positive/negative) of movie reviews from IMDB; (2) SST2 <cit.> is also a sentiment classification dataset (67K/0.8K/1.8K) containing movie reviews; since the model predicts whether a review is positive or negative, it is also a binary classification task; (3) AG NEWS is a multi-class classification dataset (120K/–/7.6K) containing news texts, where the model predicts whether a text belongs to the sports, science/technology, world, or business class. We also consider one other multi-class classification task, (4) the DBpedia dataset (560K/–/70K), since it could help us in testing the robustness of the methods used in our experiments. Configs. To investigate how changes in data size affect the predictiveness of the learning curves, under the assumption that the model structure and settings remain unchanged, we perform all experiments using a transformer model <cit.> and average the results over 3 initialization runs. The embedding and hidden layer dimensions are 1000 and 1820; we use a 6-layer encoder with 4 attention heads, and the dropout is 0.2. To find the parameters of the learning curve models, we consider unweighted and weighted curve fitting, using gradient descent and non-linear least squares optimizers. The Adam algorithm <cit.> was used as the optimizer with a learning rate of 1e-5, and ReLU was used as the activation function. The cross-entropy objective was used for all classification benchmarks, and we select the models using loss values. Finally, we chose a batch size of 8 and 200 epochs. Evaluation. We use the aforementioned functions: Exp, Inverse, Pow4 and Ensemble for fitting the empirical learning curve. For each dataset, we select training set sizes ranging from 1% to 10% of the data at intervals of 1%. The learning curve testsets were created with the data splits in the range [55%, 100%] at 5% intervals by training the classifier and obtaining the testset[Here, we make the distinction between the testset for the learning curve and the original testset split.] performance for each corresponding data split. Therefore, we collect the accuracies against different sample sizes and report the mean absolute error (MAE) as the evaluation metric for learning curve modeling. § RESULTS AND ANALYSIS We present results of the ensemble method for learning curve modeling on the NLU benchmarks. §.§ Main Results Figure <ref> demonstrates that by using only 10% of the data for learning curve modeling, Ensemble is able to effectively predict model performance within a 0.9% margin of the actual model performance. Moreover, we observe the same trend across all four benchmarks consisting of different training set sizes (i.e.
ranging from 25K to 250K) and varying numbers of classification classes (i.e. ranging from 2 to 14); see the appendix <ref> for the remaining figures. Our result shows that the proposed approach is not confined to particular classification types or sample sizes. Table <ref> shows the saturated points of the learning curve when the performance improvement is less than a threshold α=0.2 – we found that the predicted performance with only 19% of the data is within 2.44 accuracy points of the trained model performance for IMDb. Another key observation is that the size (%) needed to predict a low L1 distance increases as the number of classification classes goes up, which indicates that task difficulty does influence the ease of extrapolation. An example is that AG News requires up to 51% to predict a low L1 distance. Next, we perform further ablation studies to investigate the effect of sample size, the types of non-linear functions used, and the effect of data weighting. §.§ Ablation Study Effect of sample size. In Figure <ref>, we study the correlation between sample sizes and the mean absolute error between the learning curve model and the empirical model performance trend. Surprisingly, we discovered that having more samples does not necessarily help with modeling a better learning curve[We showed this result in the Appendix <ref>.], and that only 10% of the data to build the (D_k, E_acc) data points is sufficient to obtain rather small errors across all four benchmarks. Types of learning curve functions. We are also interested in seeing how each of the non-linear learning curve functions fares against the others in simpler settings. To this end, we used up to 10% of the data to model the learning curves and obtained their respective mean absolute error values. In Figure <ref>, we present this comparison, showing that on IMDb and SST2 the Ensemble function consistently fits the empirical data best. We observed a similar trend on the DBpedia benchmark, with the exception of AG NEWS. We placed the plot for AG NEWS in the appendix <ref>. Influence of data weighting. Previous work <cit.> has found that not all data points are equally important in terms of curve fitting. In fact, data points at a later phase corresponding to more samples are to be given more weight compared to earlier points. We thus investigate this phenomenon in the context of our benchmarks, and we observed this to be true anecdotally. The detailed result can be found in Appendix <ref>. The reason for this is that the more data samples there are, the more closely they resemble the entire training set, and this makes their signals a better estimation of a point on the actual learning curve. Another perspective is that the more data samples are used, the less the effect of random sampling on the performance, which affects model performance in extremely low-resource scenarios. § CONCLUSIONS AND FUTURE WORKS In this work, we investigated techniques for estimating the amount of training data needed to achieve a target performance in four natural language understanding benchmarks. We demonstrated that our approach allows for accurate prediction of model performance using only a small portion of the data, which can be useful in scenarios with limited resources. Nevertheless, we also recognize the limitations of our current study. For instance, we did not explore sampling techniques other than random sampling, while recent works <cit.> have shown promising directions in data sampling that outperform random sampling.
Another interesting direction is to explore the model architecture's influence on generalizability, and thus the learning curve, which we leave for future work. § LIMITATIONS While the effectiveness of the proposed learning curve modeling in settings with limited data has been demonstrated, it is uncertain if this success can be replicated in more complex natural language understanding tasks, such as question answering or tasks that involve a large amount of data. Furthermore, it is assumed that all data samples have the same impact on the model's performance. However, the actual performance of the model may vary based on the method used to select the data or the specific set of tasks being performed, e.g., coreset selection. Similarly, the quality of the labels used for the data can also play a significant role in predicting the performance of the model. Overall, we plan to further investigate these questions and explore them in future studies. § ETHICS STATEMENT We address the efficiency of data annotation by investigating learning curves to estimate the necessary training sample size to reach a desired model performance. However, it is imperative to take into consideration the potential biases that may exist in the model predictions when utilizing a reduced amount of labeled data in the system construction process. Furthermore, when addressing complex tasks such as machine translation and text summarization, it is essential to guarantee the factuality of output generated by the system trained with the suggested data sample size. § DETAILED RESULTS §.§ Predicting the Required Data Size Table <ref> presents the results of required data size prediction using thresholds α=0.1 and α=0.3. §.§ Data Weighting We apply data weighting on three extrapolating functions using gradient descent methods in <ref>. §.§ Learning curve on 10% data sizes of AG NEWS Figure <ref> shows the learning curves fitted on 10% of the AG NEWS dataset. §.§ Learning curve on 10% data sizes of DBpedia Figure <ref> shows the learning curves fitted on 10% of the DBpedia dataset. §.§ Effect of Sample Sizes for Learning Curve Fitting We examined the relationship between sample sizes and the difference in mean absolute error (MAE) between the predicted and actual performance trends across four benchmarks. Table <ref> shows MAEs when fitting Ensemble on 50% and 10% of the data, respectively. We observed that having more samples does not necessarily lead to a better model and that using only 10% resulted in smaller MAEs on all four benchmarks. Therefore, we select 10% of data points for learning curve modeling.
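For completeness, the fitting procedure used throughout can be reproduced with standard non-linear least squares. The sketch below is our own illustration: the functional forms are common choices from the learning-curve literature and the data are synthetic, so the exact parameterizations may differ from those in the paper's Table of extrapolating functions.

```python
# Illustrative sketch (not the paper's code): fit standard learning-curve forms with
# non-linear least squares and score them by mean absolute error on larger splits.
import numpy as np
from scipy.optimize import curve_fit

def inverse_power(x, a, b, c):      # Inverse power law: acc = a - b * x^(-c)
    return a - b * np.power(x, -c)

def pow4(x, a, b, c, d):            # Four-parameter power law: acc = a - (b*x + c)^(-d)
    return a - np.power(b * x + c, -d)

def fit_and_score(model, sizes, accs, test_sizes, test_accs, p0):
    params, _ = curve_fit(model, sizes, accs, p0=p0, maxfev=20000)
    mae = np.mean(np.abs(model(test_sizes, *params) - test_accs))
    return mae, params

# Synthetic accuracies: fit on 1%-10% subsets, evaluate on 55%-100% subsets.
sizes      = np.linspace(0.01, 0.10, 10)
accs       = 0.92 - 0.05 * sizes ** -0.3
test_sizes = np.linspace(0.55, 1.00, 10)
test_accs  = 0.92 - 0.05 * test_sizes ** -0.3

mae, params = fit_and_score(inverse_power, sizes, accs, test_sizes, test_accs, p0=[0.9, 0.1, 0.5])
print(f"Inverse power law: MAE = {mae:.4f}, params = {params}")
```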